CN114363705A - Augmented reality equipment and interaction enhancement method

Info

Publication number
CN114363705A
Authority
CN
China
Prior art keywords: display, augmented reality, picture, image, electronic watermark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110535801.8A
Other languages
Chinese (zh)
Inventor
王大勇
王卫明
郝冬宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202110535801.8A
Publication of CN114363705A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an augmented reality device and an interaction enhancement method. After a user inputs a control instruction for displaying a fusion picture, the device can acquire a target image in the fusion picture, namely the image currently displayed by a display device. Electronic watermark detection is then performed on the target image; when the target image contains electronic watermark information, identification object information is generated directly from the electronic watermark information and displayed in a UI (user interface). Because part of the identification object information can be acquired from the electronic watermark information, the interaction enhancement method can mitigate the influence of the display device's picture brightness, image quality processing, and the like on the identification process, thereby improving identification accuracy.

Description

Augmented reality equipment and interaction enhancement method
Technical Field
The application relates to the technical field of augmented reality devices, and in particular to an augmented reality device and an interaction enhancement method.
Background
Augmented Reality (AR) technology is a technology that merges a virtual picture with a real scene. Generated virtual objects, such as text, images, three-dimensional models, and video pictures, can be superimposed on real scene pictures, giving users a viewing experience that combines virtual and real pictures. An augmented reality device is an intelligent display device applying augmented reality technology; in use, it can acquire real scene pictures and virtual pictures in real time, then fuse the virtual pictures with the real scene pictures to present a final fused picture to the user.
When the augmented reality device presents the final fusion picture, some auxiliary information pictures can be displayed in the fusion picture to label the real scene picture and/or the virtual picture, making the pictures easier for the user to view and recognize and improving the user experience. An auxiliary information picture may be a combination of a pattern and text. For example, when the fusion picture includes an oil painting, the name, author, and content of the work may be labeled with a speech-bubble pattern and text inside the bubble. The pattern content of the auxiliary information picture can be defined by a uniform UI interface, and its text content can be obtained by an Artificial Intelligence (AI) picture classification algorithm.
The AI picture classification algorithm may perform image recognition on the image corresponding to the fusion picture, recognize the key elements it contains, and generate the text content of the auxiliary information. However, when the fusion picture includes a picture displayed on the screen of an electronic device such as a television or an advertising player, the brightness and image quality processing applied to that screen make the displayed content differ greatly from the scenes on which the AI picture classification algorithm was trained. As a result, the algorithm's accuracy in identifying fusion pictures that contain an electronic device screen is reduced.
Disclosure of Invention
The application provides an augmented reality device and an interaction enhancement method, aiming to solve the problem that traditional augmented reality devices have low identification accuracy.
In one aspect, the present application provides an augmented reality device, comprising: a display, an image acquisition device, a communicator, and a controller. The display is configured to present a user interface and to display a fusion picture including a real scene picture and a virtual object picture. The image acquisition device is configured to acquire the real scene picture. The communicator is configured to connect to a display device. The controller is configured to perform the following program steps:
receiving a control instruction which is input by a user and used for displaying an augmented reality picture;
responding to the control instruction, and acquiring a target image in the augmented reality picture, wherein the target image is an image displayed on a screen of the display equipment;
if the target image contains electronic watermark information, generating identification object information according to the electronic watermark information;
and controlling the display to display the identification object information.
On the other hand, the application also provides an interaction enhancement method applied to the above augmented reality device, where the augmented reality device comprises a display, an image acquisition device, a communicator, and a controller, and is connected to the display device through the communicator. The interaction enhancement method comprises the following steps (a code sketch of this flow follows the list):
receiving a control instruction which is input by a user and used for displaying an augmented reality picture, wherein the augmented reality picture comprises a real scene picture and a virtual object picture;
responding to the control instruction, and acquiring a target image in the augmented reality picture, wherein the target image is an image displayed on a screen of the display equipment;
if the target image contains electronic watermark information, generating identification object information according to the electronic watermark information;
and controlling the display to display the identification object information.
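For illustration only, the following Python sketch mirrors the four claimed steps. Every name in it (acquire_target_image, detect_watermark, classify_image, show_info, on_display_ar_picture) is a hypothetical stand-in; the patent does not specify an API.

```python
# Hypothetical sketch of the claimed interaction enhancement flow.
# All names are illustrative assumptions, not part of the patent.
from typing import Optional

def acquire_target_image() -> bytes:
    """Stub: request the image currently shown by the display device."""
    raise NotImplementedError

def detect_watermark(image: bytes) -> Optional[str]:
    """Stub: return decoded electronic watermark text, or None."""
    raise NotImplementedError

def classify_image(image: bytes) -> str:
    """Stub: fallback AI picture classification of the target image."""
    raise NotImplementedError

def show_info(info: str) -> None:
    """Stub: render the identification object information in the UI."""
    print(f"[UI] {info}")

def on_display_ar_picture() -> None:
    # Step 1: this handler runs once the user's control instruction
    # for displaying the augmented reality picture has been received.
    target_image = acquire_target_image()          # Step 2
    watermark = detect_watermark(target_image)     # Step 3
    if watermark is not None:
        info = watermark                           # watermark takes priority
    else:
        info = classify_image(target_image)        # classification fallback
    show_info(info)                                # Step 4
```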
According to the above technical solution, the augmented reality device and the interaction enhancement method can acquire the target image in the fusion picture, namely the image currently displayed by the display device, after the user inputs a control instruction for displaying the fusion picture. Electronic watermark detection is then performed on the target image; when the target image contains electronic watermark information, identification object information is generated directly from the electronic watermark information and displayed in the UI (user interface). Because part of the identification object information can be acquired from the electronic watermark information, the method mitigates the influence of the display device's picture brightness, image quality processing, and the like on the identification process, improving identification accuracy.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below. It will be apparent to those skilled in the art that other drawings can be derived from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a display system including a virtual reality device in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an augmented reality device in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an augmented reality device with a lens in an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an effect of presenting an augmented reality picture through a projection device and a lens according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a VR scene global interface in an embodiment of the application;
FIG. 6 is a schematic diagram of a recommended content area of a global interface in an embodiment of the present application;
FIG. 7 is a schematic diagram of an application shortcut operation entry area of a global interface in an embodiment of the present application;
FIG. 8 is a schematic diagram of a floating item in the global interface in an embodiment of the present application;
FIG. 9 is a diagram of an AR frame in an embodiment of the present application;
fig. 10 is a schematic diagram illustrating an effect of displaying identification object information in an AR screen in the embodiment of the present application;
FIG. 11 is a flowchart illustrating an interaction enhancing method according to an embodiment of the present application;
fig. 12 is a schematic flowchart illustrating a process of detecting electronic watermark information for a target image according to an embodiment of the present application;
fig. 13 is a schematic flowchart illustrating a process of displaying an identification object according to whether electronic watermark information is included in the embodiment of the present application;
fig. 14 is a schematic view illustrating superimposition display of identification object information in an embodiment of the present application;
FIG. 15 is a schematic diagram illustrating a process of detecting a gradient of a brightness variation in an embodiment of the present application;
fig. 16 is a schematic flowchart of querying electronic watermark information in an embodiment of the present application;
FIG. 17 is a schematic view of a process for obtaining identification object information in an embodiment of the present application;
fig. 18 is a flowchart illustrating displaying identification object information in the embodiment of the present application;
fig. 19 is a schematic flowchart of generating electronic watermark information in the embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, but not all the embodiments.
All other embodiments, which can be derived by a person skilled in the art from the exemplary embodiments shown in the present application without inventive effort, shall fall within the scope of protection of the present application. Moreover, while the disclosure herein has been presented in terms of one or more exemplary examples, it is to be understood that each aspect of the disclosure can also be utilized independently and separately from the other aspects.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims, and the drawings of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can, for example, be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment," or the like, throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
In the embodiments of the present application, an Augmented Reality (AR) device is a display device that can be worn on the head of a user and provides an immersive experience. The AR device can acquire real scene image pictures in real time and fuse virtual object images into the real scene pictures to form an augmented reality picture effect. AR devices include, but are not limited to, AR head-mounted devices, AR glasses, wearable AR game consoles, and the like. The augmented reality device may be a standalone device, such as an AR headset, or a combination of multiple devices that together provide an augmented reality display function. For example, a virtual reality (VR) device with a camera attached can be combined to form an augmented reality device.
The augmented reality device described in the embodiments of the present application takes an AR headset as an example to illustrate the technical solution; it should be understood that the provided technical solution can also be applied to other types of such devices. The augmented reality device 500 may operate independently, or access other intelligent display devices as an external device, as shown in fig. 1. The display device can be a smart television, a computer, a tablet computer, a server, and the like.
The augmented reality device 500 may be worn on the head of the user and, once worn, acquires real scene pictures in real time. The acquired real scene picture is displayed in real time, providing close-range images to the user's two eyes and bringing an immersive experience. To present a specific display, the augmented reality device 500 may include a number of components for displaying and for wearing. Taking an AR headset as an example, the augmented reality device 500 includes, but is not limited to, a housing, position fixing members, an image acquisition device, an optical system, a display component, a posture detection circuit, an interface circuit, and other components. In practical application, the image acquisition device may be fixed on the housing or partially embedded in it, and is used for capturing the real scene picture directly in front of the user. The optical system, the display component, the posture detection circuit, and the interface circuit can be packaged in the housing for presenting a specific display picture; the position fixing members are provided on both sides of the housing for securing the entire AR device to the head of the user.
The image acquisition device records video of the real scene directly in front of the user, in place of the user's two eyes. Thus, the image acquisition device may be one or more cameras adapted to the visual range of the user's two eyes. For example, as shown in fig. 2, the image acquisition device may be composed of two cameras disposed on the front housing. In use, the two cameras can shoot real scene pictures in real time and send them to the display component as a data stream for display, where the data stream formed by the real scene picture comprises a plurality of continuous frame images.
While the display component displays the real scene picture, the augmented reality device 500 may also add a virtual object to the real scene picture to finally fuse to form the augmented reality picture. To this end, in some embodiments of the present application, the augmented reality device 500 may further include a data processing system such as a controller. After the image acquisition device acquires the real scene picture, the real scene picture can be sent to the controller, the controller is triggered to call the virtual object, and the virtual object picture is obtained through rendering. And then fusing the virtual object picture and the real scene picture to form a final augmented reality picture. And finally, sending the augmented reality picture to a display component for displaying.
The virtual objects added to the real scene picture by the controller include but are not limited to characters, images, three-dimensional models, video pictures and the like. Since the augmented reality device 500 can fuse the real scene picture and the virtual object picture, in order to obtain a better fusion effect, the virtual object picture added to the real scene picture should also have a stereoscopic effect. For example, when a 3D model is added to a real scene picture, as the head motion of the user changes, the view angle of the picture presented by the 3D model can also be changed in real time.
To this end, in some embodiments, a gesture detection circuit may also be built into augmented reality device 500. The gesture detection circuit may be composed of a plurality of pose sensors including, but not limited to, a gravitational acceleration sensor, a gyroscope, and the like. In use, the posture detection circuit can detect head posture data of a user in use in real time and send the head posture data to the controller, so that the controller can render added virtual objects according to the head posture data to form virtual object pictures at different viewing angles.
For example, the augmented reality device 500 may build a rendered scene based on the Unity 3D engine and, in use, load a virtual object model into the rendered scene. The virtual rendering scene also comprises a left display camera, a right display camera and other virtual cameras used for presenting the final rendering effect. The augmented reality device 500 may capture images of virtual objects in a rendered scene by a left display camera and a right display camera to obtain a left-eye picture and a right-eye picture, respectively, and output the left-eye picture and the right-eye picture to a display component for display. When the head of the user rotates, the gesture detection circuit can detect corresponding gesture data and send the gesture data to the controller. And the controller adjusts the shooting angles of the left display camera and the right display camera in the rendering scene according to the attitude data, so as to obtain virtual object pictures at different viewing angles.
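As a rough illustration of this pose-driven rendering (the patent names the Unity 3D engine, whose scripts are written in C#; the following is only a language-neutral Python sketch, with all names and the interpupillary distance assumed):

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw: float    # radians, reported by the posture detection circuit
    pitch: float
    roll: float

@dataclass
class DisplayCamera:
    x_offset: float          # signed half of the interpupillary distance
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

def update_display_cameras(pose: HeadPose,
                           left: DisplayCamera,
                           right: DisplayCamera) -> None:
    """Copy the latest head pose onto both virtual display cameras so the
    rendered virtual-object pictures follow the user's viewing angle."""
    for cam in (left, right):
        cam.yaw, cam.pitch, cam.roll = pose.yaw, pose.pitch, pose.roll

# Usage sketch: a left/right camera pair separated by an assumed 63 mm IPD.
left_cam = DisplayCamera(x_offset=-0.0315)
right_cam = DisplayCamera(x_offset=+0.0315)
update_display_cameras(HeadPose(yaw=0.2, pitch=-0.05, roll=0.0),
                       left_cam, right_cam)
```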
To display the final augmented reality picture, the display assembly may include a display and the data transmission circuitry associated with the display process. The number of displays in the display assembly differs for augmented reality devices with different display modes. That is, in some embodiments, the augmented reality device 500 includes two displays that respectively present AR pictures for the user's two eyes. For example, for an AR headset, the presented AR pictures include a left-eye picture and a right-eye picture, and the display assembly may consist of a left display and a right display. When displaying the AR picture, the left display shows the left-eye picture, obtained by superimposing and fusing the real scene picture shot by the left camera with the virtual object picture shot by the left display camera in the rendered scene. Similarly, the right display shows the right-eye picture, obtained by superimposing and fusing the real scene picture shot by the right camera with the virtual object picture shot by the right display camera in the rendered scene.
It should be noted that, as shown in fig. 3, for part of the augmented reality device 500, the display may also set the area in front of the two eyes of the user to be in a transparent form instead of displaying the real scene picture shot by the image capture device, so that the user can directly view the real scene picture, and then add the virtual object picture in front of the two eyes of the user by using the transparent display or a projection manner, so as to form the augmented reality picture.
In some embodiments, as shown in fig. 4, the augmented reality device 500 may further include a projection device and a lens. The projection device is used for projecting virtual object pictures; the lens can be formed by a plurality of polarizers into a transflective lens. The formed semi-transparent semi-reflective lens can reflect or transmit light emitted from different directions, so that on one hand, a real scene picture in front of a user can enter the visual field of the user through the lens; on the other hand, the light emitted by the projection device can be reflected, so that the virtual object picture projected by the projection device can enter the visual field of the user through the reflection action on the inner side of the lens.
In order to obtain a better picture fusion effect, the thickness of the lens of the augmented reality device 500 gradually decreases from top to bottom, so that the reflected virtual object picture light can be parallel to the user's eye level, mitigating deformation of the virtual object picture during reflection. In addition, an augmented reality device 500 that presents the augmented reality picture through a projection device and a lens also includes a camera. This camera captures the real scene picture, but the captured picture is not displayed; it is used only in the fusion calculation for virtual-object occlusion display.
In some embodiments, the augmented reality device 500 may also include only one display. For example, augmented reality device 500 may consist of one wide-screen display. The wide screen display is divided into two parts, namely a left area and a right area, wherein the left area is used for displaying a left-eye picture, and the right area is used for displaying a right-eye picture.
It should be noted that the real scene picture may be obtained by a camera in the augmented reality device 500 shooting the current user's scene, or it may be obtained in other ways. For example, the augmented reality device 500 may be connected through an interface circuit to an image acquisition device, or to other devices that have image acquisition devices, and obtain the real scene picture through the connected devices.
Therefore, in some embodiments, the augmented reality device 500 may access the display device 200 and construct a network-based display system with the server 400. Data interaction may be performed among the augmented reality device 500, the display device 200, and the server 400 in real time; for example, the display device 200 may obtain media data from the server 400, play it, and transmit specific picture content to the augmented reality device 500 for display. That is, the real scene picture is broad in meaning: it may refer to a real-time scene image captured by the image acquisition device of the augmented reality device 500, or to a media asset picture sent to the augmented reality device 500 by the display device 200, the server 400, and the like.
The display device 200 may be a liquid crystal display, an OLED display, a projection display device, among others. The specific display device type, size, resolution, etc. are not limited. Those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired. The display apparatus 200 may provide a broadcast receiving television function and may additionally provide an intelligent network television function of a computer support function, including but not limited to a network television, an intelligent television, an Internet Protocol Television (IPTV), and the like.
The display device 200 and the augmented reality device 500 also perform data communication with the server 400 through multiple communication methods. The display device 200 and the augmented reality device 500 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. Illustratively, the display device 200 can receive software program updates, access a remotely stored digital media library, and exchange Electronic Program Guide (EPG) information by sending and receiving data. The server 400 may be one cluster or a plurality of clusters, and may include one or more types of servers. The server 400 also provides other web service contents, such as video on demand and advertisement services.
In the course of data interaction, the user may operate the display apparatus 200 through the mobile terminal 300 and the remote controller 100. The mobile terminal 300 and the remote controller 100 may communicate with the display device 200 in a direct wireless connection manner or in an indirect connection manner. That is, in some embodiments, the mobile terminal 300 and the remote controller 100 may communicate with the display device 200 through a direct connection manner such as bluetooth, infrared, etc. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may directly transmit the control command data to the display device 200 through bluetooth or infrared.
In other embodiments, the mobile terminal 300 and the remote controller 100 may also access the same wireless network with the display apparatus 200 through a wireless router to establish indirect connection communication with the display apparatus 200 through the wireless network. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may transmit the control command data to the wireless router first, and then forward the control command data to the display device 200 through the wireless router.
In some embodiments, the user may also interact with the augmented reality device 500 using the mobile terminal 300 and the remote control 100. The connection mode established by the interaction process can be a direct connection mode and an indirect connection mode. For example, the mobile terminal 300 and the remote controller 100 may be used as handles in an augmented reality scene to implement functions such as somatosensory interaction. Also for example, the interaction instructions may be forwarded to the augmented reality device 500 via a wireless router or the display device 200.
In addition to the media asset data described above, the interface circuitry of augmented reality device 500 may be used to communicate interactive data. In practical applications, the augmented reality device 500 may further be connected to other display devices or peripherals through an interface circuit, so as to implement more complex functions through data interaction with the connected devices. For example, the augmented reality device 500 may be further connected to a handle through an interface circuit, and the handle may be operated by being held by a user so as to perform a related operation in the user interface.
The user interface can be presented as a plurality of different types of UI layouts according to user operation. For example, the user interface may include a global UI, as shown in fig. 5, after the AR/VR terminal is started, the global UI may be displayed in a display screen of the AR/VR terminal or a display of the display device. The global UI may include a recommended content area 1, a business class extension area 2, an application shortcut operation entry area 3, and a suspended matter area 4.
The recommended content area 1 is used for configuring TAB columns of different classifications. Media assets, special topics, and the like can be selected and configured in a column. The media assets can include services with content such as 2D movies, education courses, tourism, 3D, 360-degree panorama, live broadcast, 4K movies, program applications, and games, and a column can select different template styles and support simultaneous recommendation and arrangement of media assets and titles, as shown in fig. 6.
In some embodiments, a status bar may be further disposed at the top of the recommended content area 1, and a plurality of display controls may be disposed in the status bar, including common options such as time, network connection status, and power amount. The content included in the status bar may be customized by the user, for example, content such as weather, user's head portrait, etc. may be added to the status bar. The content contained in the status bar may be selected by the user to perform the corresponding function. For example, when the user clicks on the time option, the augmented reality device 500 may display a time setting window in the current interface or jump to a calendar interface. When the user clicks on the network connection status option, the augmented reality device 500 may display a WiFi list on the current interface or jump to a network setup interface.
The content displayed in the status bar may be presented in different content forms according to the setting status of a specific item. For example, the time control may be directly displayed as specific time text information, and display different text at different times; the power control may be displayed as different battery pattern styles according to the current power remaining condition of the augmented reality device 500.
The status bar is used to enable the user to perform common control operations, enabling rapid setup of the augmented reality device 500. Since the setup program for the augmented reality device 500 includes many items, all commonly used setup options are typically not displayed in their entirety in the status bar. To this end, in some embodiments, an expansion option may also be provided in the status bar. After the expansion option is selected, an expansion window may be presented in the current interface, and a plurality of setting options may be further set in the expansion window for implementing other functions of the augmented reality device 500.
For example, in some embodiments, after the expansion option is selected, a "quick center" option may be set in the expansion window. After the user clicks the shortcut center option, the augmented reality device 500 may display a shortcut center window. The shortcut center window may include "screen capture", "screen recording", and "screen projection" options for waking up corresponding functions, respectively.
The service class extension area 2 supports configuring extension classes of different categories. If a new service type exists, an independent TAB can be configured for it to display the corresponding page content. The expanded classifications in the service class extension area 2 can also be reordered, and offline service operations can be performed on them. In some embodiments, the service class extension area 2 may include the content: movie & TV, education, tourism, application, my. In some embodiments, the service class extension area 2 is configured to expose large service-category TABs and supports configuring more categories, as shown in fig. 5.
The application shortcut operation entry area 3 can specify that pre-installed applications are displayed in front for operation recommendation, and supports configuring a special icon style to replace the default icon; multiple pre-installed applications can be specified. In some embodiments, the application shortcut operation entry area 3 further includes left and right movement controls for moving the option target and selecting different icons, as shown in fig. 7.
The floating item region 4 may be configured above the left or right oblique side of the fixed region, and may be configured as an alternative character or as a jump link. For example, after receiving a confirmation operation, the floating item jumps to a certain application or displays a designated function page, as shown in fig. 8. In some embodiments, the floating item may not be configured with a jump link and is used solely for image presentation.
In some embodiments, the global UI further comprises a status bar at the top for displaying time, network connection status, power status, and more shortcut entries. When the handle of the AR/VR terminal is used, i.e. an icon is selected by the handheld controller, the icon displays a text prompt with left and right expansion, and the selected icon is stretched and expanded left or right according to its position.
For example, after the search icon is selected, the search icon displays the characters including "search" and the original icon, and after the icon or the characters are further clicked, the search icon jumps to a search page; for another example, clicking the favorite icon jumps to the favorite TAB, clicking the history icon default location display history page, clicking the search icon jumps to the global search page, clicking the message icon jumps to the message page.
In some embodiments, the interaction may be performed through a peripheral; for example, a handle of the AR/VR terminal may operate the user interface of the AR/VR terminal. The handle includes a return button; a home page key, where a long press realizes the reset function; volume up and down buttons; and a touch area, which can realize clicking, sliding, pressing and holding a focus, and dragging.
In some embodiments, an "AR/VR" mode switching option may be further provided on the user interface, and the augmented reality device 500 may switch between the VR mode and the AR mode when the user clicks the switching option. For the VR mode, the augmented reality device 500 does not start an image capture device, and renders the specified media asset data only by rendering a scene to form a virtual reality picture; for the AR mode, the augmented reality device 500 needs to start an image capture device to capture images in real time to obtain real scene images, and adds virtual objects in the rendered scene to output virtual object images, so as to form AR images after fusion.
In some embodiments, in order to facilitate the user to view the AR picture, the AR device may perform image recognition on the AR picture, label the recognized object, and finally display the object in the fused AR picture. For example, the AR device may identify the device type in the real scene according to the appearance shape, such as identifying the flat panel display device as a tv, and track and mark the tv image through the anchor point to indicate to the user that the image is a tv.
And when the object identification result is displayed, the labeled content can be further enriched according to the identification result. For example, after marking a television in a real scene picture, the AR device may further identify information such as a specific model and device parameters of the television. All of these pieces of information can be displayed on the AR screen together with the contents such as the television name. For convenience of description, in the embodiments of the present application, the identified device type and the additional description content are both referred to as identification object information.
The identification object information may be displayed in the AR screen in a specific UI layout form. The final display effect of the identification object information is different under different AR devices or different use scenes. For example, for a shopping scene, the final display effect presented should describe the object features as detailed as possible, and the identification object information may be composed by a guide line and a parameter list. For the navigation scene, the final display effect presented should not obscure the real scene viewed by the user as much as possible, and the identification object information can be presented in the form of points and character names so that the user can accurately distinguish the real scene.
As shown in fig. 9, when the real scene picture includes a display device such as a television, the image shown on the screen of the display device also appears in the merged AR picture, forming a "picture-in-picture" effect. The AR device may also perform image recognition on the image presented in the display device to determine identification object information. For example, when an oil painting work is displayed on the display device, the AR device may identify the displayed work and its specific content, for example, that the scene of the painting is a banquet room. The information is then labeled with an anchor point and a bubble: the anchor point is located at a point in the painting displayed on the screen, the bubble is located in the upper-right corner area of the anchor point, and the bubble contains the text "oil painting banquet room", completing the labeling of the content displayed on the display device, as shown in fig. 10. For convenience of description, in the embodiments of the present application, the image presented on the screen of the display device is referred to as the target image.
In the process of object identification, the AR device may first construct an image identification model based on an AI image classification algorithm. And then acquiring the fused picture image, and inputting the acquired image into an image recognition model. The image recognition model can calculate the classification probability of the input image according to an AI image classification algorithm, namely, an object in the image can be recognized.
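A minimal sketch of such an image recognition model, assuming an off-the-shelf pretrained classifier (ResNet-18 via torchvision is this sketch's choice; the patent does not name a model or framework):

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

def classify(image_path: str) -> tuple[str, float]:
    """Return the most probable class label and its probability for the
    fused picture image (or, later in the disclosure, the target image)."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    idx = int(probs.argmax())
    return weights.meta["categories"][idx], float(probs[idx])
```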
Since the image recognition model is a classification model obtained by training a large number of sample images, the accuracy of the classification result of the image recognition model is affected by the sample images. When the scenes of the image to be classified and the sample image are similar, the classification result is more accurate; and when the scene difference between the image to be classified and the sample image is large, the classification result is inaccurate.
For an AR picture that includes the target image, the user's posture means the target image may finally appear in the fusion picture in many uncertain orientations, such as front view, oblique view, tilted, or flipped. Such AR picture images obviously differ greatly from the sample images of the image recognition model, so after an AR picture image is input into the image recognition model, the accuracy of the obtained recognition result is greatly reduced.
In order to improve the accuracy of object recognition, some embodiments of the present application provide an augmented reality device, comprising: a display, an image acquisition device, a communicator, and a controller. The display is configured to present a user interface and to display a fusion picture including a real scene picture and a virtual object picture. The image acquisition device is configured to acquire the real scene picture. The communicator is configured to connect to a display device.
As shown in fig. 11, the controller of the augmented reality device may be configured to execute an interactive augmentation method for improving the accuracy of object recognition, which specifically includes the following:
First, a control instruction, input by a user, for displaying an augmented reality picture is received. During use, the controller of the augmented reality device 500 may receive various control instructions input by the user. Each control instruction corresponds to a control function, and some of them can be used for displaying an augmented reality picture. The user can input such a control instruction in different use environments; for example, it can be input by opening an application program with an AR function, or by opening a virtual model file in the augmented reality device 500.
The specific interaction action may be presented in different action forms according to different interaction modes supported by the augmented reality device 500. For example, for an AR device operated by an external handle, a user may move a focus cursor in an application program interface through handle control, and click an "OK/OK" key on the handle when the focus cursor moves to an AR application position, so as to start an AR application. At this time, the controller may receive a control instruction for displaying the augmented reality picture, which is input by the user.
For part of the augmented reality device 500, an intelligent voice system may be further built in or externally connected, so that the user may also input a control instruction for displaying an augmented reality picture through the intelligent voice system. For example, the user may control the augmented reality device 500 to start an AR application or open an AR resource file by inputting voice contents such as "open x AR application", "i want to see AR", and at this time, the controller may also receive a control instruction for displaying an augmented reality screen.
After acquiring the control instruction for displaying the AR picture, the AR device may extract the target image in the augmented reality picture in response to the control instruction. Since the target image is an image displayed on the screen of the display device 200, the AR device may acquire the target image from the display device 200.
For this reason, after acquiring a control instruction for displaying an AR screen, the augmented reality device 500 may transmit an image acquisition request to the display device 200 through the communicator. After receiving the image acquisition request, the display device 200 may send the currently displayed image, that is, the target image, to the augmented reality device 500, so that the augmented reality device 500 may acquire the target image.
In order to transmit the request instruction and the target image, the display device 200 needs to establish a communication connection relationship with the augmented reality device 500 in a specific form. The communication connection relationship may be based on wired communication or wireless communication, for example, for wired communication, the communicator of the augmented reality device 500 may be a USB interface, and the augmented reality device 500 may connect to the display device 200 through a USB data line and establish a communication connection relationship based on a USB transmission protocol. For wireless communication, the communicator can be internally provided with communication function modules such as Bluetooth, infrared and WiFi networks, and establishes a wireless connection relationship through a corresponding transmission protocol.
In the process of acquiring the target image, the request instruction and the target image data may be based on the same communication connection mode, or may be based on different communication connection modes. For example, the augmented reality device 500 may transmit an image acquisition request to the display device 200 through a bluetooth connection, but since the bluetooth connection has low transmission efficiency, the display device 200 transmits a target image to the augmented reality device 500 through a WiFi network after receiving the image acquisition request.
In some embodiments, when the image displayed by the display apparatus 200 is not a local image but a network asset, the display apparatus 200 may also not directly send a source file of the target image to the augmented reality apparatus 500, but after receiving an image acquisition request of the augmented reality apparatus 500, send a Uniform Resource Locator (URL) address corresponding to the target image to the augmented reality apparatus 500, and the augmented reality apparatus 500 acquires the target image by accessing the URL address.
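A sketch of this acquisition path, assuming an HTTP-style exchange (the endpoint, response shapes, and field names are invented for illustration; the patent leaves the protocol open):

```python
import requests

def acquire_target_image(display_host: str) -> bytes:
    """Ask the display device for its current image; it may answer with
    the image bytes inline or, for a network asset, with a URL to fetch."""
    resp = requests.get(f"http://{display_host}/current-image", timeout=5)
    resp.raise_for_status()
    if resp.headers.get("Content-Type", "").startswith("image/"):
        return resp.content                     # source file sent directly
    url = resp.json()["url"]                    # network asset: URL reply
    media = requests.get(url, timeout=10)
    media.raise_for_status()
    return media.content
```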
It should be noted that, when acquiring the target image, the communication connection relationship between the augmented reality device 500 and the display device 200 may also be detected. When data interaction can be performed between the augmented reality device 500 and the display device 200, an image acquisition request can be sent to the display device 200 through the communicator; when data interaction cannot be performed between the augmented reality device 500 and the display device 200, connection interfaces may be displayed in the user interfaces of the display device 200 and the augmented reality device 500, respectively, for establishing a communication connection relationship.
In some embodiments, if the user does not control the augmented reality device 500 to establish the communication connection relationship with the display device 200 within the set time after the connection interface is displayed, the augmented reality device 500 may further perform an intercepting operation on the AR picture, and intercept content corresponding to the screen of the display device 200 in the AR picture, thereby obtaining the target image.
After acquiring the target image, the augmented reality device 500 may further detect whether the target image includes electronic watermark information. Electronic watermark information can be embedded in the parts of image or sound data that are unimportant to the user's perception, that is, the so-called redundant parts, and mixed into the data as content-independent noise carrying signature or verification information.
For example, in order to reduce the influence of the electronic watermark on the user sensory experience, the frequency domain features may be added to the frequency domain map after the target image is converted into the frequency domain map. The frequency domain features can be combined to form text information according to a specific coding mode, and the text information is used for representing signature or checking information content. Since the frequency domain features are modifications made to the frequency domain map, the modified contents can be used as noise data, and therefore, the modified contents are not significantly reflected in the specific display contents of the target image, i.e. the viewing effect of the user is not affected.
In order to detect the electronic watermark information in the target image, a detection module for detecting the electronic watermark information can be further built in the AR device. After acquiring the target image, the detection module may perform watermark detection on the target image to acquire specific content in the electronic watermark. That is, as shown in fig. 12, in some embodiments, after the AR device acquires the target image, the detection module may convert the target image into a frequency domain image, and extract frequency domain feature data from the frequency domain image, so as to read the content in the electronic watermark according to the frequency domain feature data.
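A minimal sketch of one such frequency-domain scheme, assuming a 2-D FFT and a blind detector that compares pairs of mid-frequency magnitudes (the coefficient slots, pairing rule, and margin are illustrative choices, not the patent's):

```python
import numpy as np

# Assumed mid-frequency coefficient pairs; each pair carries one bit.
PAIRS = [((40, 40), (40, 41)), ((60, 40), (60, 41))]
MARGIN = 1.5  # assumed dominance margin between paired magnitudes

def _scale(f: np.ndarray, u: int, v: int, factor: float) -> None:
    n, m = f.shape
    f[u, v] *= factor
    f[-u % n, -v % m] *= factor  # keep conjugate symmetry -> real image

def embed_bits(gray: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write bits as magnitude orderings of paired FFT coefficients."""
    f = np.fft.fft2(gray.astype(float))
    for ((u1, v1), (u2, v2)), b in zip(PAIRS, bits):
        a = abs(f[u1, v1]) + 1e-9
        c = abs(f[u2, v2]) + 1e-9
        hi, lo = MARGIN * max(a, c), min(a, c) / MARGIN
        if b:
            _scale(f, u1, v1, hi / a)
            _scale(f, u2, v2, lo / c)
        else:
            _scale(f, u1, v1, lo / a)
            _scale(f, u2, v2, hi / c)
    return np.real(np.fft.ifft2(f))

def extract_bits(gray: np.ndarray) -> list[int]:
    """Blind detection: read each bit back from the magnitude ordering."""
    f = np.fft.fft2(gray.astype(float))
    return [1 if abs(f[u1, v1]) > abs(f[u2, v2]) else 0
            for ((u1, v1), (u2, v2)) in PAIRS]
```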
It should be noted that the electronic watermark information may also be mixed in the target image in other manners. For example, information related to the content of the target image may be edited as electronic watermark information in the file description of the target image file. Therefore, after acquiring the target image, the AR device may also read a file description corresponding to the target image to obtain the electronic watermark information.
In order to present the identification object information corresponding to the target image, as shown in fig. 13, the AR device may determine the identification object information to be displayed according to whether the target image contains electronic watermark information. If the target image contains electronic watermark information, the identification object information is generated from it. For example, when the target image includes electronic watermark information describing the current image content as "oil painting banquet room", the AR device may directly extract the text content of the electronic watermark information and display it as identification object information in the user interface.
As can be seen, for the target image containing the electronic watermark information, the AR device may directly obtain the object identification result by reading the electronic watermark information, and it is not necessary to perform the AI image classification algorithm on the AR picture, so that the influence of the display brightness of the display device 200 on the identification result may be avoided, and the accuracy of identifying the object information is improved.
And if the target image does not contain the electronic watermark information, an AI picture classification algorithm can be performed on the target image to obtain the identification object information. That is, the target image may be input to the image recognition model to obtain the classification probability of the target image through the recognition model, and the recognition object information may be obtained.
Compared with performing the AI picture classification algorithm on the AR picture image, this embodiment performs the algorithm on the target image. Because the target image is the original file of the picture shown on the display device 200, or a screenshot file with the same content, it is unaffected by display processes such as picture brightness and image quality processing, and image deformation such as deflection caused by the AR posture is also reduced. The form of the image input into the image recognition model is therefore similar to that of the sample images, which likewise improves the accuracy of the identification object information.
For example, when the display device 200 shows an oil painting of a banquet room on its screen, the acquired target image is the painting image itself; inputting it into the image recognition model means the AI picture classification algorithm is performed on the painting, so the influence of brightness on the recognition result can be alleviated. If the AI picture classification algorithm were instead performed on the AR picture image, the input to the image recognition model would include the current real scene, such as the television housing, tables, and chairs, in addition to the painting. The painting is obviously only a small part of the AR picture image, and interference from brightness, image quality processing, and other factors reduces the accuracy of the identification object information.
Through the above manner of acquiring the identification object information, once it is acquired, the augmented reality device 500 may control the display to display the identification object information, that is, add display content containing the identification object information to the user interface. For example, when the identification object information indicates that the current target image type is an oil painting and its content is a banquet room, the text "oil painting banquet room" may be displayed at the position corresponding to the screen of the display device 200 in the user interface, to indicate the specific content currently displayed by the display device 200 in the AR picture.
In addition, the recognition target information may be displayed in combination with the target information recognized from the entire AR screen image. For example, by performing item recognition on an AR picture image, it can be determined that the device type of the display device 200 included in the current picture is a television. In combination with the identification object information of the target image, prompt contents such as "television, oil painting on the picture being demonstrated" and the like can be presented finally to obtain more accurate identification object information, as shown in fig. 14.
According to the interaction enhancing method provided by the above embodiment, the AR device may obtain the recognition object information through the target image when the display content of the display device 200 is included in the AR screen. Obviously, when display content of the display apparatus 200 is not included in the AR picture, the augmented reality apparatus 500 cannot obtain recognition object information through the target image.
Therefore, as shown in fig. 15, in some embodiments, after the step of receiving the control instruction for displaying the augmented reality picture, the augmented reality device 500 may acquire a key frame image corresponding to the augmented reality picture, and detect a brightness change gradient of the key frame image, so as to determine whether the display device 200 is included in the current AR picture according to the brightness change gradient.
If the brightness change gradient is smaller than the gradient threshold, it indicates that the display device 200 is not included in the current AR picture, or the display device 200 does not display an image, or the display brightness does not affect the accuracy of the identification object information, so an AI picture classification algorithm may be performed on the key frame image to obtain the identification object information, so that the identification object information may be displayed in the AR picture.
If the brightness change gradient is greater than or equal to the gradient threshold, it is indicated that the display device 200 is included in the current AR picture, or the display brightness can affect the accuracy of the identification object information. Therefore, an image acquisition request may be sent to the display device through the communicator to acquire a target image in an augmented reality picture, and recognition object information may be obtained through the target image in the manner provided in the above embodiments to improve the accuracy of the recognition object information.
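A sketch of this branch, assuming the "brightness change gradient" is measured as the mean magnitude of the key frame's luminance gradient (both the measure and the threshold value are assumptions):

```python
import numpy as np

GRADIENT_THRESHOLD = 12.0  # assumed tuning value, not from the patent

def brightness_change_gradient(gray: np.ndarray) -> float:
    """Mean magnitude of the luminance gradient of a key frame image."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def handle_key_frame(gray: np.ndarray):
    if brightness_change_gradient(gray) < GRADIENT_THRESHOLD:
        # Below threshold: no screen likely in view, so perform the AI
        # picture classification directly on the key frame image.
        return ("classify_key_frame", gray)
    # At or above threshold: a display screen is likely present, so
    # request the target image from the display device instead.
    return ("request_target_image", None)
```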
Since the augmented reality device 500 can directly obtain the identification object information through the electronic watermark information whenever the target image contains it, the electronic watermark information may further be configured in a specific text format to improve the accuracy of the identification object information. That is, in some embodiments, the electronic watermark information includes a validity period, a display position, and a tag name. The validity period is a time set by the operator according to AR display requirements: the electronic watermark information of the target image can be read within the set time and becomes invalid once that time is exceeded, which guarantees the real-time performance and validity of the electronic watermark information. The display position marks the reference point at which the identification object information is displayed, and may be set as a fixed point in the target image according to the actual application scene. The tag name is the text representing the actual identification object information, and may be a keyword, a descriptive sentence, a paragraph of lines, or the like.
In order to mix the electronic watermark information into the target image while keeping its perceptible interference with the target image file as small as possible, the text contained in the electronic watermark information should be kept short, so the data in the electronic watermark information can be represented in a specific coding scheme. For example, the electronic watermark information may read "J01200604A", where "J0120" encodes a validity period ending January 20, 2023, "06" designates the display region within the field of view, and "04A" is the object name code, namely the object name "oil painting banquet room". A database used for parsing the electronic watermark information can therefore be stored in the AR device, with various comparison tables cached in it; after the electronic watermark information is obtained, the database can be called for matching to obtain the identification object information.
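As an illustration of this coding scheme, a minimal Python sketch of the decoding step might look as follows; the lookup tables are assumed stand-ins for the comparison tables cached in the database, and the field layout is inferred from the example string above:

```python
from datetime import date

# Assumed lookup tables; the real coding scheme is operator-defined.
YEAR_CODES = {"J": 2023}                         # e.g. "J" -> year 2023
TAG_CODES = {"04A": "oil painting banquet room"}

def parse_watermark(code: str) -> dict:
    """Split a code such as "J01200604A" into its three fields."""
    year = YEAR_CODES[code[0]]                   # "J"    -> 2023
    month, day = int(code[1:3]), int(code[3:5])  # "0120" -> January 20
    region = int(code[5:7])                      # "06"   -> display region
    tag = TAG_CODES[code[7:]]                    # "04A"  -> tag name
    return {"valid_until": date(year, month, day),
            "display_region": region,
            "tag_name": tag}

print(parse_watermark("J01200604A"))
# {'valid_until': datetime.date(2023, 1, 20), 'display_region': 6,
#  'tag_name': 'oil painting banquet room'}
```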
When frequency domain features are extracted from the frequency domain graph corresponding to the target image, they can be matched against text content according to an agreed encoding rule to determine the specific content of the electronic watermark information; the correspondence between frequency domain features and text content can therefore be stored in the database. That is, after the frequency domain features are extracted from the frequency domain graph of the target image, a local database can be called and queried for the electronic watermark information matching the frequency domain feature data.
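A minimal sketch of such frequency-domain matching, assuming an FFT-based feature vector and an in-memory dictionary as the local database, might be:

```python
import numpy as np

def frequency_features(gray: np.ndarray) -> np.ndarray:
    """Feature vector from the low-frequency corner of the 2-D FFT."""
    spectrum = np.abs(np.fft.fft2(gray.astype(np.float32)))
    block = spectrum[:8, :8].flatten()[1:]   # low-frequency block, DC dropped
    return block / (np.linalg.norm(block) + 1e-9)

def match_local(features: np.ndarray, local_db: dict, tol: float = 0.05):
    """local_db maps watermark code strings to stored feature vectors."""
    for watermark, stored in local_db.items():
        if np.linalg.norm(features - stored) < tol:
            return watermark
    return None  # fall through to the cloud query described below
```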
Because the storage space of the AR device is limited and maintaining a large-scale database on the device is impractical, the AR device can also be communicatively connected to a cloud server through the communicator. As shown in fig. 16, if no electronic watermark information matching the frequency domain feature data is found in the local database, a query request is generated and sent to the cloud server. Upon receiving the request, the cloud server queries its own database using the frequency domain feature data as an index, and feeds the query result back to the AR device so that the AR device obtains the electronic watermark information.
To improve query efficiency, the data in the cloud server's database can be classified, with the classification standard following the specific application field of the AR device. When the AR device starts any AR application, the classification of that application can be determined, so classification information can be attached to the query request sent to the cloud server, allowing the cloud server to query within that classification. Matching the electronic watermark information through the cloud server in this way makes a large-scale database easy to maintain, reduces the occupation of the AR device's storage space, reduces the amount of data to be matched, and improves the accuracy of the matching result.
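A hedged sketch of this local-then-cloud fallback might look as follows; the endpoint URL and the JSON payload format are purely hypothetical, since the disclosure does not specify a cloud API:

```python
import json
import urllib.request

# Hypothetical endpoint; the patent does not specify a cloud interface.
CLOUD_URL = "https://example.com/watermark/query"

def query_cloud(features, app_category: str):
    """Send the feature vector plus the AR app's category as the index."""
    payload = json.dumps({
        "features": [float(x) for x in features],
        "category": app_category,       # narrows the server-side search
    }).encode("utf-8")
    req = urllib.request.Request(
        CLOUD_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp).get("watermark")  # None if no match found
```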
After the electronic watermark information is obtained by matching, the AR device can read the validity period, display position, tag name and other contents from it, and generate the identification object information accordingly. That is, as shown in fig. 17, in some embodiments, in the step of generating the identification object information based on the electronic watermark information, the AR device first extracts the validity period from the electronic watermark information, acquires the current time, and determines whether the electronic watermark information of the current target image has expired by comparing the two.
If the current time does not exceed the validity period, the current electronic watermark information is still valid, so the display position and tag name can be extracted from it to generate the identification object information. If the current time exceeds the validity period, the electronic watermark information has expired, and an AI picture classification algorithm must be performed on the target image to obtain the identification object information.
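A minimal sketch of this validity check, reusing the decoded fields from the earlier parsing example, might be:

```python
from datetime import date

def resolve_object_info(watermark: dict, classify_target_image):
    """Use the watermark while it is valid, else fall back to classification."""
    if date.today() <= watermark["valid_until"]:
        return {"position": watermark["display_region"],
                "label": watermark["tag_name"]}
    return classify_target_image()      # watermark has expired
```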
When the current time does not exceed the validity period, the AR device further extracts the display position and tag name from the electronic watermark information and controls the display to present the identification object information accordingly. That is, as shown in fig. 18, in some embodiments, the AR device may query the identification object information and the UI template applicable to it according to the tag name, apply the UI template to the identification object information to generate a picture to be displayed, and finally add that picture to the user interface according to the display position.
For example, the operator of the AR application and of the display device 200 may set the validity period of the target image to January 20, 2023 according to operation and maintenance requirements; that is, before January 20, 2023 the identification object information "oil painting banquet room" can be obtained from the electronic watermark information. When the AR device detects the frequency domain graph on January 20, 2021, the current time does not exceed the validity period, so the obtained electronic watermark information is in a usable state. The tag name in the electronic watermark information can then be extracted and matched in the database, yielding the text "oil painting banquet room", and the display position is read at the same time, here the region at the upper right corner of the oil painting. The AR device therefore fuses the tag name into a bubble dialog box according to the UI template and displays it at the upper right corner of the oil painting image in the user interface.
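As an illustration, a small sketch of wrapping the tag name in a UI template and anchoring it at the display position might look as follows; the Overlay structure, the template name, and the pixel coordinates are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class Overlay:
    text: str            # the tag name to show
    x: int               # anchor point in UI coordinates
    y: int
    style: str           # which UI template to apply

def build_overlay(tag_name: str, anchor_xy: tuple,
                  style: str = "speech_bubble") -> Overlay:
    """Wrap the tag name in a templated widget anchored at the given point."""
    x, y = anchor_xy
    return Overlay(text=tag_name, x=x, y=y, style=style)

# e.g. place the label near the upper right corner of the painting
overlay = build_overlay("oil painting banquet room", (1520, 180))
```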
Since the AR device is a head-mounted device whose display content changes in real time with the wearing posture of the user, in some embodiments, to obtain a better indication effect, the AR device may acquire device appearance information of the display device 200 after the identification object information is displayed. Different display devices 200 have different appearances: a television, for example, is a rectangular flat-plate structure whose width is larger than its height, while a signboard is a rectangular flat-plate structure whose width is smaller than its height. After obtaining the identification object information, the AR device can determine the device type of the current display device 200 through the connection state of the communicator and thereby obtain the device appearance information.
After obtaining the device appearance information, the AR device can mark at least one device feature point in the AR picture according to the appearance shape of the device. A device feature point may be any point in the area where the display device 200 is located, determined after edge extraction. To label the location of the display device 200 efficiently, the device feature point may be a specific point of its appearance shape, such as a vertex or the center point of the rectangular structure.
After marking at least one device feature point, the AR device displays the identification object information with the device feature point as a reference; for example, it may extend a guide line from one device feature point to indicate the content displayed in the display device 200. When several device feature points are marked in the AR picture, the display reference point of the identification object information can also be switched as the AR viewing angle changes, so that the identification object information is always displayed at an appropriate position.
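A minimal OpenCV-based sketch of locating such device feature points (here, the four vertices of the display's rectangular outline) might look as follows; the Canny thresholds and the largest-contour heuristic are assumptions:

```python
import cv2
import numpy as np

def display_corners(ar_frame_bgr: np.ndarray):
    """Approximate the display's quadrilateral and return its four vertices."""
    gray = cv2.cvtColor(ar_frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)   # assume screen dominates
    approx = cv2.approxPolyDP(biggest,
                              0.02 * cv2.arcLength(biggest, True), True)
    return approx.reshape(-1, 2) if len(approx) == 4 else None
```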
As shown in fig. 19, in some embodiments, for a target image that does not contain electronic watermark information, the AR device may, after identifying the object information through the AI picture classification algorithm, generate electronic watermark information from the recognition result and feed it back to the display device 200 or to the cloud server. In subsequent AR picture display, or when another AR device displays the same target image, the identification object information can then be obtained directly from the generated electronic watermark information.
To this end, the augmented reality device 500 generates electronic watermark information from the identification object information; as before, it can be converted into a character string according to the specific coding scheme. For example, when the AI picture classification algorithm determines that the identification object information corresponding to the target image is "oil painting banquet room", the character code corresponding to the tag name, "04A", is taken from the comparison table, and the display position characters are generated according to the display position of the identification information in the current AR picture. The validity period is set according to the configured validity time; if that time is one year, the AR device calculates the date one year from the current time as the validity period.
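A sketch of this encoding direction might look as follows, using the inverses of the assumed lookup tables from the earlier decoding example:

```python
from datetime import date, timedelta

# Inverses of the assumed decoding tables used in the parsing example.
YEAR_TO_CODE = {2023: "J"}
TAG_TO_CODE = {"oil painting banquet room": "04A"}

def encode_watermark(tag_name: str, region: int, valid_days: int = 365) -> str:
    """Build a code such as "J01200604A" from the recognition result."""
    expiry = date.today() + timedelta(days=valid_days)
    year_code = YEAR_TO_CODE[expiry.year]   # the table must cover this year
    return (f"{year_code}{expiry.month:02d}{expiry.day:02d}"
            f"{region:02d}{TAG_TO_CODE[tag_name]}")
```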
After the electronic watermark information is generated, the AR device can store it in the locally stored database, and can also send it together with the target image to the cloud server for storage.
The AR device may also add the electronic watermark information to the target image to generate an updated image. For example, after the target image is converted into a frequency domain graph, frequency domain features corresponding to the electronic watermark information can be added to it, converting the target image into an updated image containing the electronic watermark information. The updated image is finally sent to the display device 200 so that the display device 200 presents and/or stores it. When the display device 200 displays the updated image, that image serves as the target image in subsequent detection; since it contains the electronic watermark information, the AR device can obtain the identification object information directly from the watermark, greatly reducing the data processing load of subsequent recognition and improving the operation efficiency of the AR device.
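For illustration, a minimal sketch of additively embedding the code string into mid-frequency FFT coefficients might be as follows; the band offset and embedding strength are assumed values, and the image is assumed wide enough to hold one coefficient per bit:

```python
import numpy as np

def embed_watermark(gray: np.ndarray, code: str,
                    strength: float = 4.0) -> np.ndarray:
    """Additively modulate mid-frequency FFT coefficients with the code bits."""
    bits = np.unpackbits(np.frombuffer(code.encode("ascii"), dtype=np.uint8))
    spectrum = np.fft.fft2(gray.astype(np.float32))
    r, c = 16, 16                          # assumed mid-band offset
    for i, bit in enumerate(bits):         # one coefficient per bit
        delta = strength if bit else -strength
        spectrum[r, c + i] += delta
        spectrum[-r, -(c + i)] += delta    # keep the spectrum Hermitian
    updated = np.real(np.fft.ifft2(spectrum))
    return np.clip(updated, 0, 255).astype(np.uint8)
```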
Based on the augmented reality device 500, some embodiments of the present application further provide an interaction enhancement method comprising the following steps (a minimal sketch of the overall flow is given after the list):
receiving a control instruction which is input by a user and used for displaying an augmented reality picture, wherein the augmented reality picture comprises a real scene picture and a virtual object picture;
responding to the control instruction, and acquiring a target image in the augmented reality picture, wherein the target image is an image displayed on a screen of the display equipment;
if the target image contains electronic watermark information, generating identification object information according to the electronic watermark information;
and controlling the display to display the identification object information.
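A minimal end-to-end sketch tying these four steps together, with all device-specific operations passed in as assumed callbacks, might be:

```python
def interaction_enhancement(ar_frame, get_target_image, detect_watermark,
                            classify, display):
    """Top-level flow of the four method steps listed above (a sketch)."""
    target = get_target_image(ar_frame)   # image shown on the device's screen
    watermark = detect_watermark(target)  # None if no watermark is found
    if watermark is not None:
        info = {"label": watermark["tag_name"],         # from the watermark
                "position": watermark["display_region"]}
    else:
        info = classify(target)           # AI picture classification path
    display(info)                         # render in the user interface
```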
According to the above technical scheme, the interaction enhancement method provided by this embodiment can acquire the target image in the fusion picture, namely the image currently displayed by the display device, after the user inputs the control instruction for displaying the fusion picture. Electronic watermark detection is then performed on the target image, and when the target image contains electronic watermark information, the identification object information is generated directly from it and displayed in the user interface. Because part of the identification object information is acquired through the electronic watermark information, the influence of the display device's image brightness, image quality processing and the like on the recognition process is alleviated, and the recognition accuracy is improved.
The embodiments provided in the present application are only a few examples of the general concept of the present application and do not limit its scope. For a person skilled in the art, any other embodiment extended from the scheme of the present application without inventive effort falls within its scope of protection.

Claims (10)

1. An augmented reality device, comprising:
a display;
the image acquisition device is configured to acquire a real scene picture;
a communicator configured to connect a display device;
a controller configured to:
receiving a control instruction which is input by a user and used for displaying an augmented reality picture;
responding to the control instruction, and acquiring a target image in the augmented reality picture, wherein the target image is an image displayed on a screen of the display equipment;
if the target image contains electronic watermark information, generating identification object information according to the electronic watermark information;
and controlling the display to display the identification object information.
2. The augmented reality device of claim 1, wherein after the step of acquiring the target image in the augmented reality screen, the controller is further configured to:
and if the target image does not contain the electronic watermark information, executing an AI picture classification algorithm on the target image to obtain the identification object information.
3. The augmented reality device of claim 2, wherein after obtaining the identifying object information, the controller is further configured to:
generating electronic watermark information according to the identification object information;
adding the electronic watermark information to the target image to generate an updated image;
and sending the updated image to the display device so as to enable the display device to display and/or store the updated image.
4. The augmented reality device of claim 1, wherein after the step of receiving a control instruction for displaying an augmented reality screen input by a user, the controller is further configured to:
acquiring a key frame image corresponding to the augmented reality picture;
detecting a brightness change gradient of the key frame image;
if the brightness change gradient is larger than or equal to a gradient threshold value, sending an image acquisition request to the display equipment through the communicator to acquire a target image in the augmented reality picture;
and if the brightness change gradient is smaller than a gradient threshold value, executing an AI picture classification algorithm on the key frame image to obtain the identification object information.
5. The augmented reality device of claim 1, wherein the electronic watermark information includes a validity period, a display location, and a tag name; in the step of generating identification object information from the electronic watermark information, the controller is further configured to:
extracting the valid period from the electronic watermark information and acquiring the current time;
comparing the valid period with the current time;
if the current time does not exceed the valid period, extracting the display position and the label name from the electronic watermark information to generate identification object information;
and if the current time exceeds the valid period, executing an AI image classification algorithm on the target image to obtain the identification object information.
6. The augmented reality device of claim 5, wherein in the step of controlling the display to display the identification object information, the controller is further configured to:
inquiring the identification object information and a UI template applicable to the identification object information according to the label name;
applying the UI template to the identification object information to generate a to-be-displayed picture containing the identification object information;
and adding the picture to be displayed to the user interface according to the display position.
7. The augmented reality device of claim 1, wherein in the step of acquiring the target image in the augmented reality screen, the controller is further configured to:
converting the target image into a frequency domain graph;
extracting frequency domain feature data from the frequency domain map;
calling a local database;
and inquiring the electronic watermark information matched with the frequency domain characteristic data in the local database.
8. The augmented reality device of claim 7, wherein the communicator is further configured to establish a communication connection with a cloud server, and in the step of querying the local database for the electronic watermark information matching the frequency domain feature data, the controller is further configured to:
if the electronic watermark information matched with the frequency domain characteristic data is not inquired in the local database, generating an inquiry request;
sending the query request to a cloud server;
and receiving the electronic watermark information fed back by the cloud server according to the query request.
9. The augmented reality device of claim 1, wherein in the step of controlling the display to display the identification object information, the controller is further configured to:
acquiring equipment appearance information of the display equipment;
marking at least one device feature point in the augmented reality picture according to the device appearance information;
and displaying the identification object information by taking the equipment characteristic point as a reference.
10. An interaction enhancement method applied to an augmented reality device, wherein the augmented reality device comprises a display, an image acquisition device, a communicator and a controller, and the augmented reality device is connected with a display device through the communicator; the interaction enhancement method comprises the following steps:
receiving a control instruction which is input by a user and used for displaying an augmented reality picture;
responding to the control instruction, and acquiring a target image in the augmented reality picture, wherein the target image is an image displayed on a screen of the display equipment;
if the target image contains electronic watermark information, generating identification object information according to the electronic watermark information;
and controlling the display to display the identification object information.
CN202110535801.8A 2021-05-17 2021-05-17 Augmented reality equipment and interaction enhancement method Pending CN114363705A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110535801.8A CN114363705A (en) 2021-05-17 2021-05-17 Augmented reality equipment and interaction enhancement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110535801.8A CN114363705A (en) 2021-05-17 2021-05-17 Augmented reality equipment and interaction enhancement method

Publications (1)

Publication Number Publication Date
CN114363705A true CN114363705A (en) 2022-04-15

Family

ID=81095364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110535801.8A Pending CN114363705A (en) 2021-05-17 2021-05-17 Augmented reality equipment and interaction enhancement method

Country Status (1)

Country Link
CN (1) CN114363705A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013138846A1 (en) * 2012-03-22 2013-09-26 Silverbrook Research Pty Ltd Method and system of interacting with content disposed on substrates
WO2013149267A2 (en) * 2012-03-29 2013-10-03 Digimarc Corporation Image-related methods and arrangements
CN104317541A (en) * 2014-09-30 2015-01-28 广州三星通信技术研究有限公司 Method and Equipment for displaying remark information of pictures in terminal
US20170201808A1 (en) * 2016-01-09 2017-07-13 Globalive Xmg Jv Inc. System and method of broadcast ar layer
CN107016550A (en) * 2017-02-21 2017-08-04 阿里巴巴集团控股有限公司 The distribution method and device of virtual objects under augmented reality scene
CN107391060A (en) * 2017-04-21 2017-11-24 阿里巴巴集团控股有限公司 Method for displaying image, device, system and equipment, computer-readable recording medium

Similar Documents

Publication Publication Date Title
US20190333478A1 (en) Adaptive fiducials for image match recognition and tracking
US11706485B2 (en) Display device and content recommendation method
CN106648098B (en) AR projection method and system for user-defined scene
CN113064684B (en) Virtual reality equipment and VR scene screen capturing method
WO2013023705A1 (en) Methods and systems for enabling creation of augmented reality content
CN110809187B (en) Video selection method, video selection device, storage medium and electronic equipment
CN113542624A (en) Method and device for generating commodity object explanation video
CN112732089A (en) Virtual reality equipment and quick interaction method
CN113066189B (en) Augmented reality equipment and virtual and real object shielding display method
CN114302221B (en) Virtual reality equipment and screen-throwing media asset playing method
WO2022193931A1 (en) Virtual reality device and media resource playback method
WO2022151882A1 (en) Virtual reality device
CN115129280A (en) Virtual reality equipment and screen-casting media asset playing method
CN114363705A (en) Augmented reality equipment and interaction enhancement method
CN112905007A (en) Virtual reality equipment and voice-assisted interaction method
WO2022111005A1 (en) Virtual reality (vr) device and vr scenario image recognition method
US20230334791A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth
US20230334790A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
CN112732088B (en) Virtual reality equipment and monocular screen capturing method
US20230334792A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
CN116055708B (en) Perception visual interactive spherical screen three-dimensional imaging method and system
US20230326161A1 (en) Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN116935084A (en) Virtual reality equipment and data verification method
WO2023215637A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
WO2024039885A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination