CN109636922B - Method and device for presenting augmented reality content - Google Patents

Method and device for presenting augmented reality content

Info

Publication number
CN109636922B
CN109636922B (application CN201811542730.9A)
Authority
CN
China
Prior art keywords
augmented reality
reality content
preset
image
preset feature
Prior art date
Legal status
Active
Application number
CN201811542730.9A
Other languages
Chinese (zh)
Other versions
CN109636922A (en)
Inventor
宋志清
孙红亮
徐闻达
Current Assignee
Hiscene Information Technology Co Ltd
Original Assignee
Hiscene Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hiscene Information Technology Co Ltd
Publication of CN109636922A (application)
Application granted
Publication of CN109636922B (grant)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a method and a device for presenting augmented reality content. In the method, a first user device sends a preset feature image and corresponding augmented reality content to a network device; the network device extracts corresponding preset feature information; a second user device sends corresponding target feature information; the network device sends the augmented reality content to the second user device; and the augmented reality content is presented as an overlay. The application can greatly reduce the cost and time required to develop augmented reality content.

Description

Method and device for presenting augmented reality content
The present application claims priority to CN 2018109879652 (a method and apparatus for presenting augmented reality content).
Technical Field
The present application relates to the field of computers, and more particularly to a technique for presenting augmented reality content.
Background
Augmented reality (AR) is a sub-field of natural image recognition technology that emphasizes the virtual-real fusion of natural human-machine visual interaction. It is a new technology that seamlessly integrates real-world information with virtual-world information: real environments and virtual objects are superimposed onto the same picture or space in real time and coexist there, producing a sensory experience that goes beyond reality.
Producing augmented reality content generally requires specialized development techniques such as multimedia, three-dimensional modeling, real-time video display and control, multi-sensor fusion, real-time tracking and registration, and scene fusion. Ordinary enterprises lack such specialized development skills and cannot produce augmented reality content on their own as needed, which reduces their interest in augmented reality technology.
Disclosure of Invention
It is an object of the present application to provide a method for presenting augmented reality content.
According to one aspect of the present application, there is provided a method for presenting augmented reality content at a first user device side, the method comprising the steps of:
sending a preset feature image and augmented reality content corresponding to the preset feature image to a network device, wherein the augmented reality content is to be presented as an overlay by other user devices.
According to another aspect of the present application, there is provided a method for presenting augmented reality content at a network device side, the method comprising the steps of:
receiving a preset feature image sent by a first user device and augmented reality content corresponding to the preset feature image;
performing feature extraction on the preset feature image to obtain corresponding preset feature information; and
matching the preset feature information against target feature information extracted from a target image and, if the matching succeeds, sending the augmented reality content to a second user device.
According to another aspect of the present application, there is provided a method for presenting augmented reality content at a second user device side, the method comprising the steps of:
performing feature extraction based on a target image to obtain corresponding target feature information;
sending an augmented reality content request to a network device, wherein the augmented reality content request comprises the target feature information, and receiving the augmented reality content returned by the network device; and
determining pose information of the second user device based on the target feature information, and presenting the augmented reality content as an overlay based on the pose information.
According to another aspect of the present application, there is provided a method for presenting augmented reality content at a second user device side, the method comprising the steps of:
sending an augmented reality content request to a network device, and receiving the augmented reality content returned by the network device and pose information about the second user device, wherein the augmented reality content request comprises a target image; and
presenting the augmented reality content as an overlay based on the pose information.
According to one aspect of the present application, there is provided a method for presenting augmented reality content at a first user device side, the method comprising the steps of:
sending augmented reality content to a network device, the augmented reality content comprising at least one target video; and
receiving a preset feature image returned by the network device, wherein the preset feature image comprises a feature frame of the target video.
According to another aspect of the present application, there is provided a method for presenting augmented reality content at a network device side, the method comprising the steps of:
receiving augmented reality content sent by a first user device, wherein the augmented reality content comprises at least one target video;
determining a preset feature image corresponding to the augmented reality content based on a feature frame in the target video, wherein the preset feature image comprises the feature frame; and
matching preset feature information of the preset feature image against target feature information extracted from a target image and, if the matching succeeds, sending the augmented reality content to a second user device.
According to one aspect of the present application, there is provided a first user device for presenting augmented reality content, the first user device comprising:
a content sending module for sending a preset feature image and augmented reality content corresponding to the preset feature image to a network device, wherein the augmented reality content is to be presented as an overlay by other user devices.
According to another aspect of the present application, there is provided a network device for presenting augmented reality content, the network device comprising:
a content receiving module for receiving a preset feature image sent by a first user device and augmented reality content corresponding to the preset feature image;
a feature extraction module for performing feature extraction on the preset feature image to obtain corresponding preset feature information; and
a content matching module for matching the preset feature information against target feature information extracted from a target image and, if the matching succeeds, sending the augmented reality content to a second user device.
According to another aspect of the present application, there is provided a second user device for presenting augmented reality content, the second user device comprising:
a target feature extraction module for performing feature extraction based on a target image to obtain corresponding target feature information;
a content acquisition module for sending the target feature information to a network device and receiving the augmented reality content returned by the network device; and
a content presentation module for determining pose information of the second user device based on the target feature information and presenting the augmented reality content as an overlay based on the pose information.
According to another aspect of the present application, there is provided a second user device for presenting augmented reality content, the second user device comprising:
a content acquisition module for sending an augmented reality content request to a network device and receiving the augmented reality content returned by the network device together with pose information about the second user device, wherein the augmented reality content request comprises a target image; and
a content presentation module for presenting the augmented reality content as an overlay based on the pose information.
According to one aspect of the present application, there is provided a first user device for presenting augmented reality content, the first user device comprising:
a content sending module for sending augmented reality content to a network device, the augmented reality content comprising at least one target video; and
a preset image receiving module for receiving a preset feature image returned by the network device, wherein the preset feature image comprises a feature frame of the target video.
According to another aspect of the present application, there is provided a network device for presenting augmented reality content, the network device comprising:
a content receiving module for receiving augmented reality content sent by a first user device, wherein the augmented reality content comprises at least one target video;
a feature extraction module for determining a preset feature image corresponding to the augmented reality content based on a feature frame in the target video and extracting preset feature information based on the preset feature image, wherein the preset feature image comprises the feature frame; and
a content matching module for matching the preset feature information of the preset feature image against target feature information extracted from a target image and, if the matching succeeds, sending the augmented reality content to a second user device.
According to one aspect of the present application, there is provided a method for presenting augmented reality content, the method comprising the steps of:
a first user device sends a preset feature image and augmented reality content corresponding to the preset feature image to a network device;
the network device performs feature extraction on the preset feature image to obtain corresponding preset feature information;
the network device matches the preset feature information against target feature information extracted from a target image and, if the matching succeeds, sends the augmented reality content to a second user device; and
the second user device determines pose information of the second user device based on the target feature information and presents the augmented reality content as an overlay based on the pose information.
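The end-to-end flow above can be illustrated with a minimal in-memory sketch. All names here (`NetworkDevice`, `upload`, `request_content`) and the set-overlap matching rule are illustrative assumptions, not identifiers or algorithms from the application:

```python
# Minimal simulation of the three-party flow: first user device publishes,
# network device stores and matches, second user device requests content.

def extract_features(image):
    """Stand-in for real feature extraction: here, a set of distinctive marks."""
    return frozenset(image)

class NetworkDevice:
    def __init__(self):
        self.store = {}  # preset feature info -> augmented reality content

    def upload(self, preset_image, ar_content):
        # Done once, when the first user device publishes its content.
        self.store[extract_features(preset_image)] = ar_content

    def request_content(self, target_features):
        # Match target feature info against each stored preset feature set.
        for preset_features, ar_content in self.store.items():
            overlap = len(preset_features & target_features) / len(preset_features)
            if overlap >= 0.8:       # assumed matching threshold
                return ar_content    # matching succeeded
        return None                  # no preset feature image matched

# First user device publishes a preset feature image plus AR content.
network = NetworkDevice()
network.upload(preset_image=["p1", "p2", "p3", "p4", "p5"], ar_content="wedding.mp4")

# Second user device captures a target image, extracts features, requests content.
captured = ["p1", "p2", "p3", "p4", "x9"]
content = network.request_content(extract_features(captured))
print(content)  # -> wedding.mp4
```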
According to another aspect of the present application, there is provided a system for presenting augmented reality content, the system comprising any one of the first user devices, any one of the network devices and any one of the second user devices described above.
According to one aspect of the present application, there is provided a method for presenting augmented reality content, the method comprising the steps of:
a first user device sends augmented reality content to a network device, wherein the augmented reality content comprises at least one target video;
the network device determines a preset feature image corresponding to the augmented reality content based on a feature frame in the target video, wherein the preset feature image comprises the feature frame; and
the network device matches preset feature information of the preset feature image against target feature information extracted from a target image and, if the matching succeeds, sends the augmented reality content to a second user device.
According to another aspect of the present application, there is provided a user device for presenting augmented reality content, the user device comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any of the above.
According to another aspect of the present application, there is provided a network device for presenting augmented reality content, the network device comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any of the above.
According to another aspect of the present application, there is provided a computer readable medium comprising instructions which, when executed, cause a system to perform the method of any of the above.
Compared with the prior art, the method for presenting augmented reality content provided herein simplifies the production of augmented reality content: enterprises can produce it quickly and can generate the augmented reality content they need even without professional programming skills or experience. An enterprise only needs to upload the augmented reality content (or upload it to a network device, such as a cloud server, together with a preset feature image used to match the augmented reality content) for other users to view it, greatly reducing the cost and time required to develop augmented reality content.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 is a flow chart of a method for presenting augmented reality content at a network device side in one embodiment of the present application;
FIG. 2 is a flow chart of sub-steps of a content matching step in another embodiment of the present application;
FIG. 3 is a flow chart of a method for presenting augmented reality content at a second user device side in one embodiment of the present application;
FIG. 4 is a flow chart of a method for presenting augmented reality content in one embodiment of the present application;
FIG. 5 is a flow chart of a method for presenting augmented reality content at a first user device side in one embodiment of the present application;
FIG. 6 is a flow chart of a method for presenting augmented reality content at a network device side in one embodiment of the present application;
FIG. 7 is a flow chart of a method for presenting augmented reality content in another embodiment of the present application;
FIG. 8 is a functional block diagram of a network device in one embodiment of the present application;
FIG. 9 is a functional sub-block diagram of a content matching module in accordance with another embodiment of the present application;
FIG. 10 is a functional block diagram of a second user device in one embodiment of the present application;
FIG. 11 is a functional block diagram of a first user device according to one embodiment of the present application;
FIG. 12 is a functional block diagram of a network device in one embodiment of the present application;
FIG. 13 illustrates an exemplary system that may be used to implement various embodiments herein.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
In one typical configuration of the present application, the terminal, the device of the service network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The devices referred to in the present application include, but are not limited to, user devices, network devices, or devices formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-machine interaction with a user (e.g., through a touch pad), such as a smart phone or a tablet computer, and the mobile electronic product may run any operating system, such as an Android or iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, whose hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which a virtual supercomputer is formed by a group of loosely coupled computers. The network includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPNs, wireless ad hoc networks, and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device with the network device, a touch terminal, or the network device with a touch terminal through a network.
Of course, those skilled in the art will appreciate that the above devices are merely examples; other devices, existing now or emerging in the future, that are applicable to the present application are also included within its scope of protection and are incorporated herein by reference.
In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
According to one aspect of the present application, there is provided a method for presenting augmented reality content at a first user device side, the method comprising step S101. In step S101, a first user device sends a preset feature image and augmented reality content corresponding to the preset feature image to a corresponding network device. The preset feature image is to be recognized by other user devices in order to match the augmented reality content, so that the augmented reality content can be presented as an overlay. In some embodiments, the first user device or another device (such as the aforementioned network device) first extracts preset feature information (such as feature points) from the preset feature image; another user device then performs feature extraction on the image it captures and matches the extracted feature information against the aforementioned preset feature information, thereby matching the augmented reality content.
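Since the application does not mandate a particular feature algorithm, the matching step can be illustrated with a simplified sketch using binary descriptors compared by Hamming distance; the descriptor format and both thresholds are assumptions for illustration only:

```python
# Toy feature matching: a preset feature image matches a captured target image
# when enough preset descriptors have a close (low Hamming distance) target
# descriptor. Real systems would use a detector such as ORB plus a ratio test.

def hamming(a, b):
    """Hamming distance between two integer-encoded binary descriptors."""
    return bin(a ^ b).count("1")

def match_features(preset_desc, target_desc, max_dist=2, min_matches=3):
    """Count preset descriptors that have a close target descriptor."""
    matches = 0
    for p in preset_desc:
        if any(hamming(p, t) <= max_dist for t in target_desc):
            matches += 1
    return matches >= min_matches

preset = [0b101010101010, 0b111100001111, 0b000011110000, 0b110011001100]
target = [0b101010101011, 0b111100001111, 0b000011110001]  # slightly perturbed
print(match_features(preset, target))  # -> True
```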
The user devices referred to in this application (including but not limited to the first user device, the second user device, and the other user devices described below) include, but are not limited to, computing devices such as smart phones, personal computers (including desktop and notebook computers), tablet computers, smart glasses, or helmets. In some embodiments, the user device further comprises an imaging device for acquiring image information; the imaging device generally comprises a photosensitive element for converting an optical signal into an electrical signal and, optionally, a light refraction/reflection component (e.g., a lens or lens assembly) for adjusting the propagation path of incident light. To facilitate user operation, in some embodiments the user device further comprises a display device for presenting the augmented reality content to the user and/or for setting it up; in some embodiments the augmented reality content is presented superimposed on the target device by the user device (e.g., transmissive glasses or another user device with a display screen). For example, the user device captures an image of the target device (or a target image) in real time and, while presenting the image, presents the augmented reality content superimposed at a specific location on the target device. In some embodiments the display device is a touch screen, which can be used not only for outputting graphic images but also as an input device of the user device to receive operation instructions (such as instructions for interacting with the augmented reality content).
Of course, it should be understood by those skilled in the art that the input device of the user equipment is not limited to the touch screen, and other existing input technologies can be applied to the present application, and are also included in the protection scope of the present application and incorporated herein by reference. For example, in some embodiments, the input techniques for receiving user operational instructions are implemented based on voice control, gesture control, and/or eye tracking.
In some embodiments, before step S101, the first user device first acquires a preset feature image and its corresponding augmented reality content; in step S101, the first user device sends the preset feature image and the augmented reality content to a network device.
When a first user device sends a preset feature image and corresponding augmented reality content to a network device, in some cases the augmented reality content to be presented as an overlay by other user devices does not match the preset feature image (sometimes also called a recognition image) in size. For example, where the augmented reality content is a video or an image (still or moving), if the augmented reality content is too large relative to the preset feature image, then when it is presented as an overlay by other user devices it occupies an excessive display area and may even exceed the display range of the device; if it is too small, it is difficult to observe when presented as an overlay. Therefore, when the preset feature image and the augmented reality content do not match in size, the two fuse poorly and the overlay presentation effect is not ideal.
In view of the drawbacks described above, in some embodiments, the above method further includes step S102 (not shown). In step S102, the first user device adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image, for example based on the numbers of pixels along the long and short edges of the preset feature image; subsequently, in step S101, the first user device sends the preset feature image and the resized augmented reality content to the network device.
Specifically, in some embodiments, if the preset feature image and the video/picture-form augmented reality content have the same aspect ratio (e.g., based on the numbers of pixels along the long and short edges), the augmented reality content is resized to exactly the size of the preset feature image, so that after other user devices identify the preset feature image the augmented reality content is superimposed over it. In this case, after another user device scans the preset feature image, the user sees only the superimposed video, image, or other augmented reality content through the device, achieving an optimal viewing mode. If the aspect ratios differ, at least one pair of edges of the augmented reality content (the two long edges or the two short edges) is adjusted to equal the corresponding pair of edges of the preset feature image while the aspect ratio is kept unchanged, and the other pair of edges of the video, image, or other augmented reality content is scaled proportionally. In this case, after another user device scans the preset feature image, the user sees the superimposed augmented reality content along with part of the preset feature image. In this way, when the content to be superimposed is much larger or much smaller than the preset feature image, it can be adjusted appropriately, preventing it from extending far beyond the range of the preset feature image or the display area of the device, or from being superimposed on only a small part of the area corresponding to the preset feature image.
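The resizing rule just described can be sketched as follows; the function name and the (width, height) convention are illustrative, not from the application:

```python
# Resize AR content to fit a preset feature image: if the aspect ratios match,
# cover the image exactly; otherwise make one pair of edges equal and scale the
# other pair proportionally, so the content fits inside the preset image.

def fit_ar_content(content_size, preset_size):
    cw, ch = content_size
    pw, ph = preset_size
    if cw * ph == ch * pw:
        # Same aspect ratio: content exactly covers the preset feature image.
        return (pw, ph)
    # Different aspect ratio: keep the content's aspect ratio, equalize one
    # pair of edges, and let part of the preset image remain visible.
    scale = min(pw / cw, ph / ch)
    return (round(cw * scale), round(ch * scale))

print(fit_ar_content((1920, 1080), (960, 540)))  # same ratio -> (960, 540)
print(fit_ar_content((1000, 1000), (800, 400)))  # square video -> (400, 400)
```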
For presenting video-type augmented reality content as an overlay, common practice is to process the preset feature image (or recognition image) and the video content separately and upload each of them on its own, which makes setting up the augmented reality content difficult and inconvenient for the user. In view of this, in some embodiments, the above method further comprises step S103 (not shown). In step S103, the first user device determines a preset feature image based on a feature frame in the target video; subsequently, in step S101, the first user device sends the preset feature image and the augmented reality content corresponding to the preset feature image to the network device, where the augmented reality content includes the target video. In this way, on the one hand, the user can produce and publish augmented reality content more conveniently, without having to separately find and upload a preset feature image, which reduces operation steps and saves time; on the other hand, since the preset feature image is selected from the video content itself, it also fuses well with the video content.
Specifically, in some embodiments, in step S103, the first user device determines a feature frame in the target video based on a feature-frame specification operation by the user, and determines the preset feature image based on that feature frame. For example, a user uploads a video (referred to as a target video) from a first user device to a network device; when another user device scans an image matching the preset feature image, it presents the target video as an overlay for other users to watch. Here, the user selects one frame (referred to as a feature frame) on the time axis of the target video as the preset feature image or as part of it. For example, for a wedding video, the user drags along the time axis and selects the most representative frame of the wedding (e.g., at some key moment) as the feature frame, and the preset feature image is made from that feature frame.
In yet other embodiments, in step S103, the first user device determines a feature frame in the target video and determines the preset feature image based on the feature frame, wherein the feature frame satisfies a preset feature-information condition. For example, the first user device traverses all frames in the target video as candidate frames, or extracts several frames from the target video as candidate frames, and evaluates the candidates: a frame satisfying the preset feature-information condition is taken as the feature frame; for instance, feature extraction is performed on the candidate frames and the frame with the most feature points among them is taken as the feature frame, after which the preset feature image is made from the feature frame.
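The automatic selection rule above (pick the candidate frame with the most feature points, subject to a minimum) can be sketched as follows; `count_feature_points`, the sampling step, and the minimum-points threshold are stand-ins for a real feature detector and real tuning values:

```python
# Select a feature frame from a video: sample candidate frames, count feature
# points in each, and keep the richest one if it satisfies the condition.

def count_feature_points(frame):
    """Stand-in detector: counts "corner" marks in a toy frame."""
    return sum(1 for px in frame if px == "corner")

def select_feature_frame(frames, step=2, min_points=2):
    """Sample every `step`-th frame as a candidate; return the best one."""
    candidates = frames[::step]
    best = max(candidates, key=count_feature_points)
    return best if count_feature_points(best) >= min_points else None

video = [
    ["flat", "flat", "flat"],        # frame 0: featureless
    ["corner", "flat", "flat"],      # frame 1 (skipped by sampling)
    ["corner", "corner", "corner"],  # frame 2: most feature points
    ["corner", "corner", "flat"],    # frame 3 (skipped by sampling)
]
print(select_feature_frame(video))  # -> frame 2
```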
Of course, those skilled in the art will appreciate that the above manners of determining the preset feature image based on the target video are merely examples and not limitations; any other applicable manner of determining the preset feature image based on the target video, whether existing now or developed in the future, also falls within the scope of the present application and is incorporated herein by reference. For example, in step S103, the first user equipment first traverses all frames in the target video, selects several frames satisfying the above (or other) preset feature-information conditions as candidate frames, and presents them to the user; the user then selects one of the candidate frames as the feature frame, and the preset feature image is made from the frame the user selected.
In some embodiments, the above method further comprises step S104 (not shown). In step S104, the first user device sends presentation permission information about the augmented reality content to the network device, where the presentation permission information is used to determine the presentation rights of other user devices with respect to the augmented reality content. The presentation permission information controls which other user devices may view the augmented reality content, for example by specifying that one or more other users are able to view it, or that one or more users are not. Specific implementations include, but are not limited to, setting a blacklist or whitelist based on other users' device numbers, IP or MAC addresses, user account IDs, etc.; setting an access password for the augmented reality content; or setting conditions that the user must (or must not) satisfy (e.g., determining whether the relevant user belongs to a certain group according to the device number, IP or MAC address, or user account ID, or determining the user's geographic location according to the IP address, GPS sensing information, etc. of the user device requesting access to the augmented reality content).
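A minimal sketch of how the presentation permission information might be evaluated on the network device; the dictionary fields (`whitelist`, `blacklist`, `password`) are illustrative names chosen for the sketch, not terms used by the present application.

```python
def is_presentation_allowed(permission, account_id, password=None):
    # Evaluate presentation permission info for a requesting user.
    # 'permission' holds optional whitelist/blacklist/password entries.
    if "whitelist" in permission and account_id not in permission["whitelist"]:
        return False                      # not among the permitted users
    if account_id in permission.get("blacklist", ()):
        return False                      # explicitly forbidden user
    if "password" in permission and password != permission["password"]:
        return False                      # wrong or missing access password
    return True
```

The same structure extends naturally to the group-membership and geographic-location conditions mentioned above: each condition is just one more early-return check.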
According to another aspect of the present application, a method for presenting augmented reality content at a network device side is provided. Referring to fig. 1, the method includes step S201, step S202, and step S203.
In step S201, a network device receives a preset feature image sent by a first user device and augmented reality content corresponding to the preset feature image. The preset feature image is intended to be recognized by other user equipment so as to match the augmented reality content, whereupon the augmented reality content is presented as an overlay.
In step S202, the network device performs feature extraction on the preset feature image to obtain corresponding preset feature information (e.g., feature points). In some embodiments, the other user equipment performs feature extraction on the captured image, and matches the extracted feature information with the preset feature information, so as to match the augmented reality content.
In step S203, the network device matches the preset feature information based on the target feature information extracted from the target image, and if the matching is successful, sends the augmented reality content to a second user device (for example, a user device for capturing the target image). In some embodiments, the target image is captured by other user devices, and the target feature information is extracted from the target image by other user devices capturing the target image, or by the network device.
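The matching in step S203 can be illustrated with a ratio-test descriptor matcher. The Euclidean metric, the 0.75 ratio, and the match-count threshold below are assumptions made for the sketch rather than requirements of the present application, which does not mandate a specific matching algorithm.

```python
import numpy as np

def match_descriptors(target, preset, ratio=0.75):
    # Lowe-style ratio test: keep a target descriptor's nearest preset
    # descriptor only if it is clearly closer than the second nearest.
    matches = []
    for i, d in enumerate(target):
        dists = np.linalg.norm(preset - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

def is_match(target, preset, min_matches=10):
    # Deem the target image matched to the preset feature image when
    # enough correspondences survive the ratio test.
    return len(match_descriptors(target, preset)) >= min_matches
```

Here `target` and `preset` are arrays whose rows are feature descriptors (the "target feature information" and "preset feature information" above); on success the network device would send the associated augmented reality content to the second user device.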
Similar to the above, when the first user device sends the preset feature image and the corresponding augmented reality content to the network device, in some cases the augmented reality content for the overlaid presentation by the other user devices does not match the preset feature image (sometimes also referred to as an identification map) in size. Thus, in some embodiments, the network device adjusts the size of the augmented reality content based on the size of the preset feature image; the manner in which the network device adjusts the augmented reality content is the same or substantially the same as the manner in which the first user device adjusts the augmented reality content described above, which is not described in detail herein, and is incorporated herein by reference.
In some embodiments, referring to fig. 2, step S203 includes sub-step S203a and sub-step S203b. In sub-step S203a, the network device receives an augmented reality content request sent by a second user device, wherein the augmented reality content request comprises target feature information, which is extracted from a target image. For example, the second user device captures a target image, and performs feature extraction on the target image to obtain corresponding target feature information (e.g., feature points); and then, the second user equipment requests corresponding augmented reality content from the network equipment, wherein the content request sent by the second user equipment to the network equipment contains the target feature information, and the target feature information is used for matching with preset feature information so as to further determine the corresponding augmented reality content.
In sub-step S203b, the network device matches the preset feature information based on the target feature information, and if the matching is successful, sends the augmented reality content to the second user device.
In some embodiments, the above method further comprises step S205 (not shown). In step S205, the network device receives presentation permission information about the augmented reality content transmitted by the first user device. Accordingly, in sub-step S203a, the network device receives an augmented reality content request sent by the second user device, wherein the augmented reality content request comprises target feature information and license verification information (e.g., a device number, an IP or MAC address, a user account ID of the second user device, or an access password to be verified), the target feature information being extracted from the target image; in sub-step S203b, the network device matches the preset feature information based on the target feature information when the license verification information matches the presentation permission information, and if the matching is successful, sends the augmented reality content to the second user device. The presentation permission information controls which other user devices may view the augmented reality content, for example by specifying that one or more other users are able to view it, or that one or more users are not. Specific implementations include, but are not limited to, setting a blacklist or whitelist based on other users' device numbers, IP or MAC addresses, user account IDs, etc., or setting conditions that the user must (or must not) satisfy (e.g., determining whether the relevant user belongs to a certain group according to the device number, IP or MAC address, or user account ID, or determining the user's geographic location according to the IP address, GPS sensing information, etc. of the user device requesting access to the augmented reality content).
In some embodiments, the above method further comprises step S206 (not shown). In step S206, the network device receives presentation permission information about the augmented reality content sent by the first user device, where the presentation permission information includes a remaining number of presentations. For example, where the augmented reality content should not be presented an unlimited number of times or on an unlimited number of other user devices, the user who makes the augmented reality content uses the remaining number of presentations to cap how often it may still be played: e.g., a total number of plays (each access to the augmented reality content reduces the remaining number of presentations by 1), or a number of user devices allowed to access it (each new user device accessing the content reduces the remaining number of presentations by 1). Accordingly, in sub-step S203b, the network device matches the preset feature information based on the target feature information only when the remaining number of presentations is greater than zero; if the matching is successful, it sends the augmented reality content to the second user device and updates the remaining number of presentations.
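The check-and-decrement of the remaining number of presentations in sub-step S203b might look like the following sketch; the record layout is illustrative, and a production implementation would perform the update atomically (e.g., inside a database transaction) so that concurrent requests cannot overshoot the cap.

```python
def try_consume_presentation(record):
    # Gate on the remaining number of presentations before matching;
    # decrement only when the content will actually be delivered.
    if record.get("remaining", 0) <= 0:
        return False        # matching is not even attempted
    record["remaining"] -= 1
    return True
```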
According to another aspect of the present application, there is also provided a method for presenting augmented reality content at a second user device side. Referring to fig. 3, the method includes step S301, step S302, and step S303.
In step S301, the second user device performs feature extraction based on the target image to obtain corresponding target feature information (e.g., feature points). In step S302, the second user device sends the target feature information to the network device and receives the augmented reality content returned by the network device. In step S303, the second user device determines its pose information based on the target feature information, and presents the augmented reality content as an overlay based on that pose information. Here, the pose information includes the user equipment's current spatial position information and attitude information with respect to the target device. For example, the network device returns preset feature information corresponding to the augmented reality content to the second user device; the second user device determines its pose in space based on the preset feature information and the target feature information, and presents the augmented reality content as an overlay at the corresponding position on the display device, based on the position information of the augmented reality content in space (and sometimes also its attitude information in space).
For another example, the second user equipment transmits the target feature information to the network equipment; after receiving the target feature information, the network equipment determines pose information of the second user equipment in space based on the target feature information and corresponding preset feature information so as to enable the second user equipment to display corresponding augmented reality content in a superposition mode based on the pose information; in other words, the second user equipment acquires pose information returned by the network equipment based on the target feature information, and superimposes and presents the augmented reality content based on the pose information.
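One concrete way to determine pose from the target and preset feature information is to estimate the planar homography between the preset feature image and the captured target image from matched point pairs, then decompose it with the camera intrinsics. The Direct Linear Transform sketch below covers only the homography-estimation step and is an assumption for illustration, since the present application does not mandate a specific pose algorithm.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    # Direct Linear Transform: stack two equations per correspondence
    # (at least 4 non-degenerate pairs) and take the null-space vector
    # of the resulting system via SVD.
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]   # fix the scale so H[2, 2] == 1
```

Given matched feature points between the preset feature image (`src_pts`) and the target image (`dst_pts`), the recovered homography determines where, and with what perspective distortion, the overlay should be drawn; a full pose (rotation and translation) follows by decomposing it with the camera intrinsic matrix.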
In some embodiments, step S302 includes sub-step S302a (not shown) and sub-step S302b (not shown). In sub-step S302a, the second user device sends an augmented reality content request to the network device, wherein the augmented reality content request comprises the target feature information; in sub-step S302b, the second user device receives the augmented reality content returned by the network device. For example, the second user device captures a target image, and performs feature extraction on the target image to obtain corresponding target feature information (e.g., feature points); and then, the second user equipment requests corresponding augmented reality content from the network equipment, wherein the content request sent by the second user equipment to the network equipment contains the target feature information, and the target feature information is used for matching with preset feature information so as to further determine the corresponding augmented reality content.
In some embodiments, the extracting of the target feature information from the target image is performed by the network device, and the target image is included in an augmented reality content request sent by the second user device to the network device. Accordingly, in some embodiments, the above method does not include step S301. Wherein in step S302, a second user device sends an augmented reality content request to a network device, and receives augmented reality content returned by the network device and pose information about the second user device, wherein the augmented reality content request includes a target image; in step S303, the second user device superimposes and presents the augmented reality content based on the pose information.
In some embodiments, the augmented reality content request further includes license verification information (e.g., a device number, IP address or MAC address, user account ID, access password to be verified, etc. of the second user device) for determining whether the second user device is authorized to access the augmented reality content.
For ease of understanding, fig. 4 illustrates a method in which various devices cooperate to present augmented reality content in some embodiments. The method comprises the following steps:
the first user equipment sends a preset feature image and augmented reality content corresponding to the preset feature image to the network equipment;
the network equipment performs feature extraction on the preset feature image to obtain corresponding preset feature information;
the second user equipment performs feature extraction based on the target image to obtain corresponding target feature information, and sends the target feature information to the network equipment;
the network equipment matches the preset feature information based on the target feature information, and if the matching is successful, sends the augmented reality content to the second user equipment; and
the second user equipment determines pose information of the second user equipment based on the target feature information, and presents the augmented reality content as an overlay based on the pose information.
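The cooperation above can be simulated end to end with a toy model in which "feature information" is simply a set of tokens; `extract_features`, `features_match`, and the 80% overlap threshold are stand-ins for real descriptor extraction and matching, used here only to make the upload/match/deliver flow concrete.

```python
def extract_features(image_tokens):
    # Stand-in extractor: treat an "image" as a bag of tokens and its
    # feature information as the set of those tokens.
    return frozenset(image_tokens)

def features_match(preset_info, target_info, threshold=0.8):
    # Deem matched when enough of the preset features reappear.
    return len(preset_info & target_info) >= threshold * len(preset_info)

class NetworkDevice:
    # Minimal server-side store: upload() covers the first two steps of
    # the flow above, request() covers match-and-deliver.
    def __init__(self):
        self.entries = []

    def upload(self, preset_image, content):     # from the first user device
        self.entries.append((extract_features(preset_image), content))

    def request(self, target_info):              # from the second user device
        for preset_info, content in self.entries:
            if features_match(preset_info, target_info):
                return content
        return None
```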
As described above, for video-type augmented reality content presented as an overlay, common practice is to process the preset feature image (or recognition map) and the video content separately and upload each of them individually, which makes setting up the augmented reality content difficult and inconvenient for the user. In view of this, according to one aspect of the present application, a method for presenting augmented reality content at a first user device side is provided. The method comprises step S401. In step S401, a first user device sends augmented reality content to a network device, the augmented reality content comprising at least one target video. The network device obtains a preset feature image corresponding to the target video from one frame (called a feature frame) of the target video sent by the first user device, and the first user device need not perform further processing (such as designating the preset feature image). In this way, on the one hand, the user can make and release the augmented reality content more conveniently: no additional preset feature image needs to be found and uploaded, which reduces the operation steps and saves time; on the other hand, since the preset feature image is selected from the video content itself, the two fuse well when presented together.
In some embodiments, referring to fig. 5, the method further comprises step S402. In step S402, the first user device receives a preset feature image returned by the network device, where the preset feature image includes a feature frame in the target video, and the preset feature image is used for the user to provide to other users (for example, through blogs, social software, emails, etc.), so that the other users access the augmented reality content.
Accordingly, in accordance with another aspect of the present application, a method for presenting augmented reality content at a network device side is provided. Referring to fig. 6, the method includes step S501, step S502, and step S503. In step S501, the network device receives augmented reality content transmitted by the first user device, the augmented reality content including at least one target video. In step S502, the network device determines a preset feature image corresponding to the augmented reality content based on a feature frame in the target video, where the preset feature image includes the feature frame. In some embodiments, the network device determines a feature frame in the target video that satisfies a preset feature-information condition, and determines the preset feature image based on that feature frame. For example, the network device traverses all frames in the target video as candidate frames, or extracts several frames from the target video as candidate frames, and evaluates each candidate against the preset feature-information condition; for instance, feature extraction is performed on the candidate frames and the frame with the most feature points is taken as the feature frame, after which the preset feature image is made from that frame. In step S503, the network device matches preset feature information of the preset feature image based on target feature information extracted from a target image, and if the matching is successful, sends the augmented reality content to the second user device. In some embodiments, the target feature information (e.g., feature points) is obtained by the second user device performing feature extraction based on the target image.
For ease of understanding, fig. 7 illustrates a method in which various devices cooperate to present augmented reality content in some embodiments. The method comprises the following steps:
the first user equipment sends augmented reality content to the network equipment, wherein the augmented reality content comprises at least one target video;
the network equipment determines a preset feature image corresponding to the augmented reality content based on a feature frame in the target video, wherein the preset feature image comprises the feature frame; and
the network equipment matches preset feature information of the preset feature image based on target feature information extracted from the target image, and if the matching is successful, sends the augmented reality content to the second user equipment.
According to one aspect of the present application, there is provided a first user device for presenting augmented reality content, the device comprising a content sending module 101. The content sending module 101 sends a preset feature image and augmented reality content corresponding to the preset feature image to the corresponding network device. The preset feature image is intended to be recognized by other user equipment so as to match the augmented reality content, whereupon the augmented reality content is presented as an overlay. In some embodiments, the first user device or another device (such as the aforementioned network device) first extracts preset feature information (such as feature points) from the preset feature image; other user equipment then performs feature extraction on a captured image and matches the extracted feature information against the preset feature information, thereby matching the augmented reality content.
In some embodiments, before the foregoing steps, the first user device first obtains a preset feature image and corresponding augmented reality content thereof; the content sending module 101 sends the preset feature image and the augmented reality content to a network device.
When a first user device sends a preset feature image and corresponding augmented reality content to a network device, in some cases the augmented reality content to be presented as an overlay by other user devices does not match the preset feature image (sometimes also referred to as an identification map) in size. For example, where the augmented reality content is a video or an image (still or moving), if the augmented reality content is too large relative to the preset feature image, it occupies an excessive display area when presented as an overlay by other user devices, and may even exceed the device's display range; if it is too small, it is difficult to observe when presented as an overlay. Thus, when the preset feature image and the augmented reality content do not match in size, the two fuse poorly and the overlay presentation effect is not ideal.
In view of the drawbacks described above, in some embodiments, the apparatus further includes a resizing module 102 (not shown). The size adjustment module 102 adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image, for example, adjusts the size of the augmented reality content based on the number of long and wide pixels of the preset feature image; the content transmitting module 101 then transmits the preset feature image and the resized augmented reality content to the network device.
Specifically, in some embodiments, if the preset feature image and the augmented reality content in video/picture form have the same aspect ratio (e.g., based on the numbers of pixels along the long and short sides), the size of the augmented reality content is adjusted to equal the size of the preset feature image, so that after other user equipment identifies the preset feature image, the augmented reality content covers it exactly. In that case, once other user devices scan the preset feature image, the user sees only the superimposed augmented reality content such as the video or image, which is the optimal viewing mode. If the aspect ratios differ, at least one pair of edges (the 2 long edges or the 2 short edges) of the augmented reality content is adjusted to equal the corresponding pair of edges of the preset feature image, the aspect ratio is kept unchanged, and the other pair of edges of the video, image, or other augmented reality content is scaled by the same factor. In that case, after other user equipment scans the preset feature image, the user sees the superimposed augmented reality content while part of the preset feature image remains visible. In this way, when the content to be superimposed is much larger or much smaller than the preset feature image, it is adjusted appropriately, so that it neither extends far beyond the preset feature image or the device's display area nor is confined to a small region of the preset feature image.
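The size adjustment described above can be sketched as follows, assuming a fit-inside policy when the aspect ratios differ (scale uniformly so the content stays within the preset feature image, leaving part of it visible); the function and parameter names are illustrative.

```python
def fit_overlay_size(content_w, content_h, image_w, image_h):
    # Same aspect ratio: cover the preset feature image exactly.
    if content_w * image_h == content_h * image_w:
        return image_w, image_h
    # Different ratios: equalize one pair of edges with the image and
    # scale the other pair by the same factor, preserving aspect ratio.
    scale = min(image_w / content_w, image_h / content_h)
    return round(content_w * scale), round(content_h * scale)
```

The cross-multiplication in the first branch compares aspect ratios without floating-point division, so, e.g., a 1920×1080 video exactly covers a 640×360 recognition image.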
For video-type augmented reality content presented as an overlay, common practice is to process the preset feature image (or recognition graph) and the video content separately and upload each of them individually, which makes setting up the augmented reality content difficult and inconvenient for the user. In view of this, in some embodiments, the apparatus described above further comprises an image determination module 103 (not shown). The image determination module 103 determines a preset feature image based on a feature frame in the target video; the content sending module 101 then sends the preset feature image and the augmented reality content corresponding to the preset feature image to a network device, wherein the augmented reality content comprises the target video. In this way, on the one hand, the user can make and release the augmented reality content more conveniently: no additional preset feature image needs to be found and uploaded, which reduces the operation steps and saves time; on the other hand, since the preset feature image is selected from the video content itself, the two fuse well when presented together.
Specifically, in some embodiments, the image determination module 103 determines a feature frame in the target video based on a feature-frame specification operation by the user, and determines the preset feature image based on that feature frame. For example, a user uploads a video (referred to as the target video) from a first user device to a network device; other user equipment then presents the target video as an overlay upon scanning an image that matches the preset feature image, so that other users can watch it. Here, the user selects one frame (referred to as the feature frame) on the time axis of the target video to serve as the preset feature image or as part of it. For a wedding video, for instance, the user drags along the time axis, selects the most representative frame (e.g., at some key moment of the wedding) as the feature frame, and the preset feature image is made from that frame.
In yet other embodiments, the image determination module 103 determines a feature frame in the target video and determines the preset feature image based on the feature frame, wherein the feature frame satisfies a preset feature-information condition. For example, the first user equipment traverses all frames in the target video as candidate frames, or extracts several frames from the target video as candidate frames, and evaluates each candidate against the preset feature-information condition; for instance, feature extraction is performed on the candidate frames and the frame with the most feature points is taken as the feature frame, after which the preset feature image is made from that frame.
Of course, those skilled in the art will appreciate that the above manners of determining the preset feature image based on the target video are merely examples and not limitations; any other applicable manner of determining the preset feature image based on the target video, whether existing now or developed in the future, also falls within the scope of the present application and is incorporated herein by reference. For example, the image determination module 103 traverses all frames in the target video, selects several frames satisfying the above (or other) preset feature-information conditions as candidate frames, and presents them to the user; the user then selects one of the candidate frames as the feature frame, and the preset feature image is made from the frame the user selected.
In some embodiments, the apparatus further comprises a license sending module 104 (not shown). The license sending module 104 sends presentation permission information about the augmented reality content to the network device, the presentation permission information being used to determine the presentation rights of other user devices with respect to the augmented reality content. The presentation permission information controls which other user devices may view the augmented reality content, for example by specifying that one or more other users are able to view it, or that one or more users are not. Specific implementations include, but are not limited to, setting a blacklist or whitelist based on other users' device numbers, IP or MAC addresses, user account IDs, etc.; setting an access password for the augmented reality content; or setting conditions that the user must (or must not) satisfy (e.g., determining whether the relevant user belongs to a certain group according to the device number, IP or MAC address, or user account ID, or determining the user's geographic location according to the IP address, GPS sensing information, etc. of the user device requesting access to the augmented reality content).
According to another aspect of the present application, a network device for presenting augmented reality content is provided. Referring to fig. 8, the apparatus includes a content receiving module 201, a feature extracting module 202, and a content matching module 203.
The content receiving module 201 receives a preset feature image sent by a first user device and augmented reality content corresponding to the preset feature image. The preset feature image is intended to be recognized by other user equipment so as to match the augmented reality content, whereupon the augmented reality content is presented as an overlay.
The feature extraction module 202 performs feature extraction on the preset feature image to obtain corresponding preset feature information (e.g., feature points). In some embodiments, the other user equipment performs feature extraction on the captured image, and matches the extracted feature information with the preset feature information, so as to match the augmented reality content.
The content matching module 203 matches the preset feature information based on the target feature information extracted from the target image, and if the matching is successful, sends the augmented reality content to a second user device (for example, a user device for capturing the target image). In some embodiments, the target image is captured by other user devices, and the target feature information is extracted from the target image by other user devices capturing the target image, or by the network device.
Similar to the above, when the first user device sends the preset feature image and the corresponding augmented reality content to the network device, in some cases the augmented reality content for the overlaid presentation by the other user devices does not match the preset feature image (sometimes also referred to as an identification map) in size. Thus, in some embodiments, the network device adjusts the size of the augmented reality content based on the size of the preset feature image; the manner in which the network device adjusts the augmented reality content is the same or substantially the same as the manner in which the first user device adjusts the augmented reality content described above, which is not described in detail herein, and is incorporated herein by reference.
In some embodiments, referring to fig. 9, the content matching module 203 includes a request receiving unit 203a and a content matching unit 203b. The request receiving unit 203a receives an augmented reality content request sent by the second user device, wherein the augmented reality content request includes target feature information, and the target feature information is extracted from the target image. For example, the second user device captures a target image, and performs feature extraction on the target image to obtain corresponding target feature information (e.g., feature points); and then, the second user equipment requests corresponding augmented reality content from the network equipment, wherein the content request sent by the second user equipment to the network equipment contains the target feature information, and the target feature information is used for matching with preset feature information so as to further determine the corresponding augmented reality content.
The content matching unit 203b matches the preset feature information based on the target feature information, and if the matching is successful, sends the augmented reality content to the second user equipment.
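The matching between target feature information and stored preset feature information can be illustrated with a toy descriptor-matching routine. This is a minimal sketch under assumed conventions (integer binary descriptors compared by Hamming distance; names such as `find_matching_content` are invented for illustration); real systems typically use detectors and descriptors such as ORB with ratio tests and geometric verification, none of which this application mandates.

```python
def hamming(a, b):
    """Bit-level Hamming distance between two integer descriptors."""
    return bin(a ^ b).count("1")

def match_score(target_desc, preset_desc, max_dist=10):
    """Count target descriptors whose nearest preset descriptor is close enough."""
    score = 0
    for t in target_desc:
        best = min(hamming(t, p) for p in preset_desc)
        if best <= max_dist:
            score += 1
    return score

def find_matching_content(target_desc, library, min_score=3):
    """library: {content_id: preset_descriptor_list}. Returns the content id
    with the highest match score, or None if nothing matches well enough."""
    best_id, best_score = None, 0
    for content_id, preset_desc in library.items():
        s = match_score(target_desc, preset_desc)
        if s > best_score:
            best_id, best_score = content_id, s
    return best_id if best_score >= min_score else None
```

On a successful match the network device would then return the augmented reality content associated with the matched identifier to the second user device.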
In some embodiments, the apparatus further comprises a license receiving module 205 (not shown). The license receiving module 205 receives presentation license information about the augmented reality content sent by the first user device. Accordingly, the request receiving unit 203a receives an augmented reality content request sent by the second user device, where the augmented reality content request includes target feature information and license verification information (for example, a device number, an IP address or a MAC address, a user account ID of the second user device, or an access password to be verified), the target feature information being extracted from the target image; when the license verification information matches the presentation license information, the content matching unit 203b matches the preset feature information based on the target feature information, and if the matching is successful, sends the augmented reality content to the second user device. The presentation license information is used to control the rights of other user devices to view the augmented reality content, for example, to specify that one or more other users can view it, or that one or more users cannot. Specific implementations include, but are not limited to, setting a blacklist or whitelist based on the device number, IP address or MAC address, user account ID, etc. of other users, or setting conditions that users must (or must not) satisfy (for example, determining whether the relevant user belongs to a certain group based on the device number, IP address or MAC address, or user account ID, or determining the user's geographic location based on the IP address, GPS sensing information, etc. of the user device requesting access to the augmented reality content).
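A minimal sketch of one possible license check follows, assuming the presentation license information is materialized as optional blacklist, whitelist, and password fields; the field names and dictionary layout here are illustrative assumptions, not structures defined by this application.

```python
def is_permitted(verification, permission):
    """Decide whether a requesting device may view the content.

    verification: license verification info from the request, e.g.
                  {"user_id": ..., "password": ...}.
    permission:   presentation license info set by the content creator,
                  any subset of "blacklist", "whitelist" (sets of user ids)
                  and "password" (required access password).
    """
    user = verification.get("user_id")
    # Blacklisted users are always refused.
    if user in permission.get("blacklist", set()):
        return False
    # If a whitelist exists, only listed users are allowed.
    whitelist = permission.get("whitelist")
    if whitelist is not None and user not in whitelist:
        return False
    # If an access password is set, it must match.
    required = permission.get("password")
    if required is not None and verification.get("password") != required:
        return False
    return True
```

Group membership or geographic-location conditions, as mentioned above, would slot in as further checks of the same shape.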
In some embodiments, the apparatus further includes a license receiving module 206 (not shown). The license receiving module 206 receives the presentation license information about the augmented reality content sent by the first user device, where the presentation license information includes a remaining number of presentations. For example, when the augmented reality content should not be presented an unlimited number of times, or on an unlimited number of other user devices, the user who creates the augmented reality content uses the remaining number of presentations to set how many more times the content may be played: for instance, a total number of plays (each time the augmented reality content is accessed, the remaining number of presentations is reduced by 1), or a number of user devices allowed to access the augmented reality content (each time a new user device accesses the content, the remaining number of presentations is reduced by 1). Accordingly, when the remaining number of presentations is greater than zero, the content matching unit 203b matches the preset feature information based on the target feature information; if the matching is successful, it sends the augmented reality content to the second user device and updates the remaining number of presentations.
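The per-access counting variant of this quota logic can be sketched as follows; the class and method names are illustrative assumptions.

```python
class PresentationQuota:
    """Tracks the remaining number of presentations for one piece of
    augmented reality content (per-access counting variant)."""

    def __init__(self, total):
        self.remaining = total

    def try_consume(self):
        """If a presentation is still allowed, decrement the remaining
        count and return True; otherwise return False."""
        if self.remaining > 0:
            self.remaining -= 1
            return True
        return False
```

In the per-device variant described above, the decrement would instead happen only when a previously unseen device identifier requests the content.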
According to another aspect of the present application, there is also provided a second user device for presenting augmented reality content. Referring to fig. 10, the apparatus includes a target feature extraction module 301, a content acquisition module 302, and a content presentation module 303.
Wherein, the target feature extraction module 301 performs feature extraction based on the target image to obtain corresponding target feature information (e.g., feature points). The content acquisition module 302 sends the target feature information to the network device, and receives the augmented reality content returned by the network device. The content presentation module 303 determines pose information of the second user device based on the target feature information, and superimposes and presents the augmented reality content based on the pose information. Here, the pose information includes the current spatial position information and attitude information of the user device relative to the target object. For example, the network device returns preset feature information corresponding to the augmented reality content to the second user device; the second user device determines its pose information in space based on the preset feature information and the target feature information, and superimposes and presents the augmented reality content at a corresponding position on the display device based on the position information of the augmented reality content in space (sometimes also based on the attitude information of the augmented reality content in space).
For another example, the second user equipment transmits the target feature information to the network equipment; after receiving the target feature information, the network equipment determines pose information of the second user equipment in space based on the target feature information and corresponding preset feature information so as to enable the second user equipment to display corresponding augmented reality content in a superposition mode based on the pose information; in other words, the second user equipment acquires pose information returned by the network equipment based on the target feature information, and superimposes and presents the augmented reality content based on the pose information.
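Full pose estimation from feature correspondences (e.g., PnP or homography decomposition) is beyond a short sketch, but the idea of recovering where to overlay content from matched points can be illustrated with a simplified 2D scale-plus-translation model. This toy model, and every name in it, is an illustrative assumption rather than the pose computation actually specified by this application.

```python
def estimate_similarity(preset_pts, target_pts):
    """Estimate a 2D scale-and-translation transform mapping preset feature
    points to their matched points in the target image. Both inputs are
    lists of (x, y) pairs in matching order."""
    n = len(preset_pts)
    # Centroids of both point sets.
    pcx = sum(x for x, _ in preset_pts) / n
    pcy = sum(y for _, y in preset_pts) / n
    tcx = sum(x for x, _ in target_pts) / n
    tcy = sum(y for _, y in target_pts) / n
    # Scale: ratio of total spread about the centroids.
    pd = sum(abs(x - pcx) + abs(y - pcy) for x, y in preset_pts)
    td = sum(abs(x - tcx) + abs(y - tcy) for x, y in target_pts)
    scale = td / pd
    # Translation maps the scaled preset centroid onto the target centroid.
    tx = tcx - scale * pcx
    ty = tcy - scale * pcy
    return scale, tx, ty

def project(point, pose):
    """Map a point from preset-image coordinates into target-image
    coordinates, i.e. where overlaid content should be drawn."""
    scale, tx, ty = pose
    x, y = point
    return (scale * x + tx, scale * y + ty)
```

A real 6-DoF pose would additionally recover rotation and depth, but the division of labor is the same: whichever side (second user device or network device) holds both feature sets can compute the transform, and the presentation step only needs its result.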
In some embodiments, the content acquisition module 302 includes a content request unit 302a (not shown) and a content receiving unit 302b (not shown). The content request unit 302a sends an augmented reality content request to a network device, wherein the augmented reality content request comprises the target feature information; the content receiving unit 302b receives the augmented reality content returned by the network device. For example, the second user device captures a target image, and performs feature extraction on the target image to obtain corresponding target feature information (e.g., feature points); and then, the second user equipment requests corresponding augmented reality content from the network equipment, wherein the content request sent by the second user equipment to the network equipment contains the target feature information, and the target feature information is used for matching with preset feature information so as to further determine the corresponding augmented reality content.
In some embodiments, the extracting of the target feature information from the target image is performed by the network device, and the target image is included in an augmented reality content request sent by the second user device to the network device. Accordingly, in some embodiments, the apparatus described above does not include the target feature extraction module 301. The content acquisition module 302 sends an augmented reality content request to a network device, and receives the augmented reality content returned by the network device and pose information about the second user device, wherein the augmented reality content request comprises a target image; the content presentation module 303 superimposes and presents the augmented reality content based on the pose information.
In some embodiments, the augmented reality content request further includes license verification information (e.g., a device number, IP address or MAC address, user account ID, access password to be verified, etc. of the second user device) for determining whether the second user device is authorized to access the augmented reality content.
According to another aspect of the present application, there is provided a system for presenting augmented reality content, comprising any one of the first user devices, any one of the network devices, and any one of the second user devices described above. For ease of understanding, fig. 4 illustrates a method for presenting augmented reality content in which the various devices cooperate in some embodiments. The method comprises the following steps:
the first user device sends a preset feature image and augmented reality content corresponding to the preset feature image to the network device;
the network device performs feature extraction on the preset feature image to obtain corresponding preset feature information;
the network device matches the preset feature information based on the target feature information extracted from the target image, and if the matching is successful, sends the augmented reality content to the second user device; and
the second user device determines pose information of the second user device based on the target feature information, and superimposes and presents the augmented reality content based on the pose information.
As described above, when overlaying and presenting augmented reality content of the video type, the common practice is to process the preset feature image (or recognition map) and the video content separately, and to upload the preset feature image and the video-type augmented reality content respectively, which makes setting up the augmented reality content difficult and inconvenient for the user. In view of this, according to one aspect of the present application, there is provided a first user device for presenting augmented reality content. The device comprises a content transmission module 401. The content transmission module 401 transmits the augmented reality content to the network device, the augmented reality content including at least one target video. The network device obtains a preset feature image corresponding to the target video from one frame (called a feature frame) of the target video sent by the first user device, and the first user device does not need to perform further processing (such as designating the preset feature image). In this way, on one hand, the user can create and publish augmented reality content more conveniently: there is no need to additionally prepare and upload a preset feature image, which reduces the operation steps and saves time; on the other hand, since the preset feature image is selected from the video content itself, the preset feature image blends well with the video content.
In some embodiments, referring to fig. 11, the apparatus further comprises a preset image receiving module 402. The preset image receiving module 402 receives a preset feature image returned by the network device, where the preset feature image includes a feature frame in the target video, and the preset feature image is for the user to provide to other users (for example, via blogs, social software, or email) so that the other users can access the augmented reality content.
Accordingly, in accordance with another aspect of the present application, a network device for presenting augmented reality content is provided. Referring to fig. 12, the apparatus includes a content receiving module 501, a feature extraction module 502, and a content matching module 503. The content receiving module 501 receives augmented reality content transmitted by a first user device, the augmented reality content including at least one target video. The feature extraction module 502 determines a preset feature image corresponding to the augmented reality content based on a feature frame in the target video, where the preset feature image includes the feature frame. In some embodiments, the network device determines a feature frame in the target video that satisfies a preset feature information condition, and determines the preset feature image based on that feature frame. For example, the network device traverses all frames in the target video as candidate frames, or extracts a plurality of frames in the target video as candidate frames, evaluates the candidate frames, and takes a frame satisfying the preset feature information condition as the feature frame: for instance, it performs feature extraction on the candidate frames, takes the candidate frame with the most feature points as the feature frame, and generates the preset feature image based on that feature frame. The content matching module 503 matches preset feature information of the preset feature image based on the target feature information extracted from the target image, and if the matching is successful, sends the augmented reality content to the second user device. In some embodiments, the target feature information (e.g., feature points) is obtained by the second user device performing feature extraction based on the target image.
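The candidate-frame selection described above can be sketched as follows; the helper names and the injectable `count_features` detector are illustrative assumptions (a real implementation would run a feature detector such as FAST or ORB on each decoded frame).

```python
def pick_feature_frame(frames, count_features, min_features=20, stride=1):
    """Select the candidate frame with the most detected feature points.

    frames:         sequence of candidate frames (any representation).
    count_features: callable returning the number of feature points in a
                    frame (stand-in for a real feature detector).
    min_features:   preset feature information condition -- a frame must
                    have at least this many points to qualify.
    stride:         sample every `stride`-th frame instead of all frames.
    Returns the chosen feature frame, or None if no frame qualifies.
    """
    best_frame, best_count = None, -1
    for frame in frames[::stride]:  # traverse all frames or a sampled subset
        n = count_features(frame)
        if n > best_count:
            best_frame, best_count = frame, n
    return best_frame if best_count >= min_features else None
```

The returned frame would then serve as (or be cropped/scaled into) the preset feature image against which later target images are matched.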
According to another aspect of the present application, there is also provided a system for presenting augmented reality content, comprising any one of the first user devices, any one of the network devices, and any one of the second user devices described above. For ease of understanding, fig. 7 illustrates a method for presenting augmented reality content in which the various devices cooperate in some embodiments. The method comprises the following steps:
the first user device sends augmented reality content to the network device, wherein the augmented reality content comprises at least one target video;
the network device determines a preset feature image corresponding to the augmented reality content based on a feature frame in the target video, wherein the preset feature image comprises the feature frame; and
the network device matches preset feature information of the preset feature image based on target feature information extracted from the target image, and if the matching is successful, sends the augmented reality content to the second user device.
The present application also provides a computer readable storage medium storing computer code which, when executed, performs a method as claimed in any preceding claim.
The present application also provides a computer program product which, when executed by a computer device, performs a method as claimed in any preceding claim.
The present application also provides a computer device comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
FIG. 13 illustrates an exemplary system that can be used to implement various embodiments described herein.
As shown in fig. 13, in some embodiments, the system 600 can function as any one of the user devices or network devices of the various described embodiments. In some embodiments, system 600 can include one or more computer-readable media (e.g., system memory or NVM/storage 620) having instructions and one or more processors (e.g., processor(s) 605) coupled with the one or more computer-readable media and configured to execute the instructions to implement the modules to perform the actions described herein.
For one embodiment, the system control module 610 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 605 and/or any suitable device or component in communication with the system control module 610.
The system control module 610 may include a memory controller module 630 to provide an interface to the system memory 615. The memory controller module 630 may be a hardware module, a software module, and/or a firmware module.
The system memory 615 may be used to load and store data and/or instructions for the system 600, for example. For one embodiment, the system memory 615 may include any suitable volatile memory, such as, for example, a suitable DRAM. In some embodiments, the system memory 615 may comprise a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the system control module 610 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 620 and communication interface(s) 625.
For example, NVM/storage 620 may be used to store data and/or instructions. NVM/storage 620 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 620 may include storage resources that are physically part of the device on which system 600 is installed or which may be accessed by the device without being part of the device. For example, NVM/storage 620 may be accessed over a network via communication interface(s) 625.
Communication interface(s) 625 may provide an interface for system 600 to communicate over one or more networks and/or with any other suitable device. The system 600 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 605 may be packaged with logic of one or more controllers (e.g., memory controller module 630) of the system control module 610. For one embodiment, at least one of the processor(s) 605 may be packaged together with logic of one or more controllers of the system control module 610 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 605 may be integrated on the same die as the logic of the one or more controllers of the system control module 610. For one embodiment, at least one of the processor(s) 605 may be integrated on the same die as logic of one or more controllers of the system control module 610 to form a system on a chip (SoC).
In various embodiments, system 600 may be, but is not limited to being: a server, workstation, desktop computing device, or mobile computing device (e.g., laptop computing device, handheld computing device, tablet, netbook, etc.). In various embodiments, system 600 may have more or fewer components and/or different architectures. For example, in some embodiments, system 600 includes one or more cameras, keyboards, Liquid Crystal Display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, Application Specific Integrated Circuits (ASICs), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions as described above. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Furthermore, portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application by way of operation of the computer. Those skilled in the art will appreciate that the form of computer program instructions present in a computer readable medium includes, but is not limited to, source files, executable files, installation package files, etc., and accordingly, the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media includes media whereby a communication signal containing, for example, computer readable instructions, data structures, program modules, or other data, is transferred from one system to another. Communication media may include conductive transmission media such as electrical cables and wires (e.g., optical fibers, coaxial, etc.) and wireless (non-conductive transmission) media capable of transmitting energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied as a modulated data signal, for example, in a wireless medium, such as a carrier wave or similar mechanism, such as that embodied as part of spread spectrum technology. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); and nonvolatile memory such as flash memory, various read only memory (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memory (MRAM, feRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed computer-readable information/data that can be stored for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to operate a method and/or a solution according to the embodiments of the present application as described above.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.

Claims (34)

1. A method at a first user device for presenting augmented reality content, wherein the method comprises:
when the size of the preset feature image is not matched with that of the augmented reality content, the size of the augmented reality content corresponding to the preset feature image is adjusted based on the size of the preset feature image;
sending a preset feature image and augmented reality content corresponding to the preset feature image to a network device, wherein the augmented reality content is for overlaid presentation by other user devices; wherein the sending of the preset feature image and the augmented reality content corresponding to the preset feature image to the network device comprises: sending the preset feature image and the size-adjusted augmented reality content to the network device, wherein the size-adjusted augmented reality content is for overlaid presentation by the other user devices.
2. The method of claim 1, wherein the method further comprises:
determining a preset feature image based on a feature frame in the target video;
the sending, to a network device, of a preset feature image and the augmented reality content corresponding to the preset feature image, where the augmented reality content is for overlaid presentation by other user devices, comprises:
transmitting the preset feature image and the augmented reality content corresponding to the preset feature image to the network device, wherein the augmented reality content comprises the target video, and the augmented reality content is for overlaid presentation by other user devices.
3. The method of claim 2, wherein the determining the preset feature image based on the feature frames in the target video comprises:
and determining a characteristic frame in the target video based on the characteristic frame designating operation of the user, and determining a preset characteristic image based on the characteristic frame.
4. The method of claim 2, wherein the determining the preset feature image based on the feature frames in the target video comprises:
and determining a characteristic frame in the target video, and determining a preset characteristic image based on the characteristic frame, wherein the characteristic frame meets a preset characteristic information condition.
5. The method of claim 1, wherein the method further comprises:
and sending presentation permission information about the augmented reality content to the network device, wherein the presentation permission information is used for determining presentation rights of other user devices about the augmented reality content.
6. A method at a network device for presenting augmented reality content, wherein the method comprises:
receiving a preset feature image and augmented reality content corresponding to the preset feature image, which are sent by first user equipment, wherein when the preset feature image is not matched with the augmented reality content in size, the first user equipment adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image, sends the preset feature image and the augmented reality content with the adjusted size to network equipment, and the augmented reality content with the adjusted size is used for superposition and presentation of other user equipment;
performing feature extraction on the preset feature image to obtain corresponding preset feature information;
and matching the preset feature information based on target feature information extracted from a target image, and if the matching is successful, sending the augmented reality content to second user equipment.
7. The method of claim 6, wherein the method further comprises:
and adjusting the size of the augmented reality content based on the size of the preset feature image.
8. The method of claim 6, wherein the matching the preset feature information based on the target feature information extracted from the target image, if the matching is successful, sending the augmented reality content to a second user device, comprises:
receiving an augmented reality content request sent by second user equipment, wherein the augmented reality content request comprises target feature information, and the target feature information is extracted from a target image;
and matching the preset characteristic information based on the target characteristic information, and if the matching is successful, sending the augmented reality content to the second user equipment.
9. The method of claim 8, wherein the method further comprises:
receiving presentation permission information about the augmented reality content sent by the first user equipment;
the receiving an augmented reality content request sent by a second user device, wherein the augmented reality content request includes target feature information, the target feature information is extracted from a target image, and the method comprises the following steps:
receiving an augmented reality content request sent by second user equipment, wherein the augmented reality content request comprises target feature information and license verification information, and the target feature information is extracted from a target image;
the step of matching the preset feature information based on the target feature information, if the matching is successful, sending the augmented reality content to the second user equipment, includes:
and when the license verification information is matched with the presentation license information, the preset feature information is matched based on the target feature information, and if the matching is successful, the augmented reality content is sent to the second user equipment.
10. The method of claim 8, wherein the method further comprises:
receiving presentation permission information about the augmented reality content transmitted by the first user equipment, wherein the presentation permission information comprises the residual number of presentations;
the step of matching the preset feature information based on the target feature information, if the matching is successful, sending the augmented reality content to the second user equipment, includes:
when the residual presentation times are greater than zero, matching the preset feature information based on the target feature information;
and if the matching is successful, sending the augmented reality content to the second user equipment, and updating the residual presentation times.
11. A method for presenting augmented reality content at a second user device side, wherein the method comprises:
extracting features based on the target image to obtain corresponding target feature information;
sending an augmented reality content request to a network device, wherein the augmented reality content request comprises the target feature information, and receiving augmented reality content returned by the network device, and when a preset feature image is not matched with the augmented reality content in size, the first user device adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image, and sends the preset feature image and the augmented reality content with the adjusted size to the network device, wherein the augmented reality content with the adjusted size is used for superposition and presentation by other user devices;
and determining pose information of the second user equipment based on the target feature information, and superposing and presenting the augmented reality content based on the pose information.
12. A method for presenting augmented reality content at a second user device side, wherein the method comprises:
sending an augmented reality content request to a network device, and receiving the augmented reality content and pose information about the second user equipment, wherein the augmented reality content request comprises a target image, and wherein, when the size of a preset feature image does not match the size of the augmented reality content, the first user equipment adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image and sends the preset feature image and the size-adjusted augmented reality content to the network device, the size-adjusted augmented reality content being used for overlaid presentation by other user devices;
and overlaying and presenting the augmented reality content based on the pose information.
13. The method of claim 11 or 12, wherein the augmented reality content request further includes license verification information.
14. A method at a first user device for presenting augmented reality content, wherein the method comprises:
transmitting augmented reality content to a network device, the augmented reality content comprising at least one target video;
and receiving a preset feature image returned by the network device, wherein the preset feature image comprises a feature frame of the target video, and wherein, when the size of the preset feature image does not match the size of the augmented reality content, the network device adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image.
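Claims 14 and 15 have the network device resize the augmented reality content when its size does not match the preset feature image. The claims leave the resizing policy open; an aspect-preserving fit is one plausible choice, sketched below as an illustrative, non-claim example (the function name and rounding behavior are assumptions):

```python
def adjust_content_size(content_size, feature_size):
    """Scale AR content so it matches the preset feature image's size,
    preserving the content's aspect ratio (fit inside the feature image)."""
    cw, ch = content_size
    fw, fh = feature_size
    if (cw, ch) == (fw, fh):
        return content_size  # sizes already match; no adjustment needed
    scale = min(fw / cw, fh / ch)
    return (round(cw * scale), round(ch * scale))


# Same aspect ratio: content shrinks exactly onto the feature image.
assert adjust_content_size((1920, 1080), (640, 360)) == (640, 360)
# Different aspect ratio: the smaller scale factor wins, so the content
# fits inside the feature image without cropping.
assert adjust_content_size((800, 800), (400, 200)) == (200, 200)
# Matching sizes pass through unchanged.
assert adjust_content_size((640, 360), (640, 360)) == (640, 360)
```

Resizing once at upload time, as the claims describe, spares every presenting device from rescaling the content itself.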
15. A method at a network device for presenting augmented reality content, wherein the method comprises:
receiving augmented reality content transmitted by a first user equipment, wherein the augmented reality content comprises at least one target video;
determining a preset feature image corresponding to the augmented reality content based on a feature frame of the target video, wherein the preset feature image comprises the feature frame, and, when the size of the preset feature image does not match the size of the augmented reality content, adjusting the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image;
and matching preset feature information of the preset feature image against target feature information extracted from a target image, and if the matching succeeds, sending the augmented reality content to a second user equipment.
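The matching step in claim 15 compares preset feature information against target feature information extracted from the captured image. One common realization uses binary feature descriptors (ORB-style) compared by Hamming distance; the toy sketch below illustrates that idea only, and is not the claimed method. All thresholds and names are hypothetical, and the descriptors are small integers for readability:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_features(preset, target, max_dist=10, min_matches=3):
    """Declare a match if enough target descriptors lie close (in Hamming
    distance) to some preset descriptor."""
    hits = sum(
        1 for t in target
        if any(hamming(t, p) <= max_dist for p in preset)
    )
    return hits >= min_matches


preset = [0b10110010, 0b01101100, 0b11110000, 0b00001111]
near = [d ^ 0b1 for d in preset[:3]]        # three descriptors, one bit off
assert match_features(preset, near)          # matching succeeds
assert not match_features(preset, [0xFFFF] * 3, max_dist=2)  # no match
```

On success the network device would send the stored augmented reality content to the requesting second user equipment, as the claim states.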
16. A first user device for presenting augmented reality content, wherein the first user device comprises:
a size adjustment module, configured to adjust the size of the augmented reality content corresponding to a preset feature image based on the size of the preset feature image when the size of the preset feature image does not match the size of the augmented reality content;
and a content sending module, configured to send the preset feature image and the augmented reality content corresponding to the preset feature image to a network device, wherein the augmented reality content is used for overlaid presentation by other user devices, and wherein the content sending module is configured to send the preset feature image and the size-adjusted augmented reality content to the network device, the size-adjusted augmented reality content being used for overlaid presentation by other user devices.
17. The apparatus of claim 16, wherein the apparatus further comprises:
an image determining module, configured to determine the preset feature image based on a feature frame in a target video;
wherein the content sending module is configured to:
send the preset feature image and the augmented reality content corresponding to the preset feature image to the network device, wherein the augmented reality content comprises the target video and is used for overlaid presentation by other user devices.
18. The device of claim 17, wherein the image determining module is configured to:
determine a feature frame in the target video based on a feature frame designation operation by a user, and determine the preset feature image based on the feature frame.
19. The device of claim 17, wherein the image determining module is configured to:
determine a feature frame in the target video, and determine the preset feature image based on the feature frame, wherein the feature frame satisfies a preset feature information condition.
20. A network device for presenting augmented reality content, wherein the network device comprises:
a content receiving module, configured to receive a preset feature image sent by a first user equipment and augmented reality content corresponding to the preset feature image, wherein, when the size of the preset feature image does not match the size of the augmented reality content, the first user equipment adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image and sends the preset feature image and the size-adjusted augmented reality content to the network device, the size-adjusted augmented reality content being used for overlaid presentation by other user devices;
a feature extraction module, configured to perform feature extraction on the preset feature image to obtain corresponding preset feature information;
and a content matching module, configured to match the preset feature information based on target feature information extracted from a target image, and if the matching succeeds, send the augmented reality content to a second user equipment.
21. The apparatus of claim 20, wherein the apparatus further comprises:
and a size adjustment module, configured to adjust the size of the augmented reality content based on the size of the preset feature image.
22. The device of claim 20, wherein the content matching module comprises:
a request receiving unit, configured to receive an augmented reality content request sent by the second user equipment, wherein the augmented reality content request comprises the target feature information, the target feature information being extracted from the target image;
and a content matching unit, configured to match the preset feature information based on the target feature information, and if the matching succeeds, send the augmented reality content to the second user equipment.
23. The apparatus of claim 22, wherein the apparatus further comprises:
a license receiving module, configured to receive presentation license information about the augmented reality content sent by the first user equipment;
wherein the request receiving unit is configured to:
receive an augmented reality content request sent by the second user equipment, wherein the augmented reality content request comprises the target feature information and license verification information, the target feature information being extracted from the target image;
and the content matching unit is configured to:
when the license verification information matches the presentation license information, match the preset feature information based on the target feature information, and if the matching succeeds, send the augmented reality content to the second user equipment.
24. The apparatus of claim 22, wherein the apparatus further comprises:
a license receiving module, configured to receive presentation license information about the augmented reality content sent by the first user equipment, wherein the presentation license information comprises a remaining number of presentations;
wherein the content matching unit is configured to:
when the remaining number of presentations is greater than zero, match the preset feature information based on the target feature information;
and if the matching succeeds, send the augmented reality content to the second user equipment and update the remaining number of presentations.
25. A second user device for presenting augmented reality content, wherein the second user device comprises:
a target feature extraction module, configured to perform feature extraction on a target image to obtain corresponding target feature information;
a content acquisition module, configured to send an augmented reality content request to a network device, wherein the augmented reality content request comprises the target feature information, and receive augmented reality content returned by the network device, wherein, when the size of a preset feature image does not match the size of the augmented reality content, a first user equipment adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image and sends the preset feature image and the size-adjusted augmented reality content to the network device, the size-adjusted augmented reality content being used for overlaid presentation by other user devices;
and a content presentation module, configured to determine pose information of the second user equipment based on the target feature information, and overlay and present the augmented reality content based on the pose information.
26. A second user device for presenting augmented reality content, wherein the second user device comprises:
a content acquisition module, configured to send an augmented reality content request to a network device, and receive augmented reality content returned by the network device and pose information about the second user equipment, wherein the augmented reality content request comprises a target image, and wherein, when the size of a preset feature image does not match the size of the augmented reality content, the first user equipment adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image and sends the preset feature image and the size-adjusted augmented reality content to the network device, the size-adjusted augmented reality content being used for overlaid presentation by other user devices;
and a content presentation module, configured to overlay and present the augmented reality content based on the pose information.
27. The device of claim 25 or 26, wherein the augmented reality content request further comprises license verification information.
28. A first user device for presenting augmented reality content, wherein the first user device comprises:
a content sending module, configured to send augmented reality content to a network device, the augmented reality content comprising at least one target video;
wherein the first user device is configured to receive a preset feature image returned by the network device, wherein the preset feature image comprises a feature frame of the target video, and wherein, when the size of the preset feature image does not match the size of the augmented reality content, the network device adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image.
29. A network device for presenting augmented reality content, wherein the network device comprises:
a content receiving module, configured to receive augmented reality content sent by a first user equipment, wherein the augmented reality content comprises at least one target video;
a feature extraction module, configured to determine a preset feature image corresponding to the augmented reality content based on a feature frame of the target video and to extract preset feature information from the preset feature image, wherein the preset feature image comprises the feature frame, and wherein, when the size of the preset feature image does not match the size of the augmented reality content, the size of the augmented reality content corresponding to the preset feature image is adjusted based on the size of the preset feature image;
and a content matching module, configured to match the preset feature information of the preset feature image against target feature information extracted from a target image, and if the matching succeeds, send the augmented reality content to a second user equipment.
30. A method for presenting augmented reality content, wherein the method comprises:
when the size of a preset feature image does not match the size of augmented reality content, a first user equipment adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image, and sends the preset feature image and the size-adjusted augmented reality content to a network device, the size-adjusted augmented reality content being used for overlaid presentation by other user devices;
the network device performs feature extraction on the preset feature image to obtain corresponding preset feature information;
the network device matches the preset feature information based on target feature information extracted from a target image, and if the matching succeeds, sends the augmented reality content to a second user equipment;
and the second user equipment determines pose information of the second user equipment based on the target feature information, and overlays and presents the augmented reality content based on the pose information.
31. A method for presenting augmented reality content, wherein the method comprises:
a first user equipment sends augmented reality content to a network device, wherein the augmented reality content comprises at least one target video;
the network device determines a preset feature image corresponding to the augmented reality content based on a feature frame of the target video, wherein the preset feature image comprises the feature frame, and, when the size of the preset feature image does not match the size of the augmented reality content, adjusts the size of the augmented reality content corresponding to the preset feature image based on the size of the preset feature image;
and the network device matches preset feature information of the preset feature image against target feature information extracted from a target image, and if the matching succeeds, sends the augmented reality content to a second user equipment.
32. A system for presenting augmented reality content, wherein the system comprises a first user device according to any one of claims 16 to 19 or claim 28, a network device according to any one of claims 20 to 24 or claim 29, and a second user device according to any one of claims 25 to 27.
33. An apparatus for presenting augmented reality content, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the operations of the method according to any one of claims 1 to 15.
34. A computer readable medium comprising instructions that, when executed, cause a system to perform the operations of the method of any one of claims 1 to 15.
CN201811542730.9A 2018-08-28 2018-12-17 Method and device for presenting augmented reality content Active CN109636922B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810987965 2018-08-28
CN2018109879652 2018-08-28

Publications (2)

Publication Number Publication Date
CN109636922A CN109636922A (en) 2019-04-16
CN109636922B true CN109636922B (en) 2023-07-11

Family

ID=66074747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811542730.9A Active CN109636922B (en) 2018-08-28 2018-12-17 Method and device for presenting augmented reality content

Country Status (1)

Country Link
CN (1) CN109636922B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640166B (en) * 2020-06-08 2024-03-26 上海商汤智能科技有限公司 AR group photo method, device, computer equipment and storage medium
CN114710472A (en) * 2020-12-16 2022-07-05 中国移动通信有限公司研究院 AR video call processing method and device and communication equipment

Citations (7)

Publication number Priority date Publication date Assignee Title
CN102739872A (en) * 2012-07-13 2012-10-17 苏州梦想人软件科技有限公司 Mobile terminal, and augmented reality method used for mobile terminal
CN103001932A (en) * 2011-09-08 2013-03-27 北京智慧风云科技有限公司 Method and server for user authentication
CN105323252A (en) * 2015-11-16 2016-02-10 上海璟世数字科技有限公司 Method and system for realizing interaction based on augmented reality technology and terminal
CN106534072A (en) * 2016-10-13 2017-03-22 腾讯科技(深圳)有限公司 User information authorization method, apparatus, equipment and system
WO2017088777A1 (en) * 2015-11-27 2017-06-01 亮风台(上海)信息科技有限公司 Method, device and system for generating ar application and presenting ar instance
CN107221346A (en) * 2017-05-25 2017-09-29 亮风台(上海)信息科技有限公司 A kind of method and apparatus for the identification picture for being used to determine AR videos
CN108446026A (en) * 2018-03-26 2018-08-24 京东方科技集团股份有限公司 A kind of bootstrap technique, guiding equipment and a kind of medium based on augmented reality


Also Published As

Publication number Publication date
CN109636922A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
EP3989089A1 (en) Face image transmission method and apparatus, numerical value transfer method and apparatus, and electronic device
EP4394554A1 (en) Method for determining and presenting target mark information and apparatus
CN105981368B (en) Picture composition and position guidance in an imaging device
US10102648B1 (en) Browser/web apps access to secure surface
JP2021520017A (en) Graphic code recognition method and device, as well as terminals and programs
CN109656363B (en) Method and equipment for setting enhanced interactive content
WO2021139382A1 (en) Face image processing method and apparatus, readable medium, and electronic device
CN103729120A (en) Method for generating thumbnail image and electronic device thereof
EP3448020B1 (en) Method and device for three-dimensional presentation of surveillance video
US20180253824A1 (en) Picture processing method and apparatus, and storage medium
CN109584377B (en) Method and device for presenting augmented reality content
US10504289B2 (en) Method and apparatus for securely displaying private information using an augmented reality headset
CN109636922B (en) Method and device for presenting augmented reality content
CN110780955A (en) Method and equipment for processing emoticon message
KR102164686B1 (en) Image processing method and apparatus of tile images
CN113965665A (en) Method and equipment for determining virtual live broadcast image
US20140022396A1 (en) Systems and Methods for Live View Photo Layer in Digital Imaging Applications
CN112818719A (en) Method and device for identifying two-dimensional code
CN109669541B (en) Method and equipment for configuring augmented reality content
US11734397B2 (en) Hallmark-based image capture prevention
CN110635995A (en) Method, device and system for realizing interaction between users
CN109931923B (en) Navigation guidance diagram generation method and device
CN110619615A (en) Method and apparatus for processing image
CN111796754B (en) Method and device for providing electronic books
CN114143568A (en) Method and equipment for determining augmented reality live image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 201210 7th Floor, No. 1, Lane 5005, Shenjiang Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Patentee before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.
