CN112991555B - Data display method, device, equipment and storage medium - Google Patents

Data display method, device, equipment and storage medium

Info

Publication number
CN112991555B
Authority
CN
China
Prior art keywords: preset, dimensional, target, special effect, determining
Prior art date
Legal status: Active
Application number
CN202110340475.5A
Other languages
Chinese (zh)
Other versions
CN112991555A (en)
Inventor
刘旭 (Liu Xu)
栾青 (Luan Qing)
Current Assignee: Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202110340475.5A
Publication of CN112991555A
Priority to PCT/CN2021/102537 (WO2022205634A1)
Application granted
Publication of CN112991555B


Classifications

    • G06T 19/006 Mixed reality (G Physics; G06 Computing, calculating or counting; G06T Image data processing or generation, in general; G06T 19/00 Manipulating 3D models or images for computer graphics)
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a data presentation method, apparatus, device and storage medium, wherein the method comprises: acquiring a real scene image which is acquired by an augmented reality AR device and contains a plurality of objects; detecting a target object among a plurality of objects; the target object comprises a preset three-dimensional stereo object and a preset two-dimensional plane object which belong to a preset object library; determining a first AR special effect matched with the detected target object; and displaying the first AR special effect on an AR interaction interface of the AR device.

Description

Data display method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to a data display method, apparatus, device, and storage medium.
Background
Augmented Reality (AR) technology superimposes simulated entity information (visual information, sound, touch, etc.) onto the real world, so that the real environment and virtual objects are presented on the same screen or in the same space in real time. In recent years, AR devices have become more widely used, for example, in AR cultural and creative products. An AR cultural and creative product scans a corresponding object through a scanning device and then displays virtual content corresponding to that object on the scanning device. An existing AR cultural and creative product can identify only a plane of a single object and trigger the display of virtual content for that plane, and the virtual content is usually fixed, such as an introduction of the object. Therefore, existing AR cultural and creative products have a single function and application scene and cannot meet the increasingly rich requirements of users.
Disclosure of Invention
The embodiment of the disclosure at least provides a data display method, a device, equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a data display method, including: acquiring a real scene image which is acquired by an augmented reality AR device and contains a plurality of objects; detecting a target object in the plurality of objects, wherein the target object comprises a preset three-dimensional stereo object and a preset two-dimensional plane object which belong to a preset object library; determining a first AR special effect matched with the detected target object; and displaying the matched first AR special effect on an AR interactive interface of the AR equipment.
In the embodiment of the disclosure, a preset three-dimensional stereo object and a preset two-dimensional plane object are detected among the plurality of objects in the real scene image, and the matched first AR special effect is determined from the combination relationship between the two and displayed on the AR interaction interface. This enriches the trigger conditions of AR special effects, adds AR interaction modes, and expands the interaction effects of AR cultural and creative products, thereby meeting users' diversified requirements.
In an optional embodiment, the detecting a target object in the plurality of objects includes: in the case that it is detected that at least one initial two-dimensional planar object and at least one initial three-dimensional stereoscopic object are included in the plurality of objects, determining a preset two-dimensional planar object belonging to a preset object library in the at least one initial two-dimensional planar object, and determining a preset three-dimensional stereoscopic object belonging to the preset object library in the at least one initial three-dimensional stereoscopic object; and determining the determined preset two-dimensional plane object and the preset three-dimensional stereo object as the target object.
In the above embodiment, execution of the subsequent steps can be stopped in time when the plurality of objects do not simultaneously include a two-dimensional plane object and a three-dimensional stereo object, saving unnecessary memory consumption and increasing the data display speed; and when the plurality of objects simultaneously include the preset two-dimensional plane object and the preset three-dimensional stereo object, the corresponding AR special effect is triggered and displayed, which expands the interaction effects of AR cultural and creative products and meets users' diversified requirements.
In an alternative embodiment, the determining, in the at least one initial three-dimensional stereo object, a preset three-dimensional stereo object belonging to a preset object library includes: determining first object feature information of each of the initial three-dimensional stereo objects, wherein the first object feature information includes: the object feature points of the initial three-dimensional stereo object and the feature description information of the object feature points; determining preset object characteristic information matched with the first object characteristic information in preset object characteristic information of each preset three-dimensional stereo object contained in the preset object library; and determining the object corresponding to the matched preset object characteristic information as the preset three-dimensional object.
In the above embodiment, by comparing the object feature information of each initial three-dimensional object with the preset object feature information in the preset object library, the preset three-dimensional object is determined in the at least one three-dimensional object, and the object belonging to the preset object library can be accurately and quickly identified in the at least one initial three-dimensional object.
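As an illustrative sketch only (the disclosure does not specify a descriptor type, distance metric or thresholds; the toy vectors, names and limits below are assumptions), the comparison of first object feature information against the preset object feature information could look like:

```python
# Hypothetical sketch of matching an initial three-dimensional object's
# feature information against the preset object library. Descriptors are toy
# numeric vectors; a real system would use image descriptors such as ORB or
# SIFT, and max_dist/min_matches are arbitrary illustrative values.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_preset_object(first_features, preset_library,
                        max_dist=0.5, min_matches=2):
    """first_features: descriptor vectors of one initial 3-D object.
    preset_library: {object_id: [preset descriptor vectors]}.
    Returns the best-matching preset object id, or None."""
    best_id, best_count = None, 0
    for obj_id, preset_descs in preset_library.items():
        # Count feature points whose descriptor has a close preset neighbour.
        count = sum(
            1 for d in first_features
            if any(euclidean(d, p) <= max_dist for p in preset_descs)
        )
        if count > best_count:
            best_id, best_count = obj_id, count
    return best_id if best_count >= min_matches else None

library = {
    "figure_X": [(0.0, 1.0), (1.0, 0.0), (0.5, 0.5)],
    "statue_Y": [(9.0, 9.0), (8.0, 8.0)],
}
print(match_preset_object([(0.1, 1.0), (1.0, 0.1)], library))  # figure_X
```

An object whose descriptors have no close preset neighbours (fewer than `min_matches` hits) is reported as not belonging to the library.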
In an optional embodiment, before detecting the target object in the plurality of objects, the method further comprises: acquiring images of the preset three-dimensional object at a plurality of preset acquisition points according to a preset orientation through a camera device of the AR equipment to obtain a plurality of target images; extracting image feature information of each target image, wherein the image feature information comprises: feature points of the target image and feature description information of the feature points; and storing the image characteristic information of each target image as the object characteristic information of the preset three-dimensional stereo object in the preset object library.
In the above embodiment, the three-dimensional object to be entered is acquired at the plurality of preset acquisition points and according to the preset orientation, so that the preset object characteristic information of the three-dimensional object is determined and entered according to the plurality of acquired target images, the characteristic information of any one three-dimensional object can be more comprehensively entered, and the identification accuracy of the three-dimensional object in the plurality of objects is improved.
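The enrollment flow above can be sketched as follows; `capture` and `extract_features` are hypothetical stand-ins for the AR device's camera pipeline and feature extractor, and the acquisition point names are illustrative assumptions:

```python
# Illustrative enrollment of a three-dimensional object into the preset
# object library: capture one target image at each preset acquisition point
# and orientation, extract its feature points and descriptors, and store the
# per-view feature information under the object's entry.

def enroll_3d_object(object_id, capture, extract_features, library,
                     acquisition_points=("front", "left", "right", "back")):
    views = {}
    for point in acquisition_points:
        image = capture(point)                  # image at this viewpoint
        views[point] = extract_features(image)  # feature pts + descriptors
    library[object_id] = views                  # per-view feature information
    return views

# Toy usage with fake camera/extractor callables:
lib = {}
enroll_3d_object(
    "figure_X",
    capture=lambda p: f"image@{p}",
    extract_features=lambda img: {"points": [img], "descriptors": [len(img)]},
    library=lib,
)
print(sorted(lib["figure_X"]))  # ['back', 'front', 'left', 'right']
```

Storing one feature entry per viewpoint is what lets the later matching step recognize the object from any of the enrolled angles.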
In an alternative embodiment, the determining, in the at least one initial two-dimensional planar object, a preset two-dimensional planar object belonging to a preset object library includes: extracting second object feature information of each initial two-dimensional plane object, wherein the second object feature information comprises: the object characteristic points of the initial two-dimensional plane object and the characteristic description information of the object characteristic points; determining at least one alternative two-dimensional plane object in the preset object library according to the second object characteristic information; and determining the preset two-dimensional plane object in the at least one initial two-dimensional plane object according to the similarity between each initial two-dimensional plane object and the corresponding at least one candidate two-dimensional plane object.
In the above embodiment, the candidate two-dimensional plane object is determined in a characteristic comparison manner, and then whether the initial two-dimensional plane is the preset two-dimensional plane object is determined in a similarity calculation manner, so that whether the initial two-dimensional plane is an object in the preset object library can be determined more accurately, and the robustness of the data display method is improved.
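A minimal sketch of this two-stage identification, under stated assumptions (cosine similarity, the `top_k` shortlist size and the 0.9 threshold are illustrative choices, not taken from the disclosure):

```python
# Hypothetical two-stage identification of a preset two-dimensional planar
# object: (1) shortlist alternative objects by descriptor comparison, then
# (2) confirm the best alternative with a similarity score.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def identify_planar_object(second_features, preset_library,
                           top_k=2, threshold=0.9):
    """second_features: descriptor vector of one initial 2-D planar object.
    preset_library: {object_id: preset descriptor vector}."""
    # Stage 1: rank library entries by feature comparison, keep top_k.
    ranked = sorted(preset_library.items(),
                    key=lambda kv: cosine(second_features, kv[1]),
                    reverse=True)
    candidates = ranked[:top_k]
    # Stage 2: accept the best alternative only if it clears the bar.
    best_id, best_desc = candidates[0]
    if cosine(second_features, best_desc) >= threshold:
        return best_id
    return None

library = {"card_A": (1.0, 0.0, 0.0), "poster_B": (0.0, 1.0, 0.0)}
print(identify_planar_object((0.99, 0.05, 0.0), library))  # card_A
print(identify_planar_object((0.5, 0.5, 0.7), library))    # None
```

The second stage is what gives the method its robustness: a plane that merely resembles a library entry at the shortlist stage is rejected if its similarity stays below the threshold.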
In an optional embodiment, the determining a first AR special effect matching the detected target object includes: determining object identification information of the target object; searching an AR special effect having an association relation with the object identification information in a target association file; the target association file is used for representing the corresponding relation between a preset object combination and the AR special effect; and determining the AR special effect with the incidence relation as the matched first AR special effect.
In this embodiment, different AR special effects can be triggered for different target objects, which enriches the trigger conditions of AR special effects, adds AR interaction modes, and expands the interaction modes of AR cultural and creative products, thereby meeting users' diversified requirements.
In an optional implementation manner, the finding, in the target association file, the AR special effect having an association relationship with the object identification information includes: determining object identification information of a preset three-dimensional object in the target object based on the object identification information to obtain first object identification information; determining object identification information of a preset two-dimensional plane object in the target object based on the object identification information to obtain second object identification information; arranging and combining the first object identification information and the second object identification information to obtain a plurality of object combinations; and searching an AR special effect matched with one or more object combinations in the target association file to serve as the first AR special effect.
In the above embodiment, the first object identification information and the second object identification information are arranged and combined, and then the AR special effect matched with the first object identification information and the second object identification information is searched according to the determined combination of the plurality of objects to be displayed, so that the data processing amount can be reduced, the matching process of the AR special effect is simplified, and the matching efficiency of the AR special effect is improved.
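The arrange-and-combine lookup can be sketched as below; modelling the target association file as a dict keyed by frozensets, with illustrative identifiers and effect names, is an assumption for the sketch:

```python
# Sketch of the combination-based lookup: pair each detected 3-D object
# identifier (first object identification information) with each 2-D object
# identifier (second object identification information) and look each
# combination up in the target association file.

from itertools import product

def find_first_effects(first_ids, second_ids, association):
    """first_ids: identifiers of preset 3-D objects in the target object.
    second_ids: identifiers of preset 2-D planar objects.
    association: {frozenset({id_3d, id_2d}): effect_name}."""
    effects = []
    for combo in product(first_ids, second_ids):   # arrange and combine
        effect = association.get(frozenset(combo))
        if effect is not None:                     # matched object combination
            effects.append(effect)
    return effects

association = {frozenset({"figure_X", "card_A"}): "fireworks",
               frozenset({"figure_X", "card_B"}): "snowfall"}
print(find_first_effects(["figure_X"], ["card_A", "card_C"], association))
# ['fireworks']
```

Only the combinations actually formed from detected identifiers are looked up, which is why the data processing amount stays small compared with scanning the whole association file.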
In an optional implementation manner, the searching, in a target association file, for an AR special effect having an association relationship with the object identification information further includes: determining, in the target association file, candidate object combinations containing the object identification information; determining whether a target object combination is contained in the candidate object combinations, wherein the target object combination contains no objects corresponding to object identification information other than the determined object identification information; and if it is determined that a target object combination is contained, determining the AR special effect corresponding to the target object combination as the first AR special effect.
In the above embodiment, the target object combination is determined in the candidate object combination by determining the candidate object combination, so as to display the AR special effect corresponding to the target object combination, and the first AR special effect can be quickly found from the target association file, thereby improving the matching efficiency of the AR special effect and saving the matching time of the AR special effect.
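This candidate-then-target filtering can be sketched as follows (again modelling the association file as a dict of frozensets; all identifiers and names are illustrative):

```python
# Sketch of the alternative/target combination filtering: first collect the
# candidate object combinations in the association file that contain some of
# the detected object identification information, then keep only a target
# combination that contains no identifiers beyond those detected.

def find_effect_by_combination(detected_ids, association):
    """association: {frozenset(object_ids): effect_name}."""
    detected = set(detected_ids)
    # Candidate combinations: entries sharing at least one detected id.
    candidates = [(combo, fx) for combo, fx in association.items()
                  if combo & detected]
    # Target combination: a subset of the detected identifiers only.
    for combo, fx in candidates:
        if combo <= detected:
            return fx
    return None

association = {
    frozenset({"figure_X", "card_A", "card_B"}): "parade",  # needs card_B too
    frozenset({"figure_X", "card_A"}): "fireworks",
}
print(find_effect_by_combination({"figure_X", "card_A"}, association))
# fireworks
```

The "parade" entry is a candidate (it shares identifiers with the detection) but not a target combination, because it requires `card_B`, which was not detected.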
In an optional embodiment, the method further comprises: in the case that a change to the plurality of objects is detected, determining the changed target object, wherein the change comprises at least one of: addition, deletion and replacement; determining, according to the changed target object, a second AR special effect matched with the plurality of updated objects in the case that the real scene image satisfies the display condition of the AR special effect; and displaying the matched second AR special effect on the AR interaction interface of the AR device.
In the embodiment, the interaction between the AR equipment and the user can be realized, so that the special effect display mode of the AR equipment is enriched, and the interaction experience of the AR special effect is improved.
In an alternative embodiment, the first AR effect comprises a plurality of AR effects; the method further comprises the following steps: acquiring scene type information of the real scene image; the determining a first AR special effect that matches the detected target object includes: and determining a target AR special effect matched with the scene type information in the plurality of AR special effects, and displaying the target AR special effect on the AR interactive interface.
In this embodiment, on the premise of providing multiple application scenes for the user, an AR special effect matched with the corresponding application scene can be displayed; compared with the single display mode of AR special effects in existing AR devices, this meets the user's diversified requirements.
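Selecting the target AR special effect by scene type can be sketched minimally (the scene tags and effect names are illustrative assumptions):

```python
# Sketch of choosing the target AR special effect when the first AR special
# effect comprises several effects, each tagged with an application scene.

def pick_target_effect(first_effects, scene_type):
    """first_effects: list of (scene_tag, effect_name) pairs.
    Returns the effect matching the current scene type, or None."""
    for tag, name in first_effects:
        if tag == scene_type:
            return name
    return None

effects = [("game", "treasure_chest"), ("education", "exploded_view")]
print(pick_target_effect(effects, "education"))  # exploded_view
```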
In a second aspect, an embodiment of the present disclosure further provides a data display apparatus, including: the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a real scene image which is acquired by an augmented reality AR device and contains a plurality of objects; the detection unit is used for detecting a target object in the plurality of objects, wherein the target object comprises a preset three-dimensional stereo object and a preset two-dimensional plane object which belong to a preset object library; a determining unit configured to determine a first AR special effect matching the detected target object; and the display unit is used for displaying the matched first AR special effect on an AR interactive interface of the AR equipment.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, this disclosed embodiment also provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps in the first aspect or any one of the possible implementation manners of the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for use in the embodiments will be briefly described below. The drawings herein are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope, since those skilled in the art can derive other related drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a data presentation method provided by an embodiment of the present disclosure;
FIG. 2 is a display diagram of a setting interface of an application scenario provided by an embodiment of the present disclosure;
FIG. 3 is a display diagram of a setting interface of another application scenario provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an AR special effect provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a data presentation device provided by an embodiment of the present disclosure;
fig. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without creative effort, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an association relationship, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality of items or any combination of at least two of them; for example, including at least one of A, B and C may mean including any one or more elements selected from the set consisting of A, B and C.
Research shows that an existing AR cultural and creative product can identify only a single object and trigger the display of that object's virtual content, which is usually fixed, such as introduction information about the object. Therefore, existing AR cultural and creative products have a single function and application scene, and cannot meet the increasingly rich requirements of users.
Based on the above research, the present disclosure provides a data display method. In the embodiment of the disclosure, after the real scene image including the plurality of objects is acquired, type detection may be performed on the plurality of objects, and whether the plurality of objects satisfy the trigger condition of the AR special effect is then determined according to the type detection result. In the case that the plurality of objects include a three-dimensional stereo object and a two-dimensional plane object which are determined according to the type detection result and belong to a preset object library, an AR special effect (namely a first AR special effect) matched with the detected three-dimensional stereo object and two-dimensional plane object is determined, and the first AR special effect is displayed on the AR interaction interface of the AR device. By detecting the preset three-dimensional stereo object and the preset two-dimensional plane object among the plurality of objects of the real scene image, and determining and displaying the matched first AR special effect on the AR interaction interface according to the combination relationship between the two, the trigger conditions of AR special effects can be enriched, AR interaction modes added, and the interaction effects of AR cultural and creative products expanded, thereby meeting users' diversified requirements.
In order to facilitate understanding of the present embodiment, a data presentation method disclosed in the embodiments of the present disclosure is first described in detail, and an execution subject of the data presentation method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability. In some possible implementations, the data presentation method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a data display method provided in the embodiment of the present disclosure is shown, where the method includes steps S101 to S105, where:
s101: and acquiring a real scene image which is acquired by the AR equipment and contains a plurality of objects.
In the embodiment of the present disclosure, the augmented reality AR apparatus is a computer apparatus having an AR function and a camera function. The AR equipment can acquire the real scene image in the area where the AR equipment is located through the built-in camera device.
For example, the augmented reality AR device is an intelligent mobile terminal, and at this time, a real scene image may be acquired by a camera device of the intelligent mobile terminal; for another example, if the augmented reality AR device is a smart watch, the image of the real scene may be captured by a camera of the smart watch.
S103: and detecting a target object in the plurality of objects, wherein the target object comprises a preset three-dimensional stereo object and a preset two-dimensional plane object which belong to a preset object library.
In the embodiment of the present disclosure, an object library (i.e., a preset object library) is preset, where the preset object library includes object feature information of a preset three-dimensional stereo object and object feature information of a preset two-dimensional plane object. Wherein the object feature information includes: feature points of the object and feature description information (i.e., descriptors) of the feature points.
Here, the feature description information of the feature point is an image feature for characterizing the feature point at a corresponding position in the corresponding image.
The two-dimensional planar object may be understood as a two-dimensional image, and in this case, the object feature information of the two-dimensional planar object is a feature point of an object included in the two-dimensional image and an image feature corresponding to the feature point in the two-dimensional image.
The object characteristic information of the three-dimensional stereoscopic object may be information describing an image characteristic of each image obtained after image acquisition of the three-dimensional stereoscopic object at a plurality of angles. For the image acquired at each angle, the object feature information is the feature points of the object included in the image and the image features corresponding to the feature points in the image.
In the embodiment of the present disclosure, after the real scene image is acquired, the type information of the multiple objects may be detected, and then it is determined whether the real scene image acquired at the current time meets the AR trigger condition according to the type information, where if it is determined that the AR trigger condition is met, step S105 is executed. It should be noted that the above type information is used to represent whether the object is a three-dimensional stereo object or a two-dimensional plane object.
Specifically, a three-dimensional solid object and a two-dimensional plane object may be detected from among a plurality of objects, and in a case where it is detected that the plurality of objects simultaneously include the three-dimensional solid object and the two-dimensional plane object, whether the three-dimensional solid object and the two-dimensional plane object are objects in a preset object library may be continuously determined. And if so, determining that the acquired real scene image meets the AR triggering condition.
It should be noted that, in the embodiment of the present disclosure, the target objects are three-dimensional stereo objects (i.e., preset three-dimensional stereo objects) and two-dimensional plane objects (i.e., preset two-dimensional plane objects) belonging to a preset object library, which are detected from among a plurality of objects.
S105: determining a first AR special effect matched with the detected target object; and displaying the matched first AR special effect on an AR interactive interface of the AR equipment.
In the embodiment of the present disclosure, after it is determined that the AR trigger condition is satisfied, a first AR special effect matched with the target object may be determined, where the first AR special effect may be multiple or one, and the first AR special effect is displayed on an AR interaction interface of the AR device.
Specifically, as shown in fig. 4, the stereo human figure 1 represents a three-dimensional stereo object, the card 2 represents a two-dimensional plane image, and the semi-transparent rectangle 3 represents a first AR special effect displayed after triggering.
In the embodiment of the present disclosure, after a real scene image including a plurality of objects is acquired, type detection may be performed on the plurality of objects, and whether the plurality of objects satisfy the trigger condition of an AR special effect is determined according to the type detection result. In the case that the plurality of objects include a three-dimensional stereo object and a two-dimensional plane object which are determined according to the type detection result and belong to a preset object library, an AR special effect (namely a first AR special effect) matched with the detected three-dimensional stereo object and two-dimensional plane object is determined, and the first AR special effect is displayed on the AR interaction interface of the AR device. By detecting the preset three-dimensional stereo object and the preset two-dimensional plane object among the plurality of objects of the real scene image, and determining and displaying the matched first AR special effect on the AR interaction interface according to the combination relationship between the two, the trigger conditions of AR special effects can be enriched, AR interaction modes added, and the interaction effects of AR cultural and creative products expanded, thereby meeting users' diversified requirements.
It should be noted that, in the embodiment of the present disclosure, the application scenarios of steps S101 to S105 described above may be a game scenario, a social scenario, a cultural scenario and an educational scenario. Different application scenes correspond to different preset object libraries, and different preset object libraries comprise different objects.
The above steps S101 to S105 may be applied to a client, where the client may be installed on a computer device supporting the AR function, and the present disclosure does not specifically limit the type of the computer device, so as to be implemented.
In an alternative embodiment, when the user installs the client on the computer device, the application scenario of the client may be set, for example, as shown in fig. 2, a plurality of application scenarios may be preset, that is: game scenes, social scenes, cultural scenes, educational scenes. The user can determine the application scenario of the client by clicking the corresponding button.
In another alternative embodiment, the user may not set the application scenario of the client when installing the client on the computer device. After entering the client, a corresponding button in the main page of the client can be clicked to select the corresponding application scene. For example, as shown in fig. 3, a click game scene may be selected, and at this time, corresponding game scenes 1 to N may be displayed on the right side of the client main page. For example, after the user clicks the game scene 1, the user may acquire the image of the real scene and enter the display page of the image of the real scene.
After setting an application scene according to the above-described manner, first acquiring a real scene image containing a plurality of objects acquired by an Augmented Reality (AR) device; and detecting a plurality of objects in the image of the real scene to detect a target object among the plurality of objects.
In an optional embodiment, step S103, detecting a target object in the plurality of objects, includes the following steps:
a step S1031, in a case that it is detected that at least one initial two-dimensional plane object and at least one initial three-dimensional stereo object are included in the plurality of objects, of determining a preset two-dimensional plane object belonging to a preset object library among the at least one initial two-dimensional plane object, and of determining a preset three-dimensional stereo object belonging to the preset object library among the at least one initial three-dimensional stereo object;
step S1032, determining the determined preset two-dimensional planar object and the determined preset three-dimensional stereoscopic object as the target object.
In the embodiment of the present disclosure, first, a three-dimensional stereoscopic object may be detected among the plurality of objects by a three-dimensional model detection algorithm, and a two-dimensional planar object may be detected among the plurality of objects by a two-dimensional plane detection algorithm. Suppose that at least one two-dimensional planar object (i.e., the above-described at least one initial two-dimensional planar object) and at least one three-dimensional stereoscopic object (i.e., the above-described at least one initial three-dimensional stereoscopic object) are detected among the plurality of objects. At this time, whether the real scene image satisfies the AR trigger condition may be determined according to the at least one initial two-dimensional planar object and the at least one initial three-dimensional stereoscopic object. Specifically, it may be determined whether a preset two-dimensional planar object belonging to the preset object library is included in the at least one initial two-dimensional planar object, and whether a preset three-dimensional stereoscopic object belonging to the preset object library is included in the at least one initial three-dimensional stereoscopic object.
If it is determined that both the preset two-dimensional planar object and the preset three-dimensional stereoscopic object are contained, it is determined that the real scene image satisfies the AR trigger condition; at this time, the determined preset two-dimensional planar object and preset three-dimensional stereoscopic object may be determined as the target object.
In the embodiment of the present disclosure, if it is detected that the plurality of objects do not include at least one initial two-dimensional planar object and at least one initial three-dimensional stereoscopic object at the same time, it is determined that the real scene image is an invalid image and does not satisfy the AR trigger condition. At this time, prompt information can be displayed on the display interface of the AR device, and the real scene image is prompted to be an invalid image through the prompt information, thereby prompting the user to reacquire the real scene image.
In the embodiment of the present disclosure, by the above processing method, the execution of the subsequent steps can be stopped in time when the plurality of objects do not include the two-dimensional planar object and the three-dimensional stereoscopic object at the same time, thereby saving unnecessary memory consumption and speeding up data display.
In the embodiment of the present disclosure, when a preset three-dimensional stereoscopic object belonging to the preset object library is determined among the at least one initial three-dimensional stereoscopic object, first object feature information of each initial three-dimensional stereoscopic object may first be determined, where the first object feature information includes: the object feature points of the initial three-dimensional stereoscopic object and the feature description information of the object feature points.
For example, for each initial three-dimensional stereo object, an image region including the initial three-dimensional stereo object may be determined in an image of a real scene, and further, an image located in the image region is processed through a preset target network model, and after the processing, at least one object feature point of the initial three-dimensional stereo object is obtained, and a descriptor (that is, feature description information) of each object feature point is determined.
And then, determining preset object characteristic information matched with the first object characteristic information in the preset object characteristic information of each preset three-dimensional stereo object contained in the preset object library.
After the first object feature information is obtained, the first object feature information may be input into a Bag of words model (Bag of words) for retrieval, so as to obtain a retrieval result, where the retrieval result is used to represent whether preset object feature information matching the first object feature information is included in a preset object library.
And under the condition that the preset object characteristic information matched with the first object characteristic information is determined to be contained in the preset object library according to the search result, determining the object corresponding to the matched preset object characteristic information as a preset three-dimensional stereo object.
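The retrieval step above can be sketched in simplified form. This is a hypothetical stand-in for the bag-of-words index, not the disclosed implementation: binary descriptors are packed as integers and matched brute-force by Hamming distance, with a vote count playing the role of the retrieval result; `match_object` and all descriptor values and thresholds are invented for the example.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match_object(query_descriptors, object_library, max_distance=10, min_votes=2):
    """Return the library object whose descriptors best match the query,
    or None when no object collects enough sufficiently close matches."""
    votes = {}
    for q in query_descriptors:
        best_id, best_d = None, max_distance + 1
        for obj_id, descriptors in object_library.items():
            for d in descriptors:
                dist = hamming(q, d)
                if dist < best_d:
                    best_id, best_d = obj_id, dist
        if best_id is not None:
            votes[best_id] = votes.get(best_id, 0) + 1
    if not votes:
        return None
    obj_id, count = max(votes.items(), key=lambda kv: kv[1])
    return obj_id if count >= min_votes else None

# Invented descriptor values for illustration only.
library = {"cup": [0b10110010, 0b01100110], "ring": [0b11111111, 0b00000001]}
query = [0b10110011, 0b01100111, 0b10110010]
print(match_object(query, library))  # prints: cup
```

When no library descriptor is close enough, the function returns `None`, which corresponds to the "not contained in the preset object library" branch of the search result.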
As can be seen from the above description, in the embodiment of the present disclosure, the manner of determining the preset three-dimensional stereoscopic object among the at least one initial three-dimensional stereoscopic object, by comparing the object feature information of each initial three-dimensional stereoscopic object with the preset object feature information in the preset object library, can accurately and quickly identify the objects belonging to the preset object library. This processing manner can reduce the amount of data operations and speed up data processing while satisfying the data processing requirements.
In the embodiment of the present disclosure, when a preset two-dimensional planar object belonging to the preset object library is determined among the at least one initial two-dimensional planar object, second object feature information of each initial two-dimensional planar object may first be extracted, where the second object feature information includes: the object feature points of the initial two-dimensional planar object and the feature description information of the object feature points.
Specifically, each initial two-dimensional plane object can be processed through a feature pyramid network to obtain a multi-scale image; then, the object feature points of the image of each scale are extracted, and feature description information of the object feature points is determined. At this time, the object feature point corresponding to the image of each scale and the feature description information thereof may be determined as the second object feature information.
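A minimal sketch of this per-scale extraction, assuming a pyramid built by repeated 2× downsampling. `detect_features` is a toy stand-in for the real FAST/ORB detector (it merely thresholds pixel intensity), and all names are invented for the illustration.

```python
def downsample(img):
    """Halve the resolution by keeping every second row and column."""
    return [row[::2] for row in img[::2]]

def detect_features(img, threshold=200):
    """Toy detector: report coordinates of bright pixels."""
    return [(r, c) for r, row in enumerate(img)
            for c, v in enumerate(row) if v >= threshold]

def pyramid_features(img, levels=3):
    """Collect (level, keypoints) pairs across the scale pyramid, i.e. the
    per-scale object feature points described in the text."""
    result = []
    for level in range(levels):
        result.append((level, detect_features(img)))
        img = downsample(img)
        if not img or not img[0]:
            break
    return result
```

In a real system each keypoint would additionally carry its descriptor; here only the per-scale keypoint lists are kept to show the structure of the multi-scale output.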
In the embodiment of the present disclosure, the object feature points may be ORB feature points, that is, the object feature points are detected using the FAST (Features from Accelerated Segment Test) algorithm; in this case, the feature description information of an ORB feature point may be a corresponding binary descriptor (for example, a BRIEF descriptor).
After the second object feature information of each initial two-dimensional plane object is extracted, at least one candidate two-dimensional plane object can be determined in a preset object library according to the second object feature information.
Specifically, a feature distance between feature description information (e.g., a descriptor) in the second object feature information and feature description information (e.g., a descriptor) in preset object feature information contained in a preset object library may be calculated. And under the condition that the characteristic distance is smaller than or equal to a preset threshold value, determining that the object corresponding to the preset object characteristic information is a candidate two-dimensional plane object.
After the at least one candidate two-dimensional plane object is determined, a preset two-dimensional plane object included in the target object can be determined in the at least one initial two-dimensional plane object according to the similarity between each initial two-dimensional plane object and the corresponding at least one candidate two-dimensional plane object.
Specifically, according to three-dimensional vision theory, an image can be regarded as a plane, and the homography transformation of a plane between images can be described by a homography matrix H. The homography transformation describes the positional mapping relationship of an object between the world coordinate system and the pixel coordinate system.
In the embodiment of the present disclosure, in order to determine the preset two-dimensional planar object among the candidate two-dimensional planar objects, the RANSAC (Random Sample Consensus) algorithm may be used to compute a homography matrix H between the second object feature information and the object feature information of each candidate two-dimensional planar object. Finally, matching is performed based on the H matrix by a template matching method, so that the similarity between the candidate two-dimensional planar object and the initial two-dimensional planar object is judged according to the matching result. One way to calculate the similarity is through the sum of the squared differences of the pixel values between the two images.
For any one initial two-dimensional planar object, a similarity value between the initial two-dimensional planar object and each of its corresponding candidate two-dimensional planar objects may be calculated. If a candidate two-dimensional planar object whose similarity value is greater than or equal to a preset similarity threshold exists, it is determined that the initial two-dimensional planar object is a preset two-dimensional planar object in the preset object library.
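The sum-of-squared-differences similarity mentioned above can be sketched as follows. The normalization to [0, 1] is an assumption added for the illustration; the text only specifies that similarity is computed from the sum of squared per-pixel differences between the two images.

```python
def ssd(img_a, img_b):
    """Sum of squared per-pixel differences between two equal-size images."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(img_a, img_b)
               for a, b in zip(row_a, row_b))

def similarity(img_a, img_b, max_value=255.0):
    """Map the SSD onto [0, 1] so that identical images score 1.0; the
    normalization is an invented convention, not from the disclosure."""
    n_pixels = len(img_a) * len(img_a[0])
    return 1.0 - ssd(img_a, img_b) / (n_pixels * max_value ** 2)

warped = [[10, 20], [30, 40]]    # candidate object region after the H warp
template = [[10, 20], [30, 41]]  # region of the initial planar object
print(similarity(warped, warped))  # prints: 1.0
```

The resulting score would be compared against the preset similarity threshold to decide whether the initial planar object belongs to the preset object library.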
In the embodiment of the present disclosure, the candidate two-dimensional planar objects are first determined by feature comparison, and then whether an initial two-dimensional planar object is a preset two-dimensional planar object is determined by similarity calculation. In this way, whether an initial two-dimensional planar object belongs to the preset object library can be determined more accurately, improving the robustness of the data display method.
In the embodiment of the present disclosure, before detecting the target object among the plurality of objects, the corresponding preset object feature information further needs to be entered into the preset object library, for example, the preset object feature information of the preset three-dimensional stereoscopic object and the preset object feature information of the preset two-dimensional planar object.
1. The process of entering the preset object feature information of the preset two-dimensional planar object is described as follows:
The two-dimensional planar object to be entered is referred to as a registration image T. The registration image T may be processed through the feature pyramid network to obtain a multi-scale image. Then, the object feature points of the image at each scale are extracted, and the feature description information of the object feature points is determined. At this time, the object feature points and the feature description information corresponding to the image at each scale can be determined as the preset object feature information of the registration image and stored in the preset object library.
As described above, the object feature points may be ORB feature points, that is, the object feature points are detected using the FAST (Features from Accelerated Segment Test) algorithm; the feature description information of an ORB feature point may be a corresponding binary descriptor (for example, a BRIEF descriptor).
2. The process of entering the preset object feature information of the preset three-dimensional stereoscopic object is described as follows:
(1) Acquiring images of the preset three-dimensional object at a plurality of preset acquisition points according to a preset direction through a camera device of the AR equipment to obtain a plurality of target images;
(2) And extracting image characteristic information of each target image, wherein the image characteristic information comprises: feature points of the target image and feature description information of the feature points;
(3) And storing the image characteristic information of each target image in the preset object library as the object characteristic information of the preset three-dimensional stereo object.
The three-dimensional stereoscopic object to be entered is referred to as a registered three-dimensional model P. First, a points are sampled at equal intervals on a circle in a horizontal plane of the registered three-dimensional model P, where the horizontal plane is the plane of the rectangular coordinate system XY constructed with the center of the registered three-dimensional model P as the origin. The lines connecting the determined a sampling points with the center of the registered three-dimensional model P form a rays. For each ray, b points are sampled equidistantly within 50 cm of the center of the registered three-dimensional model P. The determined sampling points are then the preset acquisition points, and the preset orientation is that the lens of the camera device points toward the center of the registered three-dimensional model P. The registered three-dimensional model P may then be captured at the plurality of preset acquisition points according to the preset orientation to obtain a plurality of target images.
Next, image feature information of each target image can be extracted, that is: and the characteristic points and the characteristic description information of the target image. Finally, the image feature information can be stored in a preset object library.
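The sampling layout described above can be sketched as follows, under stated assumptions: `a` points are spaced equally on a circle around the model centre, each defining a ray along which `b` capture positions are sampled at equal distances within `max_radius` (50 cm in the text). The function name and the planar (x, y) representation are invented for the illustration; the camera at each point is assumed to face the origin, per the preset orientation.

```python
import math

def capture_points(a, b, max_radius=0.5):
    """Return the a*b (x, y) capture positions in the model's horizontal
    plane; at each position the camera is assumed to face the origin."""
    points = []
    for i in range(a):
        angle = 2 * math.pi * i / a       # a equally spaced rays
        for j in range(1, b + 1):
            r = max_radius * j / b        # b equidistant samples per ray
            points.append((r * math.cos(angle), r * math.sin(angle)))
    return points
```

Each returned position corresponds to one target image of the registered model; capturing from all a*b points is what makes the entered feature information cover the object from many viewpoints.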
As can be seen from the above description, in the embodiment of the present disclosure, the three-dimensional stereoscopic object to be entered is captured at the plurality of preset acquisition points according to the preset orientation, and the preset object feature information of the three-dimensional stereoscopic object is determined from the plurality of captured target images and entered. In this way, the feature information of any three-dimensional stereoscopic object can be entered more comprehensively, improving the recognition accuracy of three-dimensional stereoscopic objects among the plurality of objects.
It should be noted that the corresponding preset object feature information entered in the preset object library may be object feature information of a default object preset by the client, or may be object feature information of a designated object entered by the user through the client in addition to the default object.
In the embodiment of the present disclosure, after the target object is detected in the above-described manner, the determining a first AR special effect matched with the detected target object may specifically include:
step S1051, determining object identification information of the target object;
step S1052, searching an AR special effect having an association relation with the object identification information in a target association file; the target association file is used for representing the corresponding relation between a preset object combination and the AR special effect;
step S1053, determining the AR special effect having the association relationship as the first matching AR special effect.
In the embodiment of the present disclosure, a target association file is preset, and the target association file includes a corresponding relationship between an object combination and an AR special effect, where the object combination may be the following combination: a combination of a preset three-dimensional solid object and a preset two-dimensional plane object. For each combination, special effect identification information of the corresponding AR special effect is preset.
After the target objects are determined, the object identification information of each target object can be determined, and further, the AR special effect having the association relation with the object identification information is searched in the target association file. And if the AR special effect with the association relation is found, determining the AR special effect as a first AR special effect. And if the AR special effect with the association relation is not found, displaying prompt information of failure finding on the AR interactive interface.
In this embodiment of the present disclosure, finding the AR special effect having an association relationship with the object identification information in the target association file may be described as the following process:
the first method is as follows:
determining object identification information of a preset three-dimensional object in a target object based on the object identification information to obtain first object identification information; determining object identification information of a preset two-dimensional plane object in the target object based on the object identification information to obtain second object identification information; arranging and combining the first object identification information and the second object identification information to obtain a plurality of object combinations; and searching the AR special effect matched with one or more object combinations in the target association file to serve as a first AR special effect.
Specifically, among the object identification information, the object identification information of the two-dimensional planar object may be denoted as M1 (i.e., the second object identification information described above), and the object identification information of the three-dimensional stereoscopic object may be denoted as M2 (i.e., the first object identification information described above). The identifiers M1 and M2 are arranged and combined to obtain a plurality of object combinations, and an AR special effect matching one or more of the object combinations is searched for in the target association file as the first AR special effect.
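The first method above can be sketched as a dictionary lookup. This is a hedged illustration, not the disclosed file format: the target association file is modelled as a dict keyed by (3D identifier, 2D identifier) pairs, and all identifiers and effect names are invented.

```python
from itertools import product

def find_effects(ids_3d, ids_2d, association_file):
    """Look up each (3D id, 2D id) pairing in the association file and
    collect the matching AR special effects."""
    effects = []
    for combo in product(ids_3d, ids_2d):
        if combo in association_file:
            effects.append(association_file[combo])
    return effects

# Invented identifiers and effect names for illustration only.
assoc = {("ring", "wedding_photo"): "ar_video_1",
         ("cup", "postcard"): "ar_anim_2"}
print(find_effects(["ring", "cup"], ["wedding_photo"], assoc))  # prints: ['ar_video_1']
```

An empty result corresponds to the "search failed" branch, where a failure prompt is displayed on the AR interactive interface.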
The second method is as follows:
for any object identification information, firstly, an alternative object combination containing the object identification information is determined in the target association file, and then, whether the alternative object combination contains a target object combination is determined, wherein the target object combination does not contain other object identification information except the object identification information. And if the target object combination is determined to be contained, determining the AR special effect corresponding to the target object combination as the first AR special effect.
As can be seen from the above description, in the embodiment of the present disclosure, different AR special effects can be triggered for different target objects through the above processing manner, so that the trigger conditions of AR special effects are enriched, the AR interaction manners are increased, and the application scenarios of AR cultural and creative products are expanded, thereby satisfying the diversified requirements of users.
In the embodiment of the present disclosure, after the matched first AR special effect is displayed on the AR interactive interface of the AR device, in a case that a change of the plurality of objects is detected, the changed target object may be determined, where the change includes at least one of: addition, deletion, and replacement.
For example, a three-dimensional solid object in the target object is kept unchanged, and a two-dimensional plane object in the target object is changed. For another example, a three-dimensional solid object in the target object is changed while a two-dimensional planar object in the target object is kept unchanged. For another example, a three-dimensional solid object and a two-dimensional plane object in the target object are simultaneously changed.
In a case that it is determined, according to the changed target object, that the real scene image satisfies the display condition of the AR special effect, a second AR special effect matching the updated plurality of objects is determined, and the matched second AR special effect is displayed on the AR interactive interface of the AR device.
In one possible implementation, it may be determined whether the target object after the change includes a three-dimensional stereoscopic object and a two-dimensional planar object at the same time, and if so, it is determined that the image of the real scene satisfies a display condition of the AR special effect.
Specifically, the object identification information of the unchanged target object and the object identification information of the changed target object may be determined respectively, the AR special effect matched with the object identification information may be searched for in the target association file, and the searched AR special effect may be determined as the second AR special effect.
As can be seen from the above description, in the embodiment of the present disclosure, through the above processing manner, two-way interaction between the AR device and the user can be realized, thereby enriching the special effect display manners of the AR device and improving the interactive experience of AR special effects.
In an alternative embodiment, scene type information of the real scene image may also be obtained. In particular, the scene type information may be acquired prior to acquiring the image of the real scene.
If the first AR special effect includes multiple AR special effects, when the first AR special effect matching the detected target object is determined, a target AR special effect matching the scene type information may be determined among the multiple AR special effects, and the target AR special effect may be displayed on the AR interactive interface.
Specifically, when the user sets the application scenario in the manner shown in fig. 2 and fig. 3, the scene type information of the real scene image, for example, a game scenario, may be determined. At this time, the AR special effect corresponding to the game scenario may be determined as the target AR special effect among the plurality of AR special effects, and the target AR special effect may be displayed on the AR interactive interface.
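The scene-type filtering described above can be sketched minimally. This is a hypothetical illustration: the `scene` tag, the effect records, and `pick_target_effect` are all invented names, standing in for however the disclosure associates AR special effects with scene type information.

```python
def pick_target_effect(effects, scene_type):
    """Return the first candidate effect tagged with the current scene
    type, or None when no candidate matches."""
    for effect in effects:
        if effect.get("scene") == scene_type:
            return effect
    return None

# Invented records: `scene` tags mirror the application scenarios of fig. 2.
candidates = [{"id": "fx_edu", "scene": "education"},
              {"id": "fx_game", "scene": "game"}]
```

For a user in the game scenario, only the effect tagged `"game"` would be displayed on the AR interactive interface.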
In the embodiment of the present disclosure, by the above processing manner, on the premise of providing multiple application scenarios for the user, an AR special effect matching the corresponding application scenario can be provided for display; compared with the single AR special effect display manner of existing AR devices, this satisfies the diversified requirements of users.
The above process will be described with reference to specific application scenarios.
1. Information confidentiality.
User a sends an AR postcard to user B. At this point, user a may upload his own recorded video or speech into the postcard and post the postcard to user B. After the user B obtains the postcard, a real scene image containing the postcard and the user B can be collected through computer equipment, wherein the postcard and the user B are multiple objects, the postcard is a two-dimensional plane object, and the user B is a three-dimensional stereo object.
According to the above process, the plurality of objects in the real scene image are analyzed and it is determined that the real scene image satisfies the AR special effect trigger condition; at this time, the content in the postcard is displayed on the AR interactive interface in the form of an AR special effect, thereby ensuring that the user's private information is not leaked.
2. Interactive puzzle solving.
The user presets corresponding three-dimensional stereoscopic objects and two-dimensional planar objects, and sets a riddle for each of various combinations of the three-dimensional stereoscopic objects and the two-dimensional planar objects.
The user can find a combination which can be matched with each other by trying a plurality of combinations of three-dimensional solid objects and two-dimensional plane objects to solve the corresponding puzzle.
3. Couple interaction.
The user presets a three-dimensional object as a ring and a two-dimensional plane object as a wedding photo.
When the plurality of objects contained in the real scene image are the ring and the wedding photo, it is determined that the real scene image satisfies the AR trigger condition. At this time, the AR special effect corresponding to the ring and the wedding photo, for example, a corresponding AR video, is determined and displayed on the AR interactive interface.
This processing manner, on the one hand, enhances the sense of ceremony when the couple sees the AR images, and on the other hand, keeps the users' private information confidential.
4. Multi-person social interaction.
In a multi-person social scenario, a boy holds a three-dimensional stereoscopic object and a girl holds a two-dimensional planar object. At this time, a real scene image including the stereoscopic object and the planar object may be acquired by the augmented reality (AR) device.
After the real scene image is processed in the manner described above, if it is determined that the real scene image meets the AR trigger condition, the corresponding AR special effect is triggered and displayed. If the AR triggering condition is not met, corresponding prompt information can be generated to prompt the user to replace the three-dimensional object and/or the planar object until the AR triggering condition is met.
Through the above processing manner, not only can interaction between the user and the AR device be realized, but interaction among users can also be increased, thereby adding fun to the game for users.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or constitute any limitation on the implementation process; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, a data display device corresponding to the data display method is also provided in the embodiments of the present disclosure, and as the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the data display method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 5, a schematic diagram of a data display apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes: the device comprises an acquisition unit 51, a detection unit 52, a determination unit 53 and a display unit 54; wherein,
an acquiring unit 51, configured to acquire a real scene image including a plurality of objects acquired by an augmented reality AR device;
a detecting unit 52, configured to detect a target object among the plurality of objects, where the target object includes a preset three-dimensional stereo object and a preset two-dimensional plane object belonging to a preset object library;
a determining unit 53, configured to determine a first AR special effect matching the detected target object;
and the display unit 54 is configured to display the matched first AR special effect on an AR interactive interface of the AR device.
In the embodiment of the present disclosure, by detecting the preset three-dimensional stereoscopic object and the preset two-dimensional planar object among the plurality of objects in the real scene image, and determining and displaying the matched first AR special effect on the AR interactive interface according to the combination relationship between the two, the trigger conditions of AR special effects can be enriched, the AR interaction manners can be increased, and the application scenarios of AR cultural and creative products can be expanded, thereby satisfying diversified user requirements.
In one possible embodiment, the detection unit 52 is configured to: in the case that it is detected that at least one initial two-dimensional planar object and at least one initial three-dimensional stereoscopic object are included in the plurality of objects, determining a preset two-dimensional planar object belonging to a preset object library in the at least one initial two-dimensional planar object, and determining a preset three-dimensional stereoscopic object belonging to the preset object library in the at least one initial three-dimensional stereoscopic object; and determining the determined preset two-dimensional plane object and the preset three-dimensional stereo object as the target object.
In a possible embodiment, the detecting unit 52 is further configured to: determining first object feature information of each of the initial three-dimensional stereo objects, wherein the first object feature information comprises: the object feature points of the initial three-dimensional stereo object and the feature description information of the object feature points; determining preset object characteristic information matched with the first object characteristic information in preset object characteristic information of each preset three-dimensional stereo object contained in the preset object library; and determining the object corresponding to the matched preset object characteristic information as the preset three-dimensional object.
In a possible embodiment, the apparatus is further configured to: before a target object is detected in the plurality of objects, acquiring images of the preset three-dimensional object at a plurality of preset acquisition points according to a preset orientation through a camera device of the AR equipment to obtain a plurality of target images; extracting image feature information of each target image, wherein the image feature information comprises: feature points of the target image and feature description information of the feature points; and storing the image characteristic information of each target image as the object characteristic information of the preset three-dimensional stereo object in the preset object library.
In a possible embodiment, the detecting unit 52 is further configured to: extracting second object feature information of each initial two-dimensional plane object, wherein the second object feature information comprises: the object characteristic points of the initial two-dimensional plane object and the characteristic description information of the object characteristic points; determining at least one alternative two-dimensional plane object in the preset object library according to the second object characteristic information; and determining the preset two-dimensional plane object in the at least one initial two-dimensional plane object according to the similarity between each initial two-dimensional plane object and the corresponding at least one candidate two-dimensional plane object.
In a possible implementation, the determining unit 53 is further configured to: determine object identification information of the target object; search a target association file for an AR special effect having an association relation with the object identification information, wherein the target association file is used for representing the correspondence between a preset object combination and an AR special effect; and determine the AR special effect having the association relation as the matched first AR special effect.
In a possible implementation, the determining unit 53 is further configured to: determine, based on the object identification information, object identification information of the preset three-dimensional stereo object in the target object to obtain first object identification information; determine, based on the object identification information, object identification information of the preset two-dimensional plane object in the target object to obtain second object identification information; arrange and combine the first object identification information and the second object identification information to obtain a plurality of object combinations; and search the target association file for an AR special effect matched with one or more of the object combinations as the first AR special effect.
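The combination-and-lookup step above can be sketched as follows: every non-empty combination of the detected object ids is enumerated and looked up in an association file mapping object combinations to AR special effects. The file contents and effect names here are invented for illustration.

```python
from itertools import combinations

# Hypothetical target association file: preset object combination -> AR effect.
association_file = {
    frozenset({"statue"}): "spotlight_effect",
    frozenset({"statue", "poster"}): "story_effect",
}

def find_effects(three_d_ids, two_d_ids, association_file):
    """Enumerate combinations of detected 3D and 2D ids and collect matched effects."""
    all_ids = list(three_d_ids) + list(two_d_ids)
    effects = []
    for size in range(1, len(all_ids) + 1):
        for combo in combinations(all_ids, size):
            effect = association_file.get(frozenset(combo))
            if effect is not None:
                effects.append(effect)
    return effects

matched = find_effects(["statue"], ["poster"], association_file)
```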
In a possible implementation, the determining unit 53 is further configured to: determine, in the target association file, candidate object combinations containing the object identification information; determine whether a target object combination is contained in the candidate object combinations, wherein the target object combination contains no object corresponding to object identification information other than the determined object identification information; and if the target object combination is contained, determine the AR special effect corresponding to the target object combination as the first AR special effect.
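A sketch of the filtering rule above: a target object combination is a candidate that involves the detected ids but contains no id beyond them, i.e. a subset of the detected id set. All names here are invented.

```python
def pick_target_combinations(detected_ids, candidate_combinations):
    """Keep combinations that involve at least one detected id and nothing else."""
    detected = set(detected_ids)
    return [combo for combo in candidate_combinations
            if combo & detected and combo <= detected]

candidates = [
    frozenset({"statue"}),
    frozenset({"statue", "poster"}),
    frozenset({"statue", "vase"}),   # "vase" was not detected, so this is excluded
]
targets = pick_target_combinations({"statue", "poster"}, candidates)
```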
In a possible embodiment, the apparatus is further configured to: in the case that a change in the plurality of objects is detected, determine the changed target object, wherein the change comprises at least one of: addition, deletion and replacement of an object; determine, according to the changed target object, a second AR special effect matched with the updated plurality of objects under the condition that the real scene image still meets the display condition of an AR special effect; and display the matched second AR special effect on the AR interactive interface of the AR equipment.
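The change handling above can be illustrated by diffing the previous and current detection sets to obtain added, deleted, and (heuristically) replaced objects, after which the AR effect is re-matched against the updated set. The pairing rule for replacements is an assumption for illustration.

```python
def diff_objects(previous, current):
    """Compare two detection sets and classify the change."""
    added = sorted(current - previous)
    deleted = sorted(previous - current)
    # Heuristic: pair one deletion with one addition as a "replacement";
    # a real system might match by position or category instead.
    replaced = list(zip(deleted, added))
    return {"added": added, "deleted": deleted, "replaced": replaced}

change = diff_objects({"statue", "poster"}, {"statue", "painting"})
```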
In a possible embodiment, the apparatus is further configured to: acquire scene type information of the real scene image; and the determining unit 53 is further configured to: in the case that the first AR special effect comprises a plurality of AR special effects, determine a target AR special effect matched with the scene type information among the plurality of AR special effects, and display the target AR special effect on the AR interactive interface.
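A minimal sketch of this selection, assuming each candidate AR effect carries a scene-type tag (the tag scheme and fallback behaviour are invented, not taken from the patent): when several effects match the target object, the one whose tag matches the scene type of the real scene image is displayed.

```python
def select_effect(candidate_effects, scene_type):
    """Pick the candidate whose scene-type tag matches; fall back to the first candidate."""
    for effect in candidate_effects:
        if effect["scene_type"] == scene_type:
            return effect
    return candidate_effects[0] if candidate_effects else None

effects = [
    {"name": "fireworks", "scene_type": "outdoor"},
    {"name": "lantern", "scene_type": "indoor"},
]
chosen = select_effect(effects, "indoor")
```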
For the processing flow of each module in the apparatus and the interaction flow between the modules, reference may be made to the related description in the above method embodiments; details are not repeated here.
Corresponding to the data display method in Fig. 1, an embodiment of the present disclosure further provides a computer device 600. Fig. 6 is a schematic structural diagram of the computer device 600 provided in an embodiment of the present disclosure, which includes:
a processor 61, a memory 62, and a bus 63. The memory 62 is used for storing execution instructions and includes an internal memory 621 and an external memory 622. The internal memory 621 temporarily stores operation data in the processor 61 and data exchanged with the external memory 622, such as a hard disk; the processor 61 exchanges data with the external memory 622 through the internal memory 621. When the computer device 600 runs, the processor 61 communicates with the memory 62 through the bus 63, so that the processor 61 executes the following instructions:
acquiring a real scene image which is acquired by an augmented reality AR device and contains a plurality of objects; detecting a target object in the plurality of objects, wherein the target object comprises a preset three-dimensional stereo object and a preset two-dimensional plane object which belong to a preset object library; determining a first AR special effect matched with the detected target object; and displaying the matched first AR special effect on an AR interactive interface of the AR equipment.
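The four instructions above (acquire, detect target objects, match an AR effect, display) can be sketched end to end as follows. All helpers, library contents, and effect names are stand-ins invented for illustration; the real device would use its camera, detector, and renderer.

```python
# Hypothetical preset object library and target association file.
PRESET_LIBRARY = {"statue": "3d", "poster": "2d"}
ASSOCIATION_FILE = {frozenset({"statue", "poster"}): "story_effect"}

def detect_targets(scene_objects):
    """Keep only objects found in the preset library (3D and 2D alike)."""
    return {o for o in scene_objects if o in PRESET_LIBRARY}

def match_effect(targets):
    """Look up the detected combination in the association file (None if absent)."""
    return ASSOCIATION_FILE.get(frozenset(targets))

def display(effect):
    """Stand-in for rendering on the AR interactive interface."""
    return f"showing {effect} on AR interface"

scene = ["statue", "poster", "chair"]   # "chair" is not in the preset library
shown = display(match_effect(detect_targets(scene)))
```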
The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the data presentation method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product carrying program code; the instructions included in the program code may be used to execute the steps of the data display method in the foregoing method embodiments, to which reference may be made for details not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only one logical division, and there may be other divisions in actual implementation; likewise, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-transitory computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the scope of the present disclosure is not limited thereto. Although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features within the technical scope of the present disclosure; such modifications, changes, or replacements do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered thereby. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A method for displaying data, comprising:
acquiring a real scene image which is acquired by an augmented reality AR device and contains a plurality of objects;
detecting a target object in the plurality of objects under the condition that the plurality of objects simultaneously comprise a three-dimensional stereo object and a two-dimensional plane object, wherein the target object comprises a preset three-dimensional stereo object and a preset two-dimensional plane object which belong to a preset object library;
determining a first AR special effect matched with the detected target object; displaying the matched first AR special effect on an AR interaction interface of the AR equipment;
wherein the determining a first AR special effect that matches the detected target object comprises:
determining object identification information of a preset three-dimensional stereo object in the target object to obtain first object identification information; determining object identification information of a preset two-dimensional plane object in the target object to obtain second object identification information;
arranging and combining the first object identification information and the second object identification information to obtain a plurality of object combinations;
searching, in a target association file, an AR special effect matched with one or more of the object combinations as the first AR special effect, wherein the target association file is used for representing the correspondence between a preset object combination and an AR special effect;
and determining the AR special effect having the association relation as the matched first AR special effect.
2. The method of claim 1, wherein the detecting a target object among the plurality of objects comprises:
in the case that it is detected that at least one initial two-dimensional planar object and at least one initial three-dimensional stereoscopic object are included in the plurality of objects, determining a preset two-dimensional planar object belonging to a preset object library in the at least one initial two-dimensional planar object, and determining a preset three-dimensional stereoscopic object belonging to the preset object library in the at least one initial three-dimensional stereoscopic object;
and determining the determined preset two-dimensional plane object and the preset three-dimensional stereo object as the target object.
3. The method according to claim 2, wherein the determining of the preset three-dimensional stereo object belonging to the preset object library in the at least one initial three-dimensional stereo object comprises:
determining first object feature information of each of the initial three-dimensional stereo objects, wherein the first object feature information includes: the object feature points of the initial three-dimensional stereo object and the feature description information of the object feature points;
determining, in the preset object feature information of each preset three-dimensional stereo object contained in the preset object library, preset object feature information matched with the first object feature information;
and determining the object corresponding to the matched preset object feature information as the preset three-dimensional stereo object.
4. The method according to any one of claims 1 to 3, further comprising:
acquiring images of the preset three-dimensional object at a plurality of preset acquisition points according to a preset orientation through a camera device of the AR equipment to obtain a plurality of target images;
extracting image feature information of each target image, wherein the image feature information comprises: feature points of the target image and feature description information of the feature points;
and storing the image feature information of each target image in the preset object library as the object feature information of the preset three-dimensional stereo object.
5. The method according to claim 2, wherein the determining of the preset two-dimensional planar object belonging to the preset object library in the at least one initial two-dimensional planar object comprises:
extracting second object feature information of each initial two-dimensional plane object, wherein the second object feature information comprises: the object feature points of the initial two-dimensional plane object and the feature description information of the object feature points;
determining at least one candidate two-dimensional plane object in the preset object library according to the second object feature information;
and determining the preset two-dimensional plane object in the at least one initial two-dimensional plane object according to the similarity between each initial two-dimensional plane object and the corresponding at least one candidate two-dimensional plane object.
6. The method according to claim 1, wherein finding the AR special effect having an association relation with the object identification information in a target association file further comprises:
determining, in the target association file, candidate object combinations containing the object identification information;
determining whether a target object combination is contained in the candidate object combinations, wherein the target object combination contains no object corresponding to object identification information other than the determined object identification information;
and if the target object combination is contained, determining the AR special effect corresponding to the target object combination as the first AR special effect.
7. The method of claim 1, further comprising:
in the case that a change in the plurality of objects is detected, determining the changed target object, wherein the change comprises at least one of: addition, deletion and replacement of an object;
determining, according to the changed target object, a second AR special effect matched with the updated plurality of objects under the condition that the real scene image still meets the display condition of an AR special effect; and displaying the matched second AR special effect on the AR interactive interface of the AR equipment.
8. The method of claim 1, wherein the first AR effect comprises a plurality of AR effects;
the method further comprises the following steps: acquiring scene type information of the real scene image;
the determining a first AR special effect that matches the detected target object includes: and determining a target AR special effect matched with the scene type information in the plurality of AR special effects, and displaying the target AR special effect on the AR interactive interface.
9. A data presentation device, comprising:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a real scene image which is acquired by an augmented reality AR device and contains a plurality of objects;
a detection unit, configured to detect a target object among the plurality of objects in a case where it is detected that a three-dimensional stereoscopic object and a two-dimensional planar object are simultaneously included in the plurality of objects, where the target object includes a preset three-dimensional stereoscopic object and a preset two-dimensional planar object that belong to a preset object library;
a determining unit configured to determine a first AR special effect matching the detected target object;
the display unit is used for displaying the matched first AR special effect on an AR interaction interface of the AR equipment; wherein the determining a first AR special effect matching the detected target object comprises: determining object identification information of the preset three-dimensional stereo object in the target object to obtain first object identification information; determining object identification information of the preset two-dimensional plane object in the target object to obtain second object identification information; arranging and combining the first object identification information and the second object identification information to obtain a plurality of object combinations; and searching, in a target association file, an AR special effect matched with one or more of the object combinations as the first AR special effect, wherein the target association file is used for representing the correspondence between a preset object combination and an AR special effect, and the AR special effect having the association relation is determined as the matched first AR special effect.
10. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions, when executed by the processor, performing the steps of the data presentation method according to any one of claims 1 to 8.
11. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the data presentation method according to any one of claims 1 to 8.
CN202110340475.5A 2021-03-30 2021-03-30 Data display method, device, equipment and storage medium Active CN112991555B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110340475.5A CN112991555B (en) 2021-03-30 2021-03-30 Data display method, device, equipment and storage medium
PCT/CN2021/102537 WO2022205634A1 (en) 2021-03-30 2021-06-25 Data display method and apparatus, and device, storage medium and program


Publications (2)

Publication Number Publication Date
CN112991555A CN112991555A (en) 2021-06-18
CN112991555B true CN112991555B (en) 2023-04-07

Family

ID=76338342







Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40045368

Country of ref document: HK

GR01 Patent grant