CN114332426A - Data display method, device, equipment and medium in augmented reality scene - Google Patents


Publication number
CN114332426A
Authority
CN
China
Prior art keywords
identifier
entity
special effect
information
conference
Prior art date
Legal status
Withdrawn
Application number
CN202111658810.2A
Other languages
Chinese (zh)
Inventor
李斌
欧华富
李颖楠
Current Assignee
Beijing Mianbaitang Intelligent Technology Co ltd
Original Assignee
Beijing Mianbaitang Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Mianbaitang Intelligent Technology Co ltd filed Critical Beijing Mianbaitang Intelligent Technology Co ltd
Priority to CN202111658810.2A priority Critical patent/CN114332426A/en
Publication of CN114332426A publication Critical patent/CN114332426A/en
Withdrawn legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a data display method, apparatus, device, and medium for an augmented reality scene, wherein the method comprises: acquiring a real scene image captured by an augmented reality (AR) device; recognizing an entity identifier in the real scene image, and determining an associated entity matching the entity identifier, wherein the entity identifier comprises a meeting room identifier and/or a work card identifier; and determining a first AR special effect containing entity detail information of the associated entity, and displaying the first AR special effect on a display interface of the AR device.

Description

Data display method, device, equipment and medium in augmented reality scene
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for displaying data in an augmented reality scene.
Background
In the prior art, office information is generally obtained from signs placed in an office scene. For example, employee information is obtained from an employee's work card, and the occupancy state of a conference room is indicated by a sign at the conference room door. However, this offers only a single channel for obtaining office information, and when the information changes, a sign that is not updated in time easily leaves the user without the latest office information. Moreover, the display form of such office information is monotonous and cannot meet users' diverse needs.
Disclosure of Invention
Embodiments of the present disclosure provide at least a data display method, apparatus, device, and medium for an augmented reality scene.
In a first aspect, an embodiment of the present disclosure provides a data display method in an augmented reality scene, where the method includes: acquiring a real scene image captured by an augmented reality (AR) device; recognizing an entity identifier in the real scene image, and determining an associated entity matching the entity identifier, where the entity identifier comprises a meeting room identifier and/or a work card identifier; and determining a first AR special effect containing entity detail information of the associated entity, and displaying the first AR special effect on a display interface of the AR device.
In the embodiments of the present disclosure, an entity identifier (for example, a meeting room identifier and/or a work card identifier) in a real scene image is recognized, and the associated entity matching that identifier is determined. By displaying on the AR device a first AR special effect containing the entity detail information of the associated entity, the detail information of the associated entity in the real scene can be obtained quickly and simply, which can improve staff working efficiency and, at the same time, make the real scene more engaging.
In an optional embodiment, the entity detail information includes: conference associated information of the meeting room to which the meeting room identifier belongs, and/or object information of the object corresponding to the work card identifier; the conference associated information includes: conference state information and/or object information of each participant in the meeting room.

In the above embodiment, displaying entity detail information with the above content on the display interface of the AR device presents more detailed information about the associated entity, thereby meeting various user needs, for example, conference queries and employee information queries, and broadening the applicable scenarios.
In an optional embodiment, the entity detail information further includes an interaction identifier; after the first AR special effect is displayed on the display interface of the AR device, the method further comprises: in response to a trigger operation on the interaction identifier in the first AR special effect, determining interaction information corresponding to the interaction identifier, and displaying the interaction information on the display interface of the AR device.

In the above embodiment, adding an interaction identifier to the entity detail information enables interactive operation on the first AR special effect, simplifies the user's operations for viewing information associated with the content displayed in the first AR special effect, and saves the user's time. It also makes the data display method more varied and engaging, further improving the user experience.
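The response flow above can be sketched as a small dispatch table that routes a trigger operation on an interaction identifier to the handler producing the interaction information to display. All function and key names here are illustrative assumptions, not from the disclosure:

```python
# Illustrative sketch (names are hypothetical): dispatching a trigger
# operation on an interaction identifier to the handler that builds the
# interaction information shown on the AR device's display interface.

def show_object_intro(identifier):
    # e.g. avatar identifier -> second AR special effect with intro info
    return {"effect": "second_ar_effect", "content": f"intro:{identifier}"}

def open_communication_page(identifier):
    # e.g. communication identifier -> jump to the associated page
    return {"effect": "page_jump", "content": f"chat:{identifier}"}

HANDLERS = {
    "avatar": show_object_intro,
    "communication": open_communication_page,
}

def on_trigger(kind, identifier):
    """Resolve a trigger operation on an interaction identifier."""
    handler = HANDLERS.get(kind)
    if handler is None:
        raise ValueError(f"unknown interaction identifier kind: {kind}")
    return handler(identifier)
```

A dispatch table like this keeps each identifier type (avatar, communication, location, live broadcast) in its own handler, mirroring the optional embodiments that follow.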
In an optional embodiment, the interaction identifier is an avatar identifier of a target object; the determining, in response to a trigger operation on the interaction identifier in the first AR special effect, interaction information corresponding to the interaction identifier, and displaying the interaction information on the display interface of the AR device includes: in response to a trigger operation on an avatar identifier in the first AR special effect, determining object introduction information of the target object corresponding to the target avatar identifier the user selected, where the target object comprises: the object to which the work card identifier belongs, or a participant in the meeting room to which the meeting room identifier belongs; and displaying a second AR special effect containing the object introduction information on the display interface of the AR device.

In the above embodiment, the interaction identifier is set as the avatar identifier of a target object; triggering the avatar identifier obtains the object introduction information of the corresponding target object, and a second AR special effect containing that information is displayed. The user can thus view a target object's introduction information through a simple trigger operation, which makes information acquisition simpler and faster and further improves its efficiency.
In an optional embodiment, the interaction identifier is a communication identifier of the target object; the determining, in response to a trigger operation on the interaction identifier in the first AR special effect, interaction information corresponding to the interaction identifier, and displaying the interaction information on the display interface of the AR device includes: in response to a trigger operation on a communication identifier in the first AR special effect, determining the identifier content of the communication identifier, and determining the communication page associated with the target communication identifier the user selected; and controlling the display interface of the AR device to jump from the first AR special effect to the communication page containing the identifier content.

In the above embodiment, the interaction identifier is set as the communication identifier of the target object, and triggering the target communication identifier jumps to its communication page, so the user can directly send a message to the target object from that page. This shortens the path for contacting the target object and saves the user's operation cost and time.
In an optional embodiment, the interaction identifier includes a location identifier of each participant in the conference room; the determining, in response to a trigger operation on the interaction identifier in the first AR special effect, interaction information corresponding to the interaction identifier, and displaying the interaction information on the display interface of the AR device includes: in response to a trigger operation on a location identifier in the first AR special effect, determining the target location identifier corresponding to the trigger operation; acquiring object introduction information of the target participant corresponding to the target location identifier; and displaying a third AR special effect containing the object introduction information of the target participant on the display interface of the AR device.

In the above embodiment, the interaction identifier is set as the location identifier of each participant in the conference room; triggering a location identifier in the first AR special effect displays, on the display interface of the AR device, a third AR special effect containing the corresponding object introduction information, so the user can learn in detail, seat by seat, the object information of the participant at each seat in the conference room. This simplifies the user's operations and improves viewing efficiency.
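A minimal sketch of this seat-lookup step, under assumed data structures (the seat map, field names, and effect label are illustrative, not from the disclosure):

```python
# Hypothetical sketch: mapping each seat's location identifier in the
# conference room to the participant occupying it, so that triggering a
# target location identifier yields that participant's introduction info.

SEAT_MAP = {
    "seat-1": {"name": "participant A", "topic": "project progress"},
    "seat-2": {"name": "participant B", "topic": "annual summary"},
}

def on_location_trigger(target_location_id):
    """Return the object introduction info for the participant at a seat."""
    info = SEAT_MAP.get(target_location_id)
    if info is None:
        return None  # seat unoccupied or identifier unknown
    return {"effect": "third_ar_effect", "intro": info}
```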
In an optional embodiment, the interaction identifier includes a conference live-broadcast identifier; the determining, in response to a trigger operation on the interaction identifier in the first AR special effect, interaction information corresponding to the interaction identifier, and displaying the interaction information on the display interface of the AR device includes: in response to a trigger operation on the conference live-broadcast identifier in the first AR special effect, acquiring a real-time live-broadcast picture of the conference room; and displaying a fourth AR special effect containing the real-time live-broadcast picture on the display interface of the AR device.

In the above embodiment, the interaction identifier is set as the conference live-broadcast identifier, so that a trigger operation on it acquires the corresponding real-time live-broadcast picture, and a fourth AR special effect containing that picture is displayed on the display interface of the AR device. In this way the user can watch the live picture of the conference in real time through the conference live-broadcast identifier, which simplifies how the user obtains conference-related information and widens the channels for obtaining it.
In an optional embodiment, the determining a first AR special effect containing entity detail information of the associated entity includes: determining an entity attribute of the associated entity; determining, from an AR template library, a target AR template matching the entity attribute; and determining the first AR special effect based on the target AR template and the entity detail information.

In the above embodiment, the target AR template matching the entity attribute is determined based on the entity attribute of the associated entity, and the first AR special effect is determined from the target AR template and the entity detail information. A first AR special effect that better fits the associated entity can thus be produced, which improves the comfort of viewing it and the user's viewing experience.
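The template-matching step above can be sketched as a lookup keyed by entity attribute; the library contents and field names below are assumptions for illustration only:

```python
# Illustrative sketch of the template-matching step: pick a target AR
# template from an AR template library based on the entity attribute,
# then combine it with the entity detail information to form the first
# AR special effect. Template names and fields are hypothetical.

AR_TEMPLATE_LIBRARY = {
    "meeting_room": "<room-3d-template>",
    "work_card": "<card-panel-template>",
}

def build_first_ar_effect(entity_attribute, entity_detail_info):
    template = AR_TEMPLATE_LIBRARY.get(entity_attribute)
    if template is None:
        raise KeyError(f"no AR template for attribute: {entity_attribute}")
    return {"template": template, "detail": entity_detail_info}
```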
In an optional embodiment, the entity detail information further includes a meeting room reservation identifier; the method further comprises: in response to a trigger operation on the meeting room reservation identifier in the first AR special effect, determining the conference times available for reservation and displaying them in the first AR special effect, where an available conference time is a time period during which the meeting room corresponding to the meeting room identifier has not been reserved; receiving a user's conference reservation operation on a target conference time among the available conference times; and generating conference reservation information based on the conference reservation operation, and displaying reservation-success information in the first AR special effect.

In the above embodiment, the conference times available for reservation are determined in response to a trigger operation on the meeting room reservation identifier in the first AR special effect and displayed in the first AR special effect. A user's reservation operation on one of those times is received, conference reservation information is generated, and reservation-success information is displayed in the first AR special effect. The user can thus reserve a meeting room through simple operations, which simplifies the user's work and improves staff efficiency.
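A minimal sketch, under assumed data structures, of the reservation flow: listing the unreserved time slots of a meeting room and then reserving one, producing the reservation info behind the "reservation success" display. All names are hypothetical:

```python
# Hypothetical sketch of the meeting room reservation flow. "reserved" is
# an assumed set of already-booked time slots for one meeting room.

def free_slots(all_slots, reserved):
    """Conference times still available for reservation, in order."""
    return [s for s in all_slots if s not in reserved]

def reserve(room_id, slot, reserved):
    """Attempt to reserve a slot; return conference reservation info."""
    if slot in reserved:
        return {"ok": False, "reason": "slot already reserved"}
    reserved.add(slot)
    return {"ok": True, "room": room_id, "slot": slot}
```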
In a second aspect, an embodiment of the present disclosure further provides a data display apparatus in an augmented reality scene, including: an acquiring unit configured to acquire a real scene image captured by an AR device; an identification unit configured to recognize an entity identifier in the real scene image and determine the associated entity matching the entity identifier, where the entity identifier comprises a meeting room identifier and/or a work card identifier; and a display unit configured to determine a first AR special effect containing the entity detail information of the associated entity and display the first AR special effect on a display interface of the AR device.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect described above, or of any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the first aspect or of any possible implementation of the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly described below. The drawings here are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a data presentation method in an augmented reality scene according to an embodiment of the present disclosure;
fig. 2(a) is a schematic diagram illustrating an effect of a meeting room identifier provided by an embodiment of the present disclosure;
FIG. 2(b) is a schematic diagram illustrating the effect of a work card sign provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating an effect of interaction identifiers including location identifiers of respective conference objects in a conference room according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an effect of another interactive identifier provided by the embodiment of the present disclosure including location identifiers of various conference objects in a conference room;
fig. 5 is a schematic diagram illustrating a data presentation apparatus in an augmented reality scene according to an embodiment of the present disclosure;
fig. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. The components of the embodiments, as generally described and illustrated in the figures here, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments is not intended to limit the scope of the claimed disclosure but merely represents selected embodiments. All other embodiments that a person skilled in the art can derive from the embodiments of the disclosure without creative effort fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an association relationship, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
It has been found that, in the prior art, office information is generally obtained from signs placed in an office scene. For example, employee information is obtained from an employee's work card, and the occupancy state of a conference room is indicated by a sign at the conference room door. However, this offers only a single channel for obtaining office information, and when the information changes, a sign that is not updated in time easily leaves the user without the latest office information. Moreover, the display form of such office information is monotonous and cannot meet users' diverse needs.
Based on the above research, the present disclosure provides a data display method, apparatus, device, and medium in an augmented reality scene. In the embodiments of the present disclosure, an entity identifier (for example, a meeting room identifier and/or a work card identifier) in a real scene image is recognized, and the associated entity matching that identifier is determined. By displaying on the AR device a first AR special effect containing the entity detail information of the associated entity, the detail information of the associated entity in the real scene can be obtained quickly and simply, which can improve staff working efficiency and, at the same time, make the real scene more engaging.
To facilitate understanding of the embodiments, a data display method in an augmented reality scene disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method provided in the embodiments of the present disclosure is generally a computer device with a certain computing capability.
Referring to fig. 1, a flowchart of a data display method in an augmented reality scene provided in an embodiment of the present disclosure is shown, where the method includes steps S101 to S105, where:
s101: and acquiring a real scene image acquired by the AR equipment.
Here, the augmented reality AR device denotes a device with an AR function; for example, it may be a mobile terminal device such as a smartphone or a tablet computer, or a wearable device such as AR glasses. The present disclosure does not specifically limit the augmented reality AR device, which may be chosen to meet actual needs.
In the embodiment of the disclosure, the real scene can be collected through the augmented reality AR device, and a real scene image corresponding to the real scene is obtained. For example, when the augmented reality AR device is a mobile phone, a real scene may be photographed by a camera of the mobile phone, so as to obtain a real scene image corresponding to the real scene.
S103: identifying entity identification in the real scene image, and determining a related entity matched with the entity identification; wherein the entity identification comprises a meeting room identification and/or a work card identification.
Here, the entity identifier is a pre-made mark (i.e., a marker). For example, it may be a two-dimensional code or a graphic manufactured in advance; it may also be another type of identifier. The present disclosure does not specifically limit the display form of the entity identifier.
In embodiments of the present disclosure, the entity identification includes a meeting room identification and/or a work card identification. Here, each entity identification may correspond to a matching associated entity. For example, as shown in fig. 2(a), in the case that the entity identifier is a conference room identifier, the entity identifier may correspond to a matching conference room. As shown in fig. 2(b), in the case that the entity identifier is a work card identifier, the entity identifier may correspond to a matching work card. The type of the associated entity matched with the entity identifier is not particularly limited by the disclosure, so as to meet the actual requirement.
For example, suppose there are three entity identifiers: entity identifier 1, entity identifier 2, and entity identifier 3. If entity identifiers 1 and 2 are meeting room identifiers and entity identifier 3 is a work card identifier, the associated entity matching entity identifier 1 may be meeting room 1, the associated entity matching entity identifier 2 may be meeting room 2, and the associated entity matching entity identifier 3 may be work card 1.
In the embodiment of the present disclosure, the conference room corresponding to the conference room identifier may be a conference room of a company, a conference room of a school, a conference room of a hospital, a conference room of a hotel, or the like.
In the embodiment of the disclosure, the work card corresponding to the work card identifier may be the work card at an employee's workstation; it may also be the work card of an auxiliary office device in the corresponding scene, for example, the work card of an assistant robot. The present disclosure does not specifically limit the scene of the work card, which may be chosen to meet actual needs.
In the embodiment of the present disclosure, an entity identifier in an image of a real scene may be recognized by an augmented reality AR device to determine an associated entity matching the entity identifier. For example, the augmented reality AR device is a mobile phone, and for the entity identifier 1, it is assumed that the associated entity matched with the entity identifier 1 is a conference room 1. At this time, the two-dimensional code corresponding to the entity identifier 1 may be identified by the mobile phone, and then it may be determined that the associated entity matched with the entity identifier 1 is the conference room 1.
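The identifier-to-entity matching described above can be sketched as a registry lookup; the decoding of the two-dimensional code itself is not shown, and all names are illustrative assumptions following the example of entity identifiers 1 to 3:

```python
# Hypothetical sketch: after the two-dimensional code in the real scene
# image has been decoded (decoder not shown), look up the associated
# entity matching the decoded entity identifier.

ENTITY_REGISTRY = {
    "entity-1": ("meeting_room", "meeting room 1"),
    "entity-2": ("meeting_room", "meeting room 2"),
    "entity-3": ("work_card", "work card 1"),
}

def match_associated_entity(decoded_identifier):
    """Return the kind and associated entity for an entity identifier."""
    entry = ENTITY_REGISTRY.get(decoded_identifier)
    if entry is None:
        return None  # unrecognized identifier
    kind, entity = entry
    return {"kind": kind, "associated_entity": entity}
```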
S105: determining a first AR special effect containing entity detail information of the associated entity, and displaying the first AR special effect on a display interface of the AR device.
In the embodiment of the present disclosure, after determining the associated entity matching the entity identification, entity detail information of the associated entity may be determined.
Here, the entity detail information of the associated entity indicates the detailed information related to the associated entity. For example, when the associated entity is determined to be a conference room, the entity detail information is information related to that conference room, such as information about the conference held in it and about the participants of that conference. When the associated entity is determined to be a work card, the entity detail information is information related to that work card, for example, information about the object corresponding to the work card and about the company to which that object belongs. The entity detail information of the associated entity is not specifically limited by the present disclosure, so as to meet actual needs.
In an embodiment of the present disclosure, a first AR special effect containing entity detail information of a related entity may be determined, and the first AR special effect may be presented in a presentation interface of an AR device.
Here, the first AR special effect is used to present the entity detail information of the associated entity combined with virtual information.
For example, in case the associated entity is a conference room, the first AR special effect containing entity detail information of the conference room may contain the following information: the three-dimensional effect graph of the conference room, the identification information on each seat in the conference room, the reservation information of the conference room, the conference subject of the conference held by the conference room, the name and the contact information of the unit or department reserving the conference room, and the like.
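An illustrative structure (field names are assumptions, not from the disclosure) for the information a first AR special effect might carry when the associated entity is a conference room, following the contents listed above:

```python
# Hypothetical structure for the conference room case of the first AR
# special effect: 3D room model, per-seat identification, reservation
# info, conference subject, and the reserving unit's contact details.

first_ar_effect = {
    "room_3d_model": "<three-dimensional effect graph>",
    "seats": {"seat-1": "participant A", "seat-2": "participant B"},
    "reservation_info": ["10:00-11:00 reserved by dept X"],
    "subject": "project progress",
    "organizer": {"name": "dept X", "contact": "x@example.com"},
}
```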
As can be seen from the above description, by recognizing an entity identifier (e.g., a meeting room identifier and/or a work card identifier) in an image of a real scene, the associated entity matching the identifier is determined; then, by displaying on the AR device a first AR special effect containing the entity detail information of the associated entity, that detail information can be obtained quickly and simply in the real scene, which can improve staff working efficiency and, at the same time, make the real scene more engaging.
The steps in S101 to S105 described above will be described in detail in two scenarios.
Scene one: a meeting scenario.
In the case that scene one is a company conference scene, a corresponding two-dimensional code may be set for each conference room in the company; as shown in fig. 2(a), the code may be placed at the door of the corresponding conference room. A company employee can then capture an image of the area where the two-dimensional code is located with a mobile phone and recognize the code in the image, thereby obtaining the detail information (i.e., the entity detail information) of the corresponding conference room. An AR special effect containing that detail information (i.e., the first AR special effect) can then be determined and displayed on the phone's display interface.
Scene two: an office scene.
In the office scene corresponding to scene two, a corresponding two-dimensional code can be set in the work card at each employee's workstation. An image of the area where that two-dimensional code is located can be captured with a mobile phone, and the code in the image recognized, thereby obtaining the detail information (i.e., the entity detail information) of the corresponding employee. An AR special effect containing that detail information (i.e., the first AR special effect) can then be determined and displayed on the phone's display interface.
As can be seen from the above description, the entity identifier includes a meeting room identifier and/or a work card identifier. In the case that the entity identifier includes a meeting room identifier, the entity detail information of the associated entity includes: the conference associated information of the meeting room to which the meeting room identifier belongs.
Here, the conference associated information includes: meeting state information and/or object information for each participant in the meeting room.
In an embodiment of the present disclosure, the conference state information may include at least one of: the conference time, the conference theme, and the conference progress state. The present disclosure does not specifically limit the content of the conference state information, so as to meet actual needs.
Here, the meeting time is used to indicate a start time and an end time of the meeting corresponding to the meeting room.
In the embodiment of the present disclosure, the conference room may correspond to one conference or to a plurality of conferences. When there are a plurality of conferences corresponding to the conference room, they may be presented in list form on the display interface of the AR device. The present disclosure does not specifically limit the presentation form of the plurality of conferences, nor the number of conferences corresponding to each conference room, so as to meet actual needs.
Here, the conference theme may be an annual summary, a project progress report, a New Year's Eve party, and the like. The conference theme is not specifically limited in the present disclosure, so as to meet actual requirements.
Here, the conference progress state may be one of: not started, ongoing, and finished. The conference progress state is not particularly limited by the present disclosure, so as to meet the actual needs.
In an embodiment of the present disclosure, the object information of each participant object may include at least one of: a conference position, a conference topic, an attendance status, and a personal profile. The content of the object information of each participant object is not specifically limited in the present disclosure, so as to meet actual requirements.
Here, the conference position is used to indicate the seat information of each participant object in the conference room. The conference topic is used to indicate the topic of the speech content of each participant object in the conference room. The attendance status is used to indicate the status of each participant object with respect to attending the conference; for example, the attendance status of a participant object may be one of: already in attendance and not yet in attendance. The content of the attendance status is not specifically limited in the present disclosure, so as to meet actual requirements.
Here, the personal profile may contain at least one of: the name of the participant object, the contact information of the participant object, and the company to which the participant object belongs. The content of the personal profile is not specifically limited in the present disclosure, so as to meet actual needs.
In the case that the entity identifier includes a work card identifier, the entity detail information of the associated entity includes: the object information of the object to which the work card identifier belongs. The object information of the object to which the work card identifier belongs comprises at least one of the following items: a personal introduction, a job title, and contact details.
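The entity detail information described above can be sketched as a simple data structure. The following is a minimal illustration; the field and class names are assumptions for illustration only and do not appear in the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ParticipantInfo:
    # Object information of a participant object: conference position,
    # conference topic, attendance status, personal profile.
    name: str
    seat: Optional[str] = None
    topic: Optional[str] = None
    attended: bool = False

@dataclass
class ConferenceInfo:
    # Conference state information: conference time, subject, progress state.
    start_time: str
    end_time: str
    subject: str
    progress: str  # one of "not started", "ongoing", "finished"
    participants: List[ParticipantInfo] = field(default_factory=list)

@dataclass
class WorkCardInfo:
    # Object information of the object to which the work card identifier
    # belongs: personal introduction, job title, contact details.
    introduction: str
    job_title: str
    contact: str
```

A conference room identifier would resolve to a `ConferenceInfo`, and a work card identifier to a `WorkCardInfo`, either of which can then populate the first AR special effect.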
Here, the personal introduction may include at least one of: name, gender, work experience, educational background, and the like. The content of the personal introduction is not specifically limited in the present disclosure, so as to meet actual needs.
Here, the job title is used to indicate the position held, within the company, by the employee to whom the work card belongs; for example, the job title may be staff member, team leader, manager, and the like. The content of the job title is not specifically limited in the present disclosure, with the positions actually existing in the company taken as the standard.
Here, the contact details are used to indicate the means by which the employee to whom the work card belongs can be contacted. For example, the contact details may be a telephone number, an email address, or the like. The contact details are not specifically limited in the present disclosure, so as to meet actual needs.
In this embodiment, by displaying entity detail information containing the above content on the display interface of the AR device, more detailed information related to the associated entity can be displayed, thereby meeting various user requirements, for example, conference query requirements and employee information query requirements, and further widening the application scenarios.
The above steps will be described in detail with reference to specific embodiments.
In the embodiment of the disclosure, firstly, a real scene image may be collected through an AR device, and then, an entity identifier in the real scene image may be identified, so as to determine an associated entity matching the entity identifier; thereafter, a first AR special effect containing entity detail information of the associated entity may be determined.
In an optional embodiment, the step S105 of determining a first AR special effect containing entity detail information of the associated entity specifically comprises the following steps:
step S21: determining entity attributes of the associated entities;
step S22: determining a target AR template matched with the entity attribute from an AR template library based on the entity attribute;
step S23: determining the first AR special effect based on the target AR template and the entity detail information.
In an embodiment of the present disclosure, the entity attribute of the associated entity may include at least one of: an entity type of the associated entity, a topic of the associated entity (e.g., a topic of a meeting held by a meeting room), a presentation time of the associated entity, a location of the associated entity. Besides, the entity attribute of the associated entity may also be other attribute information, which is not specifically limited in this disclosure.
Here, the entity type of the associated entity may be a conference room type or a work card type. For example, if an enterprise includes multiple conference rooms, each conference room may be named, such as "spring conference room", "summer conference room", "autumn conference room", and "winter conference room". In this case, the conference room type can be determined based on the name of each conference room. In addition, the conference room type may be determined based on the size and capacity of the conference room. The staff at a competition event may include cleaning staff, security staff, referees, and contestants; in this case, the work card type of a staff member's work card can be determined according to the staff member's work type.
Here, the topic of the associated entity may be the topic of a conference held in a conference room, for example: a business development conference, a debriefing conference, a New Year's Day celebration, a Spring Festival gala, and the like.
Here, the presentation time of the associated entity may be understood as the time at which the entity identifier of the associated entity is identified; for example, the identified time is "Double Eleven", "Mid-Autumn Festival", "Spring Festival", or "Valentine's Day".
Here, the location of the associated entity may be understood as a location where an entity identity of the associated entity is identified. For example, a certain conference room of a certain company, for example, a certain conference room of a certain hotel.
In the embodiment of the present disclosure, after the entity attribute of the associated entity is determined, a target AR template matching the entity attribute may be determined from an AR template library based on the entity attribute, and then the first AR special effect is determined based on the target AR template and the entity detail information.
In specific implementation, as can be seen from the above description, the entity attribute may correspond to multiple dimensions. In this case, the target AR template matching the entity attribute may be determined in one of the following manners:
the first method is as follows:
and searching an AR template matched with the entity attributes of the multiple dimensions in the AR template library. And if the matched AR template is found, determining the matched AR template as the target AR template.
Manner two:
Search the AR template library for the template element matching the entity attribute of each dimension, and combine the found template elements to obtain the target AR template.
Manner three:
Search the AR template library for an AR template matching the entity attributes of a subset of the multiple dimensions, and search for template elements matching the entity attributes of the remaining dimensions; then combine the matching AR template and template elements to obtain the target AR template.
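The three manners above can be sketched as a lookup over a template library keyed by attribute dimensions. The structure below (and the order in which the manners are tried) is an assumption for illustration, not the disclosure's required implementation:

```python
def find_target_template(template_library, element_library, attributes):
    """Sketch of the three matching manners.

    template_library: dict mapping a frozenset of (dimension, value)
        pairs to a full AR template name.
    element_library: dict mapping a single (dimension, value) pair to
        a template element name.
    attributes: dict of dimension -> value for the associated entity.
    Returns a template name (manner one) or a list of parts to combine
    (manners two and three).
    """
    items = frozenset(attributes.items())
    # Manner one: a single template matching all dimensions at once.
    if items in template_library:
        return template_library[items]
    # Manner three: a template matching a subset of the dimensions,
    # combined with elements for each of the remaining dimensions.
    for key, template in template_library.items():
        if key <= items:
            rest = items - key
            elements = [element_library[p] for p in rest if p in element_library]
            if len(elements) == len(rest):
                return [template] + elements
    # Manner two: combine per-dimension template elements only.
    return [element_library[p] for p in items if p in element_library]
```

In a real template library the combination step would merge renderable assets rather than concatenate names; the list returned here only marks which parts would be combined.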
The following describes how the target AR template is determined based on the entity attributes of each dimension.
(1) In the case that the entity attribute of the associated entity is the entity type of the associated entity, a target AR template matching the entity type of the associated entity may be determined in the AR template library based on the entity type of the associated entity.
For example, where the entity type of the associated entity is a conference room, a first AR template (or template element) matching the conference room may be selected in the AR template library. Meanwhile, a second AR template (or template element) matching the conference room may be determined from the first AR template (or template element) based on the size of the conference room (e.g., an 8-person conference room) and the layout of the conference room (e.g., the conference room is laid out with tables and chairs, or with chairs only), and the target AR template may be determined based on the second AR template (or template element).
For another example, where the entity type of the associated entity is a work card, a third AR template (or template element) matching the work card may be selected in the AR template library. Meanwhile, a fourth AR template (or template element) matching the work card may be determined from the third AR template (or template element) based on the object information that the work card needs to display (for example, the object information to be displayed includes a photo of the object, or does not include a photo of the object), and the target AR template may be determined based on the fourth AR template (or template element).
(2) In the case where the entity attribute of the associated entity is the subject of the associated entity, a target AR template matching the subject of the associated entity may be determined in the AR template library based on the subject of the associated entity.
Here, the topic of the associated entity may be a work topic (e.g., annual reports, quarterly reports, project progress, etc.) or a holiday topic (e.g., New Year's Day, a weekend, etc.). In this case, an AR template (or template element) matching the topic of the associated entity may be determined in the AR template library based on the topic of the associated entity, and the target AR template may be determined based on the matching AR template (or template element).
(3) In the case that the entity attribute of the associated entity is the presentation time of the associated entity, a target AR template matching the presentation time of the associated entity may be determined in the AR template library based on the presentation time of the associated entity.
In this embodiment of the present disclosure, the presentation time of the associated entity is used to indicate the recognition time of the AR device when recognizing the entity identifier in the real scene image. At this time, a target AR template matching the recognition time may be determined in the AR template library based on the recognition time.
For example, when the recognition time is in the afternoon, an AR template (or template element) containing a dusk scene may be determined from the AR template library, and the target AR template may be determined based on the matching AR template (or template element). For another example, in the case that the recognition time is a weekend, an AR template (or template element) containing a weekend celebration theme may be determined from the AR template library, and the target AR template may be determined based on the matching AR template (or template element).
In the disclosed embodiment, after the target AR template is determined, the first AR special effect may be determined based on the target AR template and the entity detail information of the associated entity.
In this embodiment, the target AR template matching the entity attribute is determined based on the entity attribute of the associated entity, and the first AR special effect is determined based on the target AR template and the entity detail information. In this way, a first AR special effect that better fits the associated entity can be determined, which can improve the comfort with which the user views the first AR special effect and improve the user's viewing experience.
In an optional implementation manner, in a case that the entity detail information includes the interactive identifier, after the presentation interface of the AR device presents the first AR special effect, the method of the present disclosure further includes the following steps:
step S31: and responding to the trigger operation aiming at the interaction identifier in the first AR special effect, determining interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR equipment.
In the embodiment of the present disclosure, in a case that the entity detail information includes the interaction identifier, the user may trigger the interaction identifier in the first AR special effect after the AR device shows the first AR special effect. At this time, the interaction information corresponding to the interaction identifier may be determined in response to a trigger operation of the user for the interaction identifier.
In the embodiment of the present disclosure, the number of the interaction identifiers in the first AR special effect may be one, or may be multiple. The number of the interactive identifications is not particularly limited in the present disclosure, so as to meet the actual requirement.
Here, the interaction identification may correspond to a plurality of interaction dimensions, each interaction dimension being used to indicate a different type of interaction content. At this point, the interactive identification of each dimension may be presented in the first AR special effect.
In the embodiment of the present disclosure, the triggering operation for the interaction identifier may be a click operation, a long-press operation, or a slide operation performed by the user on a target interaction identifier among the interaction identifiers, or may be a voice interaction operation performed by the user on the target interaction identifier. The triggering operation for the interaction identifier is not specifically limited in the present disclosure, as long as it can be implemented.
In the embodiment of the present disclosure, after the interaction information corresponding to the interaction identifier is determined, the interaction information may be displayed on a display interface of the AR device. For example, the interaction information may be presented in a presentation interface of the AR device, or a jump interface may be generated based on the interaction information and the interaction information may be presented in an interface after the jump. The way of displaying the interactive information is not specifically limited in the present disclosure, so as to meet the actual need.
Here, if there are multiple interaction identifiers and different interaction identifiers correspond to different interaction dimensions, then after a trigger operation of the user on an interaction identifier is detected, the identifier type of the interaction identifier triggered by the user may be determined, and the corresponding interaction information may then be determined according to the identifier type.
In this embodiment, by adding the interaction identifier to the entity detail information, interactive operation on the first AR special effect can be supported, which simplifies the user's operation of viewing information associated with the displayed content in the first AR special effect and saves the user's time. Meanwhile, the diversity and interest of the data display method are enhanced, further improving the user experience.
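The type-based dispatch just described, where the identifier type of a triggered interaction identifier selects the interaction information to show, can be sketched as follows. The type names and result fields are assumptions for illustration; they correspond to the four cases discussed next:

```python
def handle_trigger(identifier_type, payload):
    # Hypothetical dispatch from the identifier type of the triggered
    # interaction identifier to the corresponding interaction information.
    handlers = {
        # avatar identifier -> object introduction (second AR special effect)
        "avatar": lambda p: {"show": "object_introduction", "object": p},
        # communication identifier -> jump to a communication page
        "communication": lambda p: {"jump": "communication_page", "content": p},
        # position identifier -> object introduction (third AR special effect)
        "position": lambda p: {"show": "object_introduction", "seat": p},
        # conference live identifier -> live stream (fourth AR special effect)
        "live": lambda p: {"show": "live_stream", "room": p},
    }
    if identifier_type not in handlers:
        raise ValueError(f"unknown interaction identifier type: {identifier_type}")
    return handlers[identifier_type](payload)
```

Displaying the returned interaction information in the presentation interface, or generating a jump interface from it, would then proceed as described above.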
The above interaction process will be described with reference to specific interaction identifiers.
Case one: the interaction identifier is the avatar identifier of the target object.
In this case, for step S31, in response to the trigger operation for the interaction identifier in the first AR special effect, determining the interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR device, specifically including the following steps:
step S41: in response to a trigger operation for an avatar identifier in the first AR special effect, determining object introduction information of the target object corresponding to the target avatar identifier selected and triggered by the user; wherein the target object comprises: the object to which the work card identifier belongs, or a participant object in the conference room to which the conference room identifier belongs;
step S42: and displaying a second AR special effect containing the object introduction information on a display interface of the AR equipment.
In this embodiment of the present disclosure, in a case that the entity identifier is a meeting room identifier, a first AR special effect including entity detail information of the meeting room may be displayed on a display interface of an AR device.
In the case where the interaction identifier included in the entity detail information is an avatar identifier of a target object, then the target object may be a participant object of the conference room, and the avatar identifier may be an avatar identifier of the participant object.
In specific implementation, a first AR special effect including a layout structure of a conference table in the conference room can be displayed on a display interface of the AR device. At each location of the conference table, the avatar identification of each participant may be presented.
In addition, the avatar identifier of each participant object may be sequentially displayed at a designated display position of the display interface of the AR device; for example, the avatar identifiers of the participant objects are sequentially displayed at the upper right corner of the display interface. When the number of participant objects is large, key conference objects (e.g., the conference host or a conference speaker) may be shown preferentially.
In this embodiment of the disclosure, when the entity identifier is a work card identifier, a first AR special effect including entity detail information of the work card may be displayed on a display interface of the AR device.
In the case that the interaction identifier included in the entity detail information is the avatar identifier of the target object, the target object may be the object to which the work card identifier belongs, and the avatar identifier may be the avatar identifier of the object to which the work card identifier belongs.
Here, the avatar identifier may be an avatar image provided in advance for the target object, or may be a customized cartoon avatar.
In the embodiment of the present disclosure, in the case that the target object is a participant object in the conference room to which the conference room identifier belongs, the object introduction information of the target object may be the information included in the above-mentioned personal profile. In the case that the target object is the object to which the work card identifier belongs, the object introduction information of the target object may be the information included in the personal introduction, which will not be described in detail herein.
In the embodiment of the present disclosure, after determining the object introduction information of the target object corresponding to the target avatar identification selected and triggered by the user, a second AR special effect including the object introduction information may be displayed in a display interface of the AR device.
Here, the object introduction information displayed by the second AR special effect may be displayed in a text or a video, and the display form of the object introduction information is not specifically limited in the present disclosure to meet the actual need.
In the above embodiment, the interaction identifier is set as the avatar identifier of the target object, the avatar identifier is triggered to obtain the object introduction information corresponding to the target object, and the second AR special effect containing the object introduction information is displayed. In this way, the user can view the object introduction information of the target object through a simple trigger operation, making the user's way of obtaining information simpler and faster and further improving the user's information acquisition efficiency.
Case two: the interactive identification is the communication identification of the target object.
In this case, for step S31, in response to the trigger operation for the interaction identifier in the first AR special effect, determining the interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR device, specifically including the following steps:
step S51: in response to the triggering operation aiming at the communication identifier in the first AR special effect, determining the identifier content of the communication identifier, and determining a communication page associated with a target communication identifier selected and triggered by a user;
step S52: and controlling a display interface of the AR device to jump from the first AR special effect to a communication page containing the identification content.
In this embodiment of the present disclosure, in a case that the entity identifier is a meeting room identifier, a first AR special effect including entity detail information of the meeting room may be displayed on a display interface of an AR device.
In the case where the interaction identifier included in the entity detail information is a communication identifier of a target object, then the target object may be a participant object in the conference room, and the communication identifier may be a communication identifier of the participant object.
In specific implementation, a first AR special effect including a layout structure of a conference table in the conference room can be displayed on a display interface of the AR device. At each location of the conference table, the communication identification of each participant may be presented.
In addition, the communication identifier of each participant object may be sequentially displayed at a designated display position of the display interface of the AR device; for example, the communication identifiers of the participant objects are sequentially displayed at the upper right corner of the display interface. When the number of participant objects is large, key conference objects (e.g., the conference host or a conference speaker) may be shown preferentially.
In this embodiment of the disclosure, when the entity identifier is a work card identifier, a first AR special effect including entity detail information of the work card may be displayed on a display interface of the AR device.
In the case that the interaction identifier included in the entity detail information is the communication identifier of the target object, the target object may be the object to which the work card identifier belongs, and the communication identifier may be the communication identifier of the object to which the work card identifier belongs.
In this disclosure, the communication identifier is used to indicate an identifier corresponding to a contact manner that can be associated with the target object, for example, the communication identifier may be a phone identifier, a mailbox identifier, a software identifier of the instant messaging software, or the like. The form of the communication identifier is not particularly limited in the present disclosure, so as to meet the actual requirement.
In the embodiment of the present disclosure, the identifier content of the communication identifier is used to indicate the specific content of the contact means by which the target object can be contacted. For example, when the communication identifier is a telephone identifier, the identifier content is a telephone number; when the communication identifier is a mailbox identifier, the identifier content is an email account; and when the communication identifier is an instant messaging software identifier, the identifier content is the software account or the mobile phone number used when registering the software.
In the embodiment of the disclosure, in response to a trigger operation of a user on a communication identifier in a first AR special effect, an identifier content of the communication identifier may be determined, and a communication page related to a target communication identifier corresponding to the trigger operation may be determined.
For example, in the case where the target communication identifier is a telephone identifier, the communication page of the target communication identifier may be a dial page. In the case where the target communication identifier is a mailbox identifier, the communication page may be a mail editing page. In the case where the target communication identifier is an instant messaging software identifier, the communication page may be a chat page of the instant messaging software or an add-friend page. The communication page associated with the target communication identifier is not specifically limited in the present disclosure, so as to meet actual requirements.
In the above embodiment, by setting the interaction identifier as the communication identifier of the target object and jumping to the communication page of the target communication identifier when the target communication identifier is triggered, the user can directly send a message to the target object based on the communication page, which simplifies the path by which the user contacts the target object and saves the user's operation and time costs.
Further, for the above steps: after the identification content of the communication identification is determined in response to the triggering operation for the communication identification in the first AR special effect, a first sub AR special effect containing the identification content may be presented in a presentation interface of the AR device. At this time, the identification content may be saved in the AR device in response to a trigger operation of the user for the first sub AR special effect.
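The mapping from a target communication identifier to its communication page in steps S51 and S52 can be sketched as a small lookup. The page names below are assumptions for illustration only:

```python
def communication_page(identifier_type, content):
    # Map the target communication identifier to the communication page
    # the display interface should jump to, carrying the identifier
    # content (telephone number, email account, software account) along.
    pages = {
        "telephone": "dial_page",
        "mailbox": "mail_edit_page",
        "instant_messaging": "chat_page",
    }
    page = pages.get(identifier_type)
    if page is None:
        raise ValueError(f"unsupported communication identifier: {identifier_type}")
    return {"page": page, "content": content}
```

The AR device would then render the returned page in place of the first AR special effect, pre-filled with the identifier content.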
Case three: the interactive identification is the position identification of each participant object in the conference room.
In this case, for step S31, in response to the trigger operation for the interaction identifier in the first AR special effect, determining the interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR device, specifically including the following steps:
step S61: responding to a trigger operation aiming at a position identifier in the first AR special effect, and determining a target position identifier corresponding to the trigger operation;
step S62: acquiring object introduction information of a target participant corresponding to the target position identification;
step S63: and displaying a third AR special effect containing the object introduction information of the corresponding target participant object on a display interface of the AR equipment.
In this embodiment of the present disclosure, in a case that the entity identifier is a meeting room identifier, a first AR special effect including entity detail information of the meeting room may be displayed on a display interface of an AR device.
In the case where the interaction identifier included in the entity detail information is a location identifier of a target object, then the target object may be a participant object in the conference room, and the location identifier may be a location identifier of the participant object.
The position identifier can be the object number of the participant object corresponding to each seat in the conference room, or the seat number in the conference room.
In the embodiment of the present disclosure, in the case where the position of each participant in the conference room is identified as the object number of the participant corresponding to each seat, the respective seats in the conference room and the object numbers of the participants at the respective seats may be shown in the first AR special effect.
For example, as shown in fig. 3, assume that there is a rectangular conference table in the conference room, with 8 seats around the table. At this time, the object numbers of the participants corresponding to each seat in the conference room are shown in the first AR special effect. At this time, the object number is the location identifier of the corresponding participant object.
In the embodiment of the present disclosure, when the position identifier of each participant in the conference room is the seat number in the conference room, each seat in the conference room and the seat number of each seat may be displayed in the first AR special effect, and the seat number corresponding to each participant may be displayed.
For example, as shown in fig. 4, assume that there is a rectangular conference table in the conference room, with 8 seats around the table. In this case, the object information of each participant and the seat number corresponding to each participant can be displayed on the conference room side in the first AR special effect. At this time, the seat number is the position identifier of the corresponding participant.
In the embodiment of the present disclosure, in response to a trigger operation for a position identifier in a first AR special effect, a target position identifier corresponding to the trigger operation may be determined. Then, object introduction information of the corresponding target participant object with the target position identification can be acquired.
Here, the object introduction information of the target participant object is the same as or partially the same as the object introduction information obtained by triggering the avatar identification of the participant object in the conference room, and thus will not be described in detail herein.
In the embodiment of the present disclosure, the object introduction information may be text content or video content, and the display form of the object introduction information is not specifically limited in the present disclosure to meet the actual need.
In the embodiment of the present disclosure, after the object introduction information of the target participant object is obtained, a third AR special effect including the object introduction information may be displayed on a display interface of the AR device. In this case, the third AR special effect may be the same as or different from the second AR special effect determined in the case where the target object is a conference object in a conference room, and this disclosure does not specifically limit this to be achieved.
In this embodiment, the interaction identifier is set as the position identifier of each participant object in the conference room, the position identifier in the first AR special effect is triggered, and the third AR special effect containing the object introduction information is displayed on the display interface of the AR device. In this way, the user can learn, in detail and based on the seats in the conference room, the object information of the participant object corresponding to each seat, which simplifies the user's operation and improves the user's viewing efficiency.
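Steps S61 to S63 amount to resolving a seat around the conference table to the object introduction information of the participant object seated there. A minimal sketch, assuming seats are numbered 1..n as in the 8-seat table of fig. 3 and 4:

```python
def build_seating(participants):
    # Number seats 1..n around the conference table and attach each
    # participant object's introduction information to its seat.
    return {seat: obj for seat, obj in enumerate(participants, start=1)}

def trigger_position(seating, target_seat):
    # Steps S61-S63: resolve the target position identifier (a seat
    # number here), fetch the object introduction information of the
    # target participant object, and wrap it for the third AR special
    # effect.
    if target_seat not in seating:
        raise KeyError(f"no participant object at seat {target_seat}")
    return {"third_ar_effect": seating[target_seat]}
```

If the position identifiers are object numbers rather than seat numbers, only the keying of the table changes; the lookup itself is the same.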
Case four: the interaction identifier is a conference live identifier.
In this case, for step S31, in response to the trigger operation for the interaction identifier in the first AR special effect, determining the interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR device, specifically including the following steps:
step S71: responding to the trigger operation aiming at the conference live broadcast identification in the first AR special effect, and acquiring a real-time live broadcast picture of the conference room;
step S72: and displaying a fourth AR special effect containing the real-time live broadcast picture on a display interface of the AR equipment.
In the embodiment of the present disclosure, when the entity identifier is a meeting room identifier, the conference live identifier of the meeting room to which the meeting room identifier belongs may be displayed in the first AR special effect by identifying the meeting room identifier.
At this time, a real-time live broadcast picture of the conference room may be acquired in response to a trigger operation for the live broadcast identifier of the conference, and a fourth AR special effect including the real-time live broadcast picture may be displayed in a display interface of the AR device.
In the embodiment of the present disclosure, a conference live identifier may be set correspondingly for each reserved conference in the conference room. After a conference starts, its conference live identifier is in a triggerable state; after the conference ends, its conference live identifier may change into a conference review identifier. The user can review the conference picture of that conference by triggering the conference review identifier.
Based on this, in the embodiment of the present disclosure, the state of the conference identifier corresponding to a conference may be determined according to the conference state of the reserved conference in the conference room, where the conference state may be: already ended, in progress, or not yet started.
For the conference which is already finished, the conference identifier can be a conference review identifier; for the ongoing conference, the conference identifier may be a live conference identifier; for conferences that have not yet started, the conference identification may be a conference not-started identification.
In the embodiment of the present disclosure, the conference review identifier of a conference that has ended and the conference live identifier of a conference that is in progress may both be displayed in the first AR special effect. At this time, in response to a trigger operation for the corresponding conference identifier (e.g., the conference review identifier or the conference live identifier), a recorded playback of the ended conference or a real-time live broadcast picture of the ongoing conference may be acquired, and a fourth AR special effect containing that picture may be displayed on the display interface of the AR device.
In the above embodiment, the interaction identifier is set as the conference live identifier, so that a real-time live broadcast picture corresponding to the identifier can be acquired by triggering it, and a fourth AR special effect containing the real-time live broadcast picture is displayed on the display interface of the AR device. In this way, the user can obtain the live broadcast picture of the conference in real time through the conference live identifier, which simplifies the way the user obtains conference-related information and widens the channels through which such information can be obtained.
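The three conference states and the identifier each one maps to (not-started, live, review) can be sketched as a small state function. The function name and state strings are illustrative assumptions, not part of the disclosure.

```python
from datetime import datetime

def conference_identifier_state(start: datetime, end: datetime, now: datetime) -> str:
    """Map a reserved conference's time window to the state of its conference
    identifier, following the three conference states described above."""
    if now < start:
        return "not_started_identifier"  # conference has not started yet
    if now <= end:
        return "live_identifier"         # triggerable: opens the real-time live picture
    return "review_identifier"           # triggerable: opens the recorded playback

start = datetime(2022, 1, 1, 9, 0)
end = datetime(2022, 1, 1, 10, 0)
print(conference_identifier_state(start, end, datetime(2022, 1, 1, 9, 30)))  # live_identifier
```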
Case five: the interactive identifier is a conference room reservation identifier.
In this case, the method of the present disclosure specifically further includes the steps of:
step S81: responding to a trigger operation aiming at a meeting room appointment mark in the first AR special effect, determining meeting time to be appointed, and displaying the meeting time to be appointed in the first AR special effect; the meeting time to be reserved is a time period which is not reserved in the meeting room corresponding to the meeting room identification;
step S82: receiving a conference reservation operation of a user on a target conference time in the conference time to be reserved;
step S83: and generating conference reservation information based on the conference reservation operation, and displaying reservation success information in the first AR special effect.
In the embodiment of the present disclosure, when the entity identifier is a meeting room identifier, a corresponding meeting room reservation identifier may be set for the meeting room to which the meeting room identifier belongs, and a first AR special effect containing the meeting room reservation identifier is displayed on the display interface of the AR device. The display form of the meeting room reservation identifier is not specifically limited in the present disclosure and may be set according to actual requirements.
In the embodiment of the disclosure, in response to a trigger operation of a user for a meeting room reservation identifier in a first AR special effect, a period (that is, a meeting time to be reserved) that is not reserved in a meeting room corresponding to the meeting room identifier is determined, and the meeting time to be reserved is displayed in a display interface of an AR device.
In an alternative embodiment, conferences with different time lengths can be set in advance for the conference room corresponding to the conference room identifier. At this time, the conference time to be reserved may be a start time and an end time of each conference.
In another alternative embodiment, the start time of the conference room corresponding to the conference room identifier may be set. And then, the user can select the time length of the conference according to the requirement, and further determine the starting time of the next appointment-able conference according to the time length of the conference selected by the user.
The present disclosure does not specifically limit how the conference time to be reserved is determined, which may be set according to actual requirements.
After the to-be-reserved conference time is displayed in the first AR special effect, a conference reservation operation of a user for a target conference time in the to-be-reserved conference time may be received, and conference reservation information may be generated based on the conference reservation operation. After the conference reservation is successful, reservation success information can be shown in the first AR special effect.
In the above embodiment, the conference time to be reserved may be determined in response to a trigger operation for the meeting room reservation identifier in the first AR special effect, and displayed in the first AR special effect. A conference reservation operation of the user on the conference time to be reserved is then received, conference reservation information is generated, and reservation success information is displayed in the first AR special effect. In this way, the user can reserve a conference room through a simple operation, which simplifies the user's operation and, at the same time, improves the working efficiency of staff.
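Steps S81 to S83 can be sketched as follows, using integer hours for simplicity. The function names, the interval representation, and the reservation-info dictionary are hypothetical, not taken from the disclosure.

```python
def free_slots(day_start, day_end, reserved):
    """Step S81: compute the unreserved periods (the 'conference time to be
    reserved') of a room, given reserved (start, end) pairs."""
    slots, cursor = [], day_start
    for start, end in sorted(reserved):
        if cursor < start:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        slots.append((cursor, day_end))
    return slots

def reserve(reserved, target):
    """Steps S82-S83: accept a conference reservation for a target time and
    generate reservation info; reject it if it overlaps an existing booking."""
    for start, end in reserved:
        if target[0] < end and start < target[1]:  # interval overlap test
            return None
    reserved.append(target)
    return {"status": "reservation successful", "time": target}

reserved = [(9, 10), (13, 14)]
print(free_slots(8, 18, reserved))  # [(8, 9), (10, 13), (14, 18)]
print(reserve(reserved, (10, 11)))  # {'status': 'reservation successful', 'time': (10, 11)}
```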
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides a data display apparatus in an augmented reality scene corresponding to the data display method in the augmented reality scene. Since the principle by which the apparatus solves the problem is similar to that of the method, the implementation of the apparatus may refer to the implementation of the method, and repeated parts are not described again.
Referring to fig. 5, a schematic diagram of a data display apparatus in an augmented reality scene provided in an embodiment of the present disclosure is shown. The apparatus includes: an acquisition unit 51, an identification unit 52 and a display unit 53; wherein:
the acquiring unit is used for acquiring a real scene image acquired by the AR equipment;
the identification unit is used for identifying the entity identification in the real scene image and determining the associated entity matched with the entity identification; wherein the entity identification comprises a meeting room identification and/or a work card identification;
and the display unit is used for determining a first AR special effect containing the entity detail information of the associated entity and displaying the first AR special effect on a display interface of the AR equipment.
As can be seen from the above description, by recognizing an entity identifier (e.g., a meeting room identifier and/or a work card identifier) in a real scene image, an associated entity matching the entity identifier is determined; then, by displaying a first AR special effect containing the entity detail information of the associated entity on the AR device, the entity detail information of the associated entity in the real scene can be acquired quickly and simply, which can improve the working efficiency of staff and, at the same time, enhance the interest of the real scene.
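The three units of fig. 5 form an acquire-identify-display pipeline, which can be sketched as a single class. This is a hypothetical illustration: the class name, the dictionary-based "recognition" stand-in, and the effect structure are assumptions; a real identification unit would run visual detection on the image rather than read a field from it.

```python
class ArDataPresenter:
    """Sketch of the apparatus in fig. 5: acquisition unit 51,
    identification unit 52 and display unit 53, chained on one real scene image."""

    def __init__(self, identifier_to_entity, entity_details):
        self.identifier_to_entity = identifier_to_entity  # e.g. "room-A" -> entity
        self.entity_details = entity_details              # entity -> its detail info

    def acquire(self, camera_frame):
        # acquisition unit: obtain the real scene image collected by the AR device
        return camera_frame

    def identify(self, image):
        # identification unit: a real system would run visual detection here;
        # this sketch assumes the recognized identifier is carried in the frame.
        return self.identifier_to_entity.get(image.get("entity_identifier"))

    def display(self, entity):
        # display unit: determine the first AR special effect containing
        # the entity detail information of the associated entity
        return {"type": "first_ar_effect",
                "entity": entity,
                "details": self.entity_details.get(entity, {})}

presenter = ArDataPresenter({"room-A": "Meeting Room A"},
                            {"Meeting Room A": {"state": "in use"}})
frame = presenter.acquire({"entity_identifier": "room-A"})
effect = presenter.display(presenter.identify(frame))
print(effect["details"])  # {'state': 'in use'}
```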
In one possible embodiment, the entity detail information includes: conference associated information of the meeting room to which the meeting room identifier belongs, and/or object information of the object to which the work card identifier belongs; the conference associated information includes: conference state information and/or object information of each participant object in the meeting room.
In one possible embodiment, the apparatus is further configured to: and responding to the trigger operation aiming at the interaction identifier in the first AR special effect, determining interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR equipment.
In one possible embodiment, the apparatus is further configured to: in response to the triggering operation aiming at the head portrait identification in the first AR special effect, determining object introduction information of a target object corresponding to the target head portrait identification selected and triggered by the user; wherein the target object comprises: the work card mark belongs to an object, or the conference room mark belongs to a conference object in a conference room; and displaying a second AR special effect containing the object introduction information on a display interface of the AR equipment.
In one possible embodiment, the apparatus is further configured to: in response to the triggering operation aiming at the communication identifier in the first AR special effect, determining the identifier content of the communication identifier, and determining a communication page associated with a target communication identifier selected and triggered by a user; and controlling a display interface of the AR device to jump from the first AR special effect to a communication page containing the identification content.
In one possible embodiment, the apparatus is further configured to: responding to a trigger operation aiming at a position identifier in the first AR special effect, and determining a target position identifier corresponding to the trigger operation; acquiring object introduction information of a target participant corresponding to the target position identification; and displaying a third AR special effect containing the object introduction information of the corresponding target participant object on a display interface of the AR equipment.
In one possible embodiment, the apparatus is further configured to: responding to the trigger operation aiming at the conference live broadcast identification in the first AR special effect, and acquiring a real-time live broadcast picture of the conference room; and displaying a fourth AR special effect containing the real-time live broadcast picture on a display interface of the AR equipment.
In one possible embodiment, the display unit is further configured to: determining entity attributes of the associated entities; determining a target AR template matched with the entity attribute from an AR template library based on the entity attribute; determining the first AR special effect based on the target AR template and the entity detail information.
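The template-matching behavior of the display unit (entity attribute → target AR template → first AR special effect) can be sketched as a lookup plus fill. The template library contents and the attribute keys are invented for illustration only.

```python
# Assumed AR template library keyed by entity attribute
AR_TEMPLATE_LIBRARY = {
    "meeting_room": {"layout": "wall_panel", "slots": ["state", "participants"]},
    "work_card":    {"layout": "floating_card", "slots": ["avatar", "contact"]},
}

def build_first_ar_effect(entity_attribute, entity_details):
    """Select the target AR template matching the entity attribute from the
    template library, then fill it with the entity detail information."""
    template = AR_TEMPLATE_LIBRARY[entity_attribute]
    content = {slot: entity_details.get(slot) for slot in template["slots"]}
    return {"layout": template["layout"], "content": content}

effect = build_first_ar_effect("meeting_room", {"state": "idle", "participants": []})
print(effect["layout"])  # wall_panel
```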
In one possible embodiment, the apparatus is further configured to: responding to a trigger operation aiming at a meeting room appointment mark in the first AR special effect, determining meeting time to be appointed, and displaying the meeting time to be appointed in the first AR special effect; the meeting time to be reserved is a time period which is not reserved in the meeting room corresponding to the meeting room identification; receiving a conference reservation operation of a user on a target conference time in the conference time to be reserved; and generating conference reservation information based on the conference reservation operation, and displaying reservation success information in the first AR special effect.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Corresponding to the data display method in the augmented reality scene in fig. 1, an embodiment of the present disclosure further provides a computer device 600, and as shown in fig. 6, a schematic structural diagram of the computer device 600 provided in the embodiment of the present disclosure includes:
a processor 61, a memory 62, and a bus 63. The memory 62 is used for storing execution instructions and includes a memory 621 and an external memory 622. The memory 621, also referred to as an internal memory, temporarily stores operation data in the processor 61 and data exchanged with the external memory 622 such as a hard disk; the processor 61 exchanges data with the external memory 622 through the memory 621. When the computer device 600 runs, the processor 61 communicates with the memory 62 through the bus 63, so that the processor 61 executes the following instructions:
acquiring a real scene image acquired by augmented reality AR equipment;
identifying entity identification in the real scene image, and determining a related entity matched with the entity identification; wherein the entity identification comprises a meeting room identification and/or a work card identification;
determining a first AR special effect containing entity detail information of the associated entity, and displaying the first AR special effect on a display interface of the AR device.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the data presentation method in the augmented reality scenario in the embodiment of the foregoing method are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product, where the computer program product carries program code, and the instructions included in the program code may be used to execute the steps of the data display method in the augmented reality scene described in the foregoing method embodiments; reference may be made to the foregoing method embodiments for details, which are not described again here.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
The present disclosure relates to the field of augmented reality. By acquiring a real scene image of a real environment and detecting or identifying relevant features, states and attributes of the image by means of various vision-related algorithms, an AR effect combining the virtual and the real, matched with a specific application, can be obtained. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application may involve not only interactive scenes such as navigation, explanation, reconstruction and virtual-effect overlay display related to real scenes or objects, but also special-effect processing related to people, such as interactive scenes of makeup beautification, body beautification, special effect display and virtual model display. The detection or identification of relevant features, states and attributes of the real scene image may be realized through a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of their technical features; such modifications, changes or replacements do not depart from the spirit and scope of the embodiments of the present disclosure and shall all be covered within them. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A data display method in an augmented reality scene is characterized by comprising the following steps:
acquiring a real scene image acquired by augmented reality AR equipment;
identifying entity identification in the real scene image, and determining a related entity matched with the entity identification; wherein the entity identification comprises a meeting room identification and/or a work card identification;
determining a first AR special effect containing entity detail information of the associated entity, and displaying the first AR special effect on a display interface of the AR device.
2. The method of claim 1, wherein the entity detail information comprises: conference associated information of the meeting room to which the meeting room identifier belongs, and/or object information of the object to which the work card identifier belongs; the conference associated information includes: conference state information and/or object information of each participant object in the meeting room.
3. The method according to claim 1 or 2, characterized in that the entity detail information further comprises an interaction identifier;
after the presentation interface of the AR device presents the first AR special effect, the method further comprises:
and responding to the trigger operation aiming at the interaction identifier in the first AR special effect, determining interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR equipment.
4. The method of claim 3, wherein the interaction identifier is an avatar identifier of a target object;
the determining, in response to the trigger operation for the interaction identifier in the first AR special effect, interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR device includes:
in response to the triggering operation aiming at the head portrait identification in the first AR special effect, determining object introduction information of a target object corresponding to the target head portrait identification selected and triggered by the user; wherein the target object comprises: the work card mark belongs to an object, or the conference room mark belongs to a conference object in a conference room;
and displaying a second AR special effect containing the object introduction information on a display interface of the AR equipment.
5. The method according to claim 3 or 4, wherein the interaction identifier is a communication identifier of a target object;
the determining, in response to the trigger operation for the interaction identifier in the first AR special effect, interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR device includes:
in response to the triggering operation aiming at the communication identifier in the first AR special effect, determining the identifier content of the communication identifier, and determining a communication page associated with a target communication identifier selected and triggered by a user;
and controlling a display interface of the AR device to jump from the first AR special effect to a communication page containing the identification content.
6. The method according to any one of claims 3 to 5, wherein the interactive identifier comprises a location identifier of each participant in the conference room;
the determining, in response to the trigger operation for the interaction identifier in the first AR special effect, interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR device includes:
responding to a trigger operation aiming at a position identifier in the first AR special effect, and determining a target position identifier corresponding to the trigger operation;
acquiring object introduction information of a target participant corresponding to the target position identification;
and displaying a third AR special effect containing the object introduction information of the corresponding target participant object on a display interface of the AR equipment.
7. The method of any of claims 3 to 6, wherein the interaction identifier comprises a conference live identifier;
the determining, in response to the trigger operation for the interaction identifier in the first AR special effect, interaction information corresponding to the interaction identifier, and displaying the interaction information on a display interface of the AR device includes:
responding to the trigger operation aiming at the conference live broadcast identification in the first AR special effect, and acquiring a real-time live broadcast picture of the conference room;
and displaying a fourth AR special effect containing the real-time live broadcast picture on a display interface of the AR equipment.
8. The method according to any of claims 1 to 7, wherein said determining a first AR special effect containing entity detail information of said associated entity comprises:
determining entity attributes of the associated entities;
determining a target AR template matched with the entity attribute from an AR template library based on the entity attribute;
determining the first AR special effect based on the target AR template and the entity detail information.
9. The method according to any one of claims 1 to 8, wherein said entity detail information further comprises a meeting room appointment identifier; the method further comprises the following steps:
responding to a trigger operation aiming at a meeting room appointment mark in the first AR special effect, determining meeting time to be appointed, and displaying the meeting time to be appointed in the first AR special effect; the meeting time to be reserved is a time period which is not reserved in the meeting room corresponding to the meeting room identification;
receiving a conference reservation operation of a user on a target conference time in the conference time to be reserved;
and generating conference reservation information based on the conference reservation operation, and displaying reservation success information in the first AR special effect.
10. A data presentation device in an augmented reality scene, comprising:
the acquiring unit is used for acquiring a real scene image acquired by the AR equipment;
the identification unit is used for identifying the entity identification in the real scene image and determining the associated entity matched with the entity identification; wherein the entity identification comprises a meeting room identification and/or a work card identification;
and the display unit is used for determining a first AR special effect containing the entity detail information of the associated entity and displaying the first AR special effect on a display interface of the AR equipment.
11. A computer device, comprising: processor, memory and bus, the memory stores machine readable instructions executable by the processor, when the computer device runs, the processor and the memory communicate through the bus, the machine readable instructions when executed by the processor perform the steps of the data presentation method in the augmented reality scene according to any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, performs the steps of the data presentation method in an augmented reality scenario according to any one of claims 1 to 9.
CN202111658810.2A 2021-12-30 2021-12-30 Data display method, device, equipment and medium in augmented reality scene Withdrawn CN114332426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111658810.2A CN114332426A (en) 2021-12-30 2021-12-30 Data display method, device, equipment and medium in augmented reality scene

Publications (1)

Publication Number Publication Date
CN114332426A true CN114332426A (en) 2022-04-12

Family

ID=81018991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111658810.2A Withdrawn CN114332426A (en) 2021-12-30 2021-12-30 Data display method, device, equipment and medium in augmented reality scene

Country Status (1)

Country Link
CN (1) CN114332426A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20220412)