CN117369633A - AR-based information interaction method and system

AR-based information interaction method and system

Info

Publication number
CN117369633A
Authority
CN
China
Prior art keywords
user
target image
scene
image application
information
Prior art date
Legal status
Pending
Application number
CN202311281115.8A
Other languages
Chinese (zh)
Inventor
秦科
吴忠凯
张继海
杨成功
Current Assignee
Shanghai Iridium Technology Co ltd
Original Assignee
Shanghai Iridium Technology Co ltd
Application filed by Shanghai Iridium Technology Co ltd
Priority to CN202311281115.8A
Publication of CN117369633A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/765 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Architecture (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an AR-based information interaction method and system, relating to the technical field of data processing. In the invention, a merging operation is performed on a scene image corresponding to a scene to be analyzed and a user image corresponding to a target image application user, so as to form a user scene composite image. Under the condition that a first user scene composite image and a second user scene composite image have the same scene image, a first target image application user corresponding to the first user scene composite image and a second target image application user corresponding to the second user scene composite image are associated. Under the condition that the first target image application user and the second target image application user have information interaction requirements, a target information interaction operation is performed based on the first user scene composite image and the second user scene composite image, so as to provide corresponding information interaction services. On this basis, the richness of information interaction can be improved.

Description

AR-based information interaction method and system
Technical Field
The invention relates to the technical field of data processing, in particular to an AR-based information interaction method and an AR-based information interaction system.
Background
Augmented Reality (AR) technology fuses virtual information with the real world. It draws on technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, and applies computer-generated virtual information such as text, images, three-dimensional models, music, and video to the real world after simulation; the two kinds of information complement each other, thereby augmenting the real world. In some applications, for example, a user image and a scene image need to be synthesized to form a composite image that can then be used for user information interaction. In the related art, however, the richness of such information interaction is not high.
Disclosure of Invention
In view of the above, the present invention aims to provide an information interaction method and system based on AR, so as to improve the richness of information interaction.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical scheme:
an AR-based information interaction method, comprising:
performing a merging operation on a scene image corresponding to a scene to be analyzed and a user image corresponding to a target image application user, so as to form a corresponding user scene composite image;
Under the condition that the first user scene composite image and the second user scene composite image have the same scene image, carrying out association processing on a first target image application user corresponding to the first user scene composite image and a second target image application user corresponding to the second user scene composite image so as to form a user association relationship between the target image application users;
and under the condition that the first target image application user and the second target image application user have information interaction requirements, performing target information interaction operation based on the first user scene composite image and the second user scene composite image so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
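For orientation only, the three claimed steps can be sketched in Python as follows; every identifier (Composite, associated, interact, and so on) is hypothetical and not part of the claims.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Composite:
    """A user scene composite image plus the identities used in steps 2 and 3."""
    scene_id: str   # identifies the scene image used as the background
    user_id: str    # the target image application user
    image: Any      # the merged user scene composite image (e.g. a pixel array)

def associated(c1: Composite, c2: Composite) -> bool:
    # Step 2: associate the two users when their composites share the same scene image.
    return c1.scene_id == c2.scene_id

def interact(c1: Composite, c2: Composite, wants_1: bool, wants_2: bool) -> None:
    # Step 3: perform the target information interaction operation based on both
    # composite images once an information interaction requirement exists.
    if associated(c1, c2) and (wants_1 or wants_2):
        print(f"exchange composite-based information between {c1.user_id} and {c2.user_id}")
```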
In some preferred embodiments, in the above AR-based information interaction method, the step of performing, in a case where the first target image application user and the second target image application user have information interaction requirements, a target information interaction operation based on the first user scene composite image and the second user scene composite image to provide corresponding information interaction services for the first target image application user and the second target image application user includes:
Determining whether an information interaction requirement exists between the first target image application user and the second target image application user;
and under the condition that the first target image application user and the second target image application user have information interaction requirements, performing target information interaction operation based on the first user scene composite image and the second user scene composite image so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
In some preferred embodiments, in the above AR-based information interaction method, the step of determining whether there is an information interaction requirement between the first target image application user and the second target image application user includes:
judging whether first information interaction request information sent by first user terminal equipment corresponding to the first target image application user is received or not, and determining that the first target image application user and the second target image application user have information interaction requirements under the condition that the first information interaction request information sent by the first user terminal equipment corresponding to the first target image application user is received, wherein the first information interaction request information is used for indicating that information interaction is required to be carried out with second user terminal equipment corresponding to the second target image application user;
Judging whether second information interaction request information sent by second user terminal equipment corresponding to the second target image application user is received or not, and determining that the first target image application user and the second target image application user have information interaction requirements under the condition that the second information interaction request information sent by the second user terminal equipment corresponding to the second target image application user is received, wherein the second information interaction request information is used for indicating that information interaction is required with first user terminal equipment corresponding to the first target image application user.
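In other words, a request from either terminal establishes the requirement. A minimal sketch, assuming pending requests are kept in a map from sender terminal to requested peer (all names hypothetical):

```python
def has_interaction_requirement(pending_requests: dict, first_terminal: str,
                                second_terminal: str) -> bool:
    """True when either terminal has sent interaction request information
    naming the other terminal; either direction suffices on its own."""
    return (pending_requests.get(first_terminal) == second_terminal
            or pending_requests.get(second_terminal) == first_terminal)

# Usage: the first user's request alone is enough.
assert has_interaction_requirement({"terminal_A": "terminal_B"}, "terminal_A", "terminal_B")
```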
In some preferred embodiments, in the above AR-based information interaction method, the step of performing, in a case where the first target image application user and the second target image application user have information interaction requirements, a target information interaction operation based on the first user scene composite image and the second user scene composite image to provide corresponding information interaction services for the first target image application user and the second target image application user includes:
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating first interaction information corresponding to the first target image application user and the first user scene composite image to form first integrated data;
Under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating second interaction information corresponding to the second target image application user and the second user scene composite image to form second integrated data;
and sending the first integrated data to the second user terminal equipment, and sending the second integrated data to the first user terminal equipment so as to realize user information interaction.
In some preferred embodiments, in the above AR-based information interaction method, the step of integrating, in a case where the first target image application user and the second target image application user have information interaction requirements, first interaction information corresponding to the first target image application user and the first user scene composite image to form first integrated data includes:
under the condition that the first target image application user and the second target image application user have information interaction requirements, analyzing first interaction information corresponding to the first target image application user and the first user scene composite image to determine a first target image area matched with the first interaction information in the first user scene composite image;
And merging the first interaction information into the first target image area in the first user scene composite image to form corresponding first integration data.
In some preferred embodiments, in the above AR-based information interaction method, the step of performing an analysis operation on the first interaction information corresponding to the first target image application user and the first user scene composite image to determine a first target image area matching the first interaction information in the first user scene composite image when the first target image application user and the second target image application user have information interaction requirements includes:
under the condition that the first target image application user and the second target image application user have information interaction requirements, determining the information area size of the first interaction information based on the first interaction information corresponding to the first target image application user;
determining a plurality of candidate image areas in the first user scene composite image based on the information area size, each candidate image area having the information area size;
Performing key information mining operation on each candidate image area to form image key information characteristic representations corresponding to each candidate image area;
performing key information mining operation on the first interactive information to form interactive key information characteristic representation corresponding to the first interactive information;
respectively performing splicing operation on the image key information characteristic representation corresponding to each candidate image region and the interaction key information characteristic representation to form a corresponding splicing characteristic representation;
performing matching analysis operation on each spliced characteristic representation by utilizing a pre-trained matching analysis neural network so as to output region matching parameters of candidate image regions corresponding to each spliced characteristic representation, wherein the region matching parameters are used for reflecting the matching degree between the corresponding candidate image regions and the first interaction information;
and determining a candidate image area corresponding to the area matching parameter with the maximum value as a first target image area matched with the first interaction information.
In some preferred embodiments, in the above AR-based information interaction method, the step of performing a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user to form a corresponding user scene composite image includes:
Extracting a scene feedback data cluster to be analyzed corresponding to a scene to be analyzed, wherein the scene feedback data to be analyzed in the scene feedback data cluster to be analyzed comprises feedback action types of a scene image to be analyzed of an image application user, and the scene image to be analyzed belongs to a scene image corresponding to the scene to be analyzed;
extracting a user feedback state ordered set corresponding to the scene to be analyzed, wherein the user feedback state ordered set comprises a plurality of user feedback states ordered according to user feedback state goodness, and is formed by processing the feedback action types in the action type change path corresponding to the scene to be analyzed;
determining a reference feedback action type corresponding to the user feedback state in the user feedback state ordered set;
analyzing an image application user associated with a feedback action type and the reference feedback action type based on the scene feedback data cluster to be analyzed, and determining a candidate image application user corresponding to the user feedback state according to the image application user associated with the feedback action type and the reference feedback action type;
Determining a first user feedback state in the ordered set of user feedback states, and analyzing the number of image application users corresponding to each piece of preset user characterization information in a plurality of candidate image application users corresponding to the first user feedback state; determining the identification user characterization information corresponding to the first user feedback state according to the number of image application users corresponding to the preset user characterization information; and determining candidate image application users with the identification user characterization information from candidate image application users corresponding to the second user feedback state, and marking the candidate image application users as target image application users, wherein the user feedback state quality corresponding to the second user feedback state is worse than the user feedback state quality corresponding to the first user feedback state;
and combining the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user so as to form a corresponding user scene composite image.
The embodiment of the invention also provides an AR-based information interaction system, which comprises:
the image merging module is used for performing a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user, so as to form a corresponding user scene composite image;
The user association module is used for carrying out association processing on a first target image application user corresponding to the first user scene synthetic image and a second target image application user corresponding to the second user scene synthetic image under the condition that the first user scene synthetic image and the second user scene synthetic image have the same scene image so as to form a user association relationship between the target image application users;
and the information interaction module is used for carrying out target information interaction operation based on the first user scene composite image and the second user scene composite image under the condition that the first target image application user and the second target image application user have information interaction requirements so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
In some preferred embodiments, in the above AR-based information interaction system, the information interaction module is specifically configured to:
determining whether an information interaction requirement exists between the first target image application user and the second target image application user;
and under the condition that the first target image application user and the second target image application user have information interaction requirements, performing target information interaction operation based on the first user scene composite image and the second user scene composite image so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
In some preferred embodiments, in the above AR-based information interaction system, in a case where the first target image application user and the second target image application user have information interaction requirements, performing a target information interaction operation based on the first user scene composite image and the second user scene composite image to provide corresponding information interaction services for the first target image application user and the second target image application user, including:
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating first interaction information corresponding to the first target image application user and the first user scene composite image to form first integrated data;
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating second interaction information corresponding to the second target image application user and the second user scene composite image to form second integrated data;
and sending the first integrated data to the second user terminal equipment, and sending the second integrated data to the first user terminal equipment so as to realize user information interaction.
The AR-based information interaction method and system provided by the embodiment of the invention can first perform a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user to form a user scene composite image; under the condition that the first user scene composite image and the second user scene composite image have the same scene image, carry out association processing on a first target image application user corresponding to the first user scene composite image and a second target image application user corresponding to the second user scene composite image; and, under the condition that the first target image application user and the second target image application user have information interaction requirements, perform a target information interaction operation based on the first user scene composite image and the second user scene composite image so as to provide corresponding information interaction services. On this basis, the information interaction is performed based on the user scene composite images of both sides, that is, the target information interaction operation is performed based on the first user scene composite image and the second user scene composite image, so the richness of the information interaction can be improved, thereby remedying the defects of the prior art.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of an AR-based information interaction platform according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps included in the AR-based information interaction method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of each module included in the AR-based information interaction system according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in FIG. 1, the embodiment of the invention provides an AR-based information interaction platform, which may include a memory and a processor.
In detail, the memory and the processor are electrically connected directly or indirectly to realize transmission or interaction of data. For example, electrical connection may be made to each other via one or more communication buses or signal lines. The memory may store at least one software functional module (computer program) that may exist in the form of software or firmware. The processor may be configured to execute an executable computer program stored in the memory, thereby implementing an AR-based information interaction method provided by an embodiment of the present invention (as described below).
Alternatively, in some embodiments, the memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
Alternatively, in some embodiments, the processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Alternatively, in some embodiments, the AR-based information interaction platform may be a server with data processing capabilities.
With reference to fig. 2, the embodiment of the invention further provides an AR-based information interaction method, which can be applied to the above AR-based information interaction platform. The method steps defined by the flow of the AR-based information interaction method can be implemented by the AR-based information interaction platform.
The specific flow shown in fig. 2 will be described in detail.
Step S100, performing a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user to form a corresponding user scene composite image.
In the embodiment of the invention, the AR-based information interaction platform can perform a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user, so as to form the corresponding user scene composite image.
Step S200, in the case that the first user scene composite image and the second user scene composite image have the same scene image, performing association processing on a first target image application user corresponding to the first user scene composite image and a second target image application user corresponding to the second user scene composite image, so as to form a user association relationship between the target image application users.
In the embodiment of the invention, the AR-based information interaction platform can perform association processing on a first target image application user corresponding to the first user scene composite image and a second target image application user corresponding to the second user scene composite image under the condition that the first user scene composite image and the second user scene composite image have the same scene image, so as to form a user association relationship between the target image application users.
And step S300, performing target information interaction operation based on the first user scene composite image and the second user scene composite image under the condition that the first target image application user and the second target image application user have information interaction requirements so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
In the embodiment of the present invention, the AR-based information interaction platform may, under the condition that the first target image application user and the second target image application user have information interaction requirements, perform the target information interaction operation based on the first user scene composite image and the second user scene composite image, so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
Based on the above, the information interaction is performed based on the user scene composite images of both sides, that is, the target information interaction operation is performed based on the first user scene composite image and the second user scene composite image, so the richness of the information interaction can be improved, thereby remedying the defects of the prior art.
Optionally, in some embodiments, the step of performing the merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user to form the corresponding user scene composite image may further include the following content, such as step S110, step S120, step S130, step S140, step S150, and step S160.
Step S110, extracting a feedback data cluster of the scene to be analyzed corresponding to the scene to be analyzed.
In the embodiment of the invention, the AR-based information interaction platform can extract the feedback data cluster of the scene to be analyzed corresponding to the scene to be analyzed. The scene feedback data to be analyzed in the scene feedback data cluster to be analyzed comprises feedback action types of the scene images to be analyzed of the image application user, and the scene images to be analyzed belong to the scene images corresponding to the scene to be analyzed. The feedback action type may refer to a processing action on the scene image to be analyzed, for example, the feedback action may be sequentially divided into receiving the scene image to be analyzed, viewing the scene image to be analyzed, editing the scene image to be analyzed, sending the scene image to be analyzed, and the like according to a sequence from a small processing depth to a large processing depth.
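Because the feedback action types are ordered by processing depth, they can be modeled as an integer enumeration; the sketch below is illustrative, using the four example actions named above.

```python
from enum import IntEnum

class FeedbackAction(IntEnum):
    """Feedback action types, ordered from small to large processing depth."""
    RECEIVE = 1  # received the scene image to be analyzed
    VIEW = 2     # viewed the scene image to be analyzed
    EDIT = 3     # edited the scene image to be analyzed
    SEND = 4     # sent the scene image to be analyzed

# The integer value doubles as the processing depth, so comparisons are direct:
assert FeedbackAction.VIEW < FeedbackAction.EDIT
```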
Step S120, extracting an ordered set of user feedback states corresponding to the scene to be analyzed.
In the embodiment of the invention, the AR-based information interaction platform can extract the ordered set of user feedback states corresponding to the scene to be analyzed. The user feedback state ordered set comprises a plurality of user feedback states ordered according to user feedback state goodness, and is formed by processing the feedback action types in the action type change path corresponding to the scene to be analyzed. In addition, the feedback action types in the action type change path may be ordered according to the depth of the corresponding processing, for example from small to large.
Step S130, determining the reference feedback action type corresponding to the user feedback state in the ordered set of user feedback states.
In the embodiment of the invention, the AR-based information interaction platform can determine the reference feedback action type corresponding to the user feedback state in the ordered set of user feedback states. The number of reference feedback action types may be greater than or equal to 1; that is, there may be one or more.
Step S140, based on the feedback data cluster of the scene to be analyzed, analyzing an image application user associated with the feedback action type and the reference feedback action type, and determining a candidate image application user corresponding to the user feedback state according to the image application user associated with the feedback action type and the reference feedback action type.
In the embodiment of the invention, the AR-based information interaction platform can analyze the image application users associated with the feedback action types and the reference feedback action types based on the scene feedback data clusters to be analyzed, and determine candidate image application users corresponding to the user feedback states according to the image application users associated with the feedback action types and the reference feedback action types. Wherein, the association may refer to having consistency, and thus, may be determined by one-to-one comparison.
Step S150, determining a first user feedback state in the ordered set of user feedback states, and analyzing the number of image application users corresponding to each piece of preset user characterization information in a plurality of candidate image application users corresponding to the first user feedback state; determining the identification user characterization information corresponding to the first user feedback state according to the number of image application users corresponding to the preset user characterization information; and determining the candidate image application user with the identification user characterization information from the candidate image application users corresponding to the second user feedback state, and marking the candidate image application user as a target image application user.
In the embodiment of the invention, the AR-based information interaction platform can determine a first user feedback state in the ordered set of user feedback states, and analyze the number of image application users corresponding to each piece of preset user characterization information in a plurality of candidate image application users corresponding to the first user feedback state; determining the identification user characterization information corresponding to the first user feedback state according to the number of image application users corresponding to the preset user characterization information; and determining the candidate image application user with the identification user characterization information from the candidate image application users corresponding to the second user feedback state, and marking the candidate image application user as a target image application user. And the user feedback state quality corresponding to the second user feedback state is worse than the user feedback state quality corresponding to the first user feedback state, such as shallower processing progress. In addition, the first user feedback state may be any one of the ordered set of user feedback states, and the first user feedback state may be determined from the ordered set of user feedback states according to actual requirements. The preset user characterization information can be determined according to actual requirements, and can be used for characterizing preference characteristics of image application users for different scenes and the like. In addition, the number of image application users larger than the preset number of users can be selected from the numbers of image application users respectively corresponding to each piece of preset user characterization information to serve as the number of candidate image application users, and the preset user characterization information corresponding to the number of candidate image application users is used as the identification user characterization information.
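A minimal sketch of this selection logic, assuming each candidate image application user carries exactly one piece of preset user characterization information (all identifiers are hypothetical):

```python
from collections import Counter

def select_target_users(first_state_users, second_state_users, preset_user_number):
    """Step S150 as a sketch. Each user is a (user_id, characterization) pair.

    Characterizations shared by more than `preset_user_number` first-state
    candidates become the identification user characterization information;
    second-state candidates carrying any of them are marked as target
    image application users."""
    counts = Counter(characterization for _, characterization in first_state_users)
    identification = {c for c, n in counts.items() if n > preset_user_number}
    return [uid for uid, c in second_state_users if c in identification]
```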
Step S160, performing a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user, so as to form a corresponding user scene composite image.
In the embodiment of the invention, the AR-based information interaction platform can perform merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user so as to form a corresponding user scene composite image. For example, the scene image may be a background portion and the user image may be a foreground portion.
Based on the method, the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user are matched with each other, so the reliability of the information synthesis can be improved, thereby remedying the defects of the prior art (such as arbitrary combination between scene images and user images).
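The patent does not prescribe a particular compositing method; the sketch below shows one conventional choice, alpha-blending the user image as foreground over the scene image as background (the array shapes are assumptions).

```python
import numpy as np

def merge_user_into_scene(scene_rgb: np.ndarray, user_rgba: np.ndarray) -> np.ndarray:
    """Blend a user image (H x W x 4, with alpha channel) over a scene image (H x W x 3)."""
    alpha = user_rgba[..., 3:4].astype(np.float64) / 255.0
    composite = alpha * user_rgba[..., :3] + (1.0 - alpha) * scene_rgb
    return composite.astype(np.uint8)
```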
Optionally, in some embodiments, the step of analyzing, based on the feedback data cluster of the scene to be analyzed, the image application user associated with the feedback action type and the reference feedback action type, and determining, according to the image application user associated with the feedback action type and the reference feedback action type, the candidate image application user corresponding to the user feedback state may further include the following contents:
determining the to-be-analyzed scene feedback data corresponding to each image application user in the to-be-analyzed scene feedback data cluster, so as to form a user scene feedback data cluster corresponding to each image application user respectively;
and under the condition that the feedback action types in the user scene feedback data cluster are associated with the reference feedback action types, marking the image application user so as to mark the candidate image application user corresponding to the user feedback state.
Optionally, in some embodiments, the user feedback state corresponds to a state precedence relationship, such as the foregoing processing depth, and the state precedence relationship has a positive correlation with the user feedback state goodness (for example, the shallower the processing depth, the worse the goodness). On this basis, the step of determining the reference feedback action type corresponding to the user feedback state in the ordered set of user feedback states may further include the following contents:
based on the state precedence relationship, sequentially determining the current user feedback state in the ordered set of user feedback states (that is, each user feedback state serves as the current user feedback state in turn), and determining the current reference feedback action type corresponding to the current user feedback state.
Based on the above, the step of marking the image application user with a marking operation to mark the candidate image application user corresponding to the user feedback state in the case that the feedback action type in the user scene feedback data cluster is associated with the reference feedback action type includes:
under the condition that the feedback action types in the user scene feedback data cluster are associated with the current reference feedback action types, marking the image application user so as to mark the candidate image application user corresponding to the current user feedback state;
and under the condition that the feedback action types in the user scene feedback data cluster are not associated with the current reference feedback action type, returning to the step of sequentially determining the current user feedback state in the ordered set of user feedback states based on the state precedence relationship, so that the current user feedback state is adjusted (that is, updated); the loop ends once the feedback action types in the user scene feedback data cluster are associated with the current reference feedback action type.
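Put procedurally, the platform walks the ordered states and stops at the first one whose reference feedback action types match the user's cluster. A sketch under the assumption that action types are held in sets (names hypothetical):

```python
def state_for_user(ordered_states, user_action_types: set):
    """Walk the ordered set of user feedback states; return the first state whose
    current reference feedback action types are associated with (here: intersect)
    the feedback action types in the user scene feedback data cluster.

    `ordered_states` is a list of (state_name, reference_action_types) pairs,
    already sorted by the state precedence relationship."""
    for state_name, reference_action_types in ordered_states:
        if reference_action_types & user_action_types:
            return state_name  # mark the user as a candidate for this state
    return None  # no current reference feedback action type ever matched
```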
Optionally, in some embodiments, the step of analyzing, based on the feedback data cluster of the scene to be analyzed, the image application user associated with the feedback action type and the reference feedback action type, and determining, according to the image application user associated with the feedback action type and the reference feedback action type, the candidate image application user corresponding to the user feedback state may further include the following contents:
Grouping the scene feedback data clusters to be analyzed based on feedback action types to form action scene feedback data clusters corresponding to each feedback action type, namely, feedback action types corresponding to the scene feedback data to be analyzed in one action scene feedback data cluster are consistent;
determining an action scene feedback data cluster corresponding to the reference feedback action type;
and determining candidate image application users corresponding to the user feedback states according to the action scene feedback data clusters corresponding to the reference feedback action types.
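The grouping step is a plain partition of the feedback records by action type, as the sketch below illustrates (the record layout is an assumption):

```python
from collections import defaultdict

def group_by_action_type(scene_feedback_data):
    """Form one action scene feedback data cluster per feedback action type.

    `scene_feedback_data` is an iterable of (user_id, action_type) records;
    all records inside one returned cluster share the same action type."""
    clusters = defaultdict(list)
    for user_id, action_type in scene_feedback_data:
        clusters[action_type].append(user_id)
    return dict(clusters)
```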
Optionally, in some embodiments, the user feedback state corresponds to a state precedence relationship, where the state precedence relationship has a positive correlation with the user feedback state goodness. On this basis, the step of determining, according to the action scene feedback data cluster corresponding to the reference feedback action type, the candidate image application user corresponding to the user feedback state may further include the following content:
determining a current action scene feedback data cluster corresponding to a reference feedback action type corresponding to a current user feedback state, and merging image application users in the current action scene feedback data cluster to form a current image application user cluster;
determining a prior action scene feedback data cluster corresponding to the reference feedback action type corresponding to a prior user feedback state, and performing a merging operation on the image application users in the prior action scene feedback data cluster to form a prior image application user cluster, wherein the prior user feedback state is a user feedback state that precedes the current user feedback state in the state precedence relationship of the ordered set of user feedback states (for example, one with a shallower processing depth);
and eliminating the prior image application user cluster from the current image application user cluster to form candidate image application users corresponding to the current user feedback state.
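The elimination step reduces to a set difference, as in this sketch:

```python
def candidates_for_current_state(current_cluster_users, prior_cluster_users):
    """Remove users already counted in the prior (shallower) state's cluster,
    leaving the candidate image application users for the current state."""
    return set(current_cluster_users) - set(prior_cluster_users)
```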
Optionally, in some embodiments, the step of extracting the ordered set of user feedback states corresponding to the scene to be analyzed may further include the following contents:
determining a target action type change path, wherein the target action type change path comprises a plurality of feedback action types ordered by state depth (the state depth being the processing depth described above), and the target action type change path is the action type change path corresponding to the scene to be analyzed;
determining an action continuation probability corresponding to each feedback action type, wherein the action continuation probability is used for reflecting, for the image application users who produced the feedback action type, the probability that the next feedback action type appears after the processing action corresponding to the feedback action type;
and processing the target action type change path according to the action continuation probability corresponding to each feedback action type to form a user feedback state ordered set corresponding to the scene to be analyzed, wherein the reference feedback action type corresponding to the user feedback state is distributed to the feedback action type corresponding to the user feedback state.
Optionally, in some embodiments, the step of determining the action continuation probability corresponding to each feedback action type may further include the following:
determining a typical scene feedback data cluster to be analyzed corresponding to the target action type change path, wherein the scene feedback data to be analyzed in the typical scene feedback data cluster to be analyzed comprises feedback action types of typical application users on typical scene images, and the typical scene images belong to scene images corresponding to typical scenes corresponding to the target action type change path;
Determining the number of typical application users of the same feedback action type in the feedback data cluster of the typical scene to be analyzed so as to output the number of typical users corresponding to the feedback action type;
and determining the action continuation probability corresponding to the feedback action type according to the typical user number corresponding to the feedback action type.
Optionally, in some embodiments, the step of determining the action continuation probability corresponding to the feedback action type according to the typical user number corresponding to the feedback action type may further include the following contents:
determining the number of current typical users corresponding to the current feedback action type;
determining the number of continuing typical users corresponding to the continuing feedback action type, wherein the continuing feedback action type may be the next feedback action type with a deeper processing depth;
and determining the ratio between the number of continuing typical users and the number of current typical users to obtain the action continuation probability corresponding to the current feedback action type.
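So the action continuation probability is simply the share of users who went one step deeper. A sketch with illustrative numbers:

```python
def action_continuation_probability(current_users: int, continuing_users: int) -> float:
    """Ratio of continuing typical users to current typical users."""
    return continuing_users / current_users if current_users else 0.0

# Illustrative: 200 users viewed the image and 50 of them went on to edit it,
# so the continuation probability of the VIEW action is 0.25.
assert action_continuation_probability(200, 50) == 0.25
```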
Optionally, in some embodiments, the step of processing the target action type change path according to the action continuation probability corresponding to each feedback action type to form an ordered set of user feedback states corresponding to the scene to be analyzed may further include the following contents:
determining a probability difference value between the action continuation probabilities of adjacent feedback action types in the target action type change path according to the action continuation probability corresponding to each feedback action type;
and allocating adjacent feedback action types whose probability difference value is smaller than or equal to the reference probability difference value to the same user feedback state. For example, the action continuation probability of a first feedback action type is acquired as a first action continuation probability, and the action continuation probability of a second feedback action type is acquired as a second action continuation probability; a difference operation is performed on the two, and the result is taken as the probability difference value; the probability difference value is then compared with the reference probability difference value, and the first feedback action type and the second feedback action type are divided into the same user feedback state when the probability difference value is smaller than or equal to the reference probability difference value, and into different user feedback states otherwise. Here, the first feedback action type and the second feedback action type are adjacent feedback action types in the target action type change path.
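The sketch below implements this grouping rule; the reference probability difference value is left as a parameter, since the patent treats it as configurable.

```python
def group_into_feedback_states(path, continuation_prob, reference_diff):
    """Split a target action type change path into user feedback states.

    Adjacent action types in `path` whose continuation probabilities differ by
    at most `reference_diff` share a state; a larger jump opens a new state."""
    states, current = [], [path[0]]
    for prev_action, next_action in zip(path, path[1:]):
        if abs(continuation_prob[prev_action] - continuation_prob[next_action]) <= reference_diff:
            current.append(next_action)
        else:
            states.append(current)
            current = [next_action]
    states.append(current)
    return states
```

For instance, `group_into_feedback_states(["receive", "view", "edit"], {"receive": 0.9, "view": 0.85, "edit": 0.3}, 0.1)` yields `[["receive", "view"], ["edit"]]` (illustrative values).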
Optionally, in some embodiments, the step of performing, in a case where the first target image application user and the second target image application user have information interaction requirements, a target information interaction operation based on the first user scene composite image and the second user scene composite image to provide corresponding information interaction services for the first target image application user and the second target image application user may further include the following contents:
determining whether an information interaction requirement exists between the first target image application user and the second target image application user;
and under the condition that the first target image application user and the second target image application user have information interaction requirements, performing target information interaction operation based on the first user scene composite image and the second user scene composite image so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
Optionally, in some embodiments, the step of determining whether there is a need for information interaction between the first target image application user and the second target image application user may further include the following:
Judging whether first information interaction request information sent by first user terminal equipment corresponding to the first target image application user is received or not, and determining that the first target image application user and the second target image application user have information interaction requirements under the condition that the first information interaction request information sent by the first user terminal equipment corresponding to the first target image application user is received, wherein the first information interaction request information is used for indicating that information interaction is required to be carried out with second user terminal equipment corresponding to the second target image application user;
judging whether second information interaction request information sent by second user terminal equipment corresponding to the second target image application user is received or not, and determining that the first target image application user and the second target image application user have information interaction requirements under the condition that the second information interaction request information sent by the second user terminal equipment corresponding to the second target image application user is received, wherein the second information interaction request information is used for indicating that information interaction is required with first user terminal equipment corresponding to the first target image application user.
Optionally, in some embodiments, the step of performing, in a case where the first target image application user and the second target image application user have information interaction requirements, a target information interaction operation based on the first user scene composite image and the second user scene composite image to provide corresponding information interaction services for the first target image application user and the second target image application user may further include the following contents:
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating first interaction information corresponding to the first target image application user and the first user scene composite image to form first integrated data;
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating second interaction information corresponding to the second target image application user and the second user scene composite image to form second integrated data;
and sending the first integrated data to the second user terminal equipment, and sending the second integrated data to the first user terminal equipment so as to realize user information interaction.
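The cross-sending pattern, where each side receives the other side's integrated data, can be sketched as follows (the `send` transport callback is hypothetical):

```python
def exchange_integrated_data(first_integrated, second_integrated,
                             first_terminal, second_terminal, send):
    """Cross-send: the first user's integrated data goes to the second user's
    terminal and vice versa, realizing the user information interaction."""
    send(second_terminal, first_integrated)
    send(first_terminal, second_integrated)
```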
Optionally, in some embodiments, when the first target image application user and the second target image application user have information interaction requirements, the step of integrating the first interaction information corresponding to the first target image application user and the first user scene composite image to form first integrated data may further include the following contents:
under the condition that the first target image application user and the second target image application user have information interaction requirements, analyzing first interaction information corresponding to the first target image application user and the first user scene composite image to determine a first target image area matched with the first interaction information in the first user scene composite image;
and merging the first interaction information into the first target image area in the first user scene composite image to form corresponding first integration data.
Optionally, in some embodiments, the step of performing an analysis operation on the first interaction information corresponding to the first target image application user and the first user scene composite image to determine a first target image area matching the first interaction information in the first user scene composite image when the first target image application user and the second target image application user have information interaction requirements may further include the following contents:
Under the condition that the first target image application user and the second target image application user have information interaction requirements, determining the information area size of the first interaction information based on the first interaction information corresponding to the first target image application user;
determining a plurality of candidate image areas in the first user scene composite image based on the information area size, each candidate image area having the information area size;
performing key information mining operation on each candidate image region, for example, mapping to a feature space to form an image key information feature representation corresponding to each candidate image region;
performing key information mining operation on the first interactive information, for example, mapping to a feature space to form interactive key information feature representation corresponding to the first interactive information;
respectively performing splicing operation on the image key information characteristic representation corresponding to each candidate image region and the interaction key information characteristic representation to form a corresponding splicing characteristic representation;
performing a matching analysis operation on each spliced characteristic representation by utilizing a pre-trained matching analysis neural network, so as to output the region matching parameter of the candidate image region corresponding to each spliced characteristic representation, wherein the region matching parameter is used for reflecting the matching degree between the corresponding candidate image region and the first interaction information; during training of the matching analysis neural network, each sample can carry an actual region matching parameter, so that an error can be determined and the network can be trained based on the determined error, thereby realizing the matching analysis function;
And determining a candidate image area corresponding to the area matching parameter with the maximum value as a first target image area matched with the first interaction information.
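As a concrete illustration, the matching analysis network can be realized as a small scorer over the spliced feature representations; the PyTorch sketch below is an assumption, since the patent fixes neither the architecture nor the feature dimensions.

```python
import torch
import torch.nn as nn

class MatchAnalysisNet(nn.Module):
    """Scores how well each candidate image region matches the interaction
    information; one region matching parameter per spliced representation."""

    def __init__(self, region_dim: int = 128, info_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(region_dim + info_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, region_feats: torch.Tensor, info_feat: torch.Tensor) -> torch.Tensor:
        # Splice each region's key information feature with the interaction
        # key information feature, then score every spliced representation.
        spliced = torch.cat([region_feats, info_feat.expand(region_feats.size(0), -1)], dim=1)
        return self.scorer(spliced).squeeze(1)

# The candidate region with the maximum region matching parameter becomes the
# first target image area (random features stand in for the mined ones):
net = MatchAnalysisNet()
scores = net(torch.randn(10, 128), torch.randn(1, 128))
first_target_region = int(scores.argmax())
```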
With reference to fig. 3, an embodiment of the present invention further provides an AR-based information interaction system, which may be applied to the above AR-based information interaction platform. The AR-based information interaction system comprises:
the image merging module is used for performing a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user to form a corresponding user scene composite image;
the user association module is used for carrying out association processing on a first target image application user corresponding to a first user scene composite image and a second target image application user corresponding to a second user scene composite image under the condition that the first user scene composite image and the second user scene composite image have the same scene image, so as to form a user association relationship between the target image application users;
and the information interaction module is used for carrying out target information interaction operation based on the first user scene composite image and the second user scene composite image under the condition that the first target image application user and the second target image application user have information interaction requirements so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
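To make the division of labor concrete, here is a minimal Python skeleton of the three modules. The class layout, the scene-keyed bookkeeping, and all names (ARInteractionSystem, merge, interact) are assumptions made for illustration, not a structure mandated by the patent:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional, Tuple

@dataclass
class ARInteractionSystem:
    """Illustrative skeleton of the image merging, user association, and
    information interaction modules described above."""
    # scene identifier -> (user id, that user's composite image)
    composites_by_scene: Dict[str, Tuple[str, Any]] = field(default_factory=dict)
    associations: Dict[str, str] = field(default_factory=dict)

    # --- image merging module ---
    def merge(self, scene_id: str, user_id: str,
              scene_image: Any, user_image: Any) -> Any:
        """Form a user scene composite image (a stand-in dict here) and
        update associations for users sharing the same scene image."""
        composite = {"scene": scene_image, "user": user_image}
        self._associate(scene_id, user_id, composite)
        return composite

    # --- user association module ---
    def _associate(self, scene_id: str, user_id: str, composite: Any) -> None:
        if scene_id in self.composites_by_scene:
            other_id, _ = self.composites_by_scene[scene_id]
            if other_id != user_id:
                self.associations[user_id] = other_id
                self.associations[other_id] = user_id
        self.composites_by_scene[scene_id] = (user_id, composite)

    # --- information interaction module ---
    def interact(self, user_id: str, message: str) -> Optional[Tuple[str, str]]:
        """Route a message to the user associated through a shared scene;
        returns (peer id, message), or None when no association exists."""
        peer = self.associations.get(user_id)
        return None if peer is None else (peer, message)
```

A platform built this way would call merge() as each user's composite image is formed, after which interact() can route information between the two users associated through the shared scene.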
Optionally, in some embodiments, the information interaction module is specifically configured to:
determining whether an information interaction requirement exists between the first target image application user and the second target image application user;
and under the condition that the first target image application user and the second target image application user have information interaction requirements, performing target information interaction operation based on the first user scene composite image and the second user scene composite image so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
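As a minimal sketch of that requirement check, one may track received interaction requests as (requesting terminal, addressee terminal) pairs; the data shape and names here are illustrative stand-ins for the request messages detailed in claim 3 below:

```python
from typing import Set, Tuple

def has_interaction_requirement(received_requests: Set[Tuple[str, str]],
                                first_terminal: str,
                                second_terminal: str) -> bool:
    """An information interaction requirement is deemed to exist when either
    terminal has sent request information naming the other terminal."""
    return ((first_terminal, second_terminal) in received_requests
            or (second_terminal, first_terminal) in received_requests)
```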
Optionally, in some embodiments, in a case that the first target image application user and the second target image application user have information interaction requirements, performing a target information interaction operation based on the first user scene composite image and the second user scene composite image to provide corresponding information interaction services for the first target image application user and the second target image application user, including:
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating first interaction information corresponding to the first target image application user and the first user scene composite image to form first integrated data;
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating second interaction information corresponding to the second target image application user and the second user scene composite image to form second integrated data;
and sending the first integrated data to the second user terminal equipment, and sending the second integrated data to the first user terminal equipment so as to realize user information interaction.
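The complete interaction step can then be sketched as follows, with integrate standing in for the region matching and merging described above and send standing in for the network transport to the user terminal equipment; both callbacks and all names are assumptions:

```python
from typing import Any, Callable

def run_interaction(first_info: Any, first_composite: Any,
                    second_info: Any, second_composite: Any,
                    integrate: Callable[[Any, Any], Any],
                    send: Callable[[str, Any], None]) -> None:
    """Integrate each user's interaction information into that user's own
    scene composite image, then cross-deliver: the first user's integrated
    data goes to the second user terminal and vice versa."""
    first_integrated = integrate(first_info, first_composite)
    second_integrated = integrate(second_info, second_composite)
    send("second_user_terminal", first_integrated)
    send("first_user_terminal", second_integrated)
```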
In summary, according to the AR-based information interaction method and system provided by the present invention, the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user can be merged to form a user scene composite image; in the case that a first user scene composite image and a second user scene composite image have the same scene image, association processing is performed on the first target image application user corresponding to the first user scene composite image and the second target image application user corresponding to the second user scene composite image; and in the case that the first target image application user and the second target image application user have information interaction requirements, a target information interaction operation is performed based on the first user scene composite image and the second user scene composite image, so as to provide corresponding information interaction services. Since information interaction is performed based on the user scene composite images of both parties, that is, the target information interaction operation is based on the first user scene composite image and the second user scene composite image, the richness of the information interaction can be improved, thereby remedying the defects in the prior art.
The above description covers only preferred embodiments of the present invention and is not intended to limit the present invention; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. An AR-based information interaction method, comprising:
performing a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user to form a corresponding user scene composite image;
under the condition that the first user scene composite image and the second user scene composite image have the same scene image, carrying out association processing on a first target image application user corresponding to the first user scene composite image and a second target image application user corresponding to the second user scene composite image so as to form a user association relationship between the target image application users;
and under the condition that the first target image application user and the second target image application user have information interaction requirements, performing target information interaction operation based on the first user scene composite image and the second user scene composite image so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
2. The AR-based information interaction method of claim 1, wherein the step of performing a target information interaction operation based on the first user scene composite image and the second user scene composite image to provide the first target image application user and the second target image application user with corresponding information interaction services in case that the first target image application user and the second target image application user have information interaction requirements, comprises:
determining whether an information interaction requirement exists between the first target image application user and the second target image application user;
and under the condition that the first target image application user and the second target image application user have information interaction requirements, performing target information interaction operation based on the first user scene composite image and the second user scene composite image so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
3. The AR-based information interaction method of claim 2, wherein the determining whether there is an information interaction requirement between the first target image application user and the second target image application user comprises:
judging whether first information interaction request information sent by first user terminal equipment corresponding to the first target image application user is received or not, and determining that the first target image application user and the second target image application user have information interaction requirements under the condition that the first information interaction request information sent by the first user terminal equipment corresponding to the first target image application user is received, wherein the first information interaction request information is used for indicating that information interaction is required to be carried out with second user terminal equipment corresponding to the second target image application user;
judging whether second information interaction request information sent by second user terminal equipment corresponding to the second target image application user is received or not, and determining that the first target image application user and the second target image application user have information interaction requirements under the condition that the second information interaction request information sent by the second user terminal equipment corresponding to the second target image application user is received, wherein the second information interaction request information is used for indicating that information interaction is required with first user terminal equipment corresponding to the first target image application user.
4. The AR-based information interaction method of claim 2, wherein the step of performing a target information interaction operation based on the first user scene composite image and the second user scene composite image to provide the first target image application user and the second target image application user with corresponding information interaction services in case that the first target image application user and the second target image application user have information interaction requirements, comprises:
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating first interaction information corresponding to the first target image application user and the first user scene composite image to form first integrated data;
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating second interaction information corresponding to the second target image application user and the second user scene composite image to form second integrated data;
and sending the first integrated data to the second user terminal equipment, and sending the second integrated data to the first user terminal equipment so as to realize user information interaction.
5. The AR-based information interaction method according to claim 4, wherein the step of integrating the first interaction information corresponding to the first target image application user and the first user scene composite image to form first integrated data in the case that the first target image application user and the second target image application user have information interaction requirements, comprises:
under the condition that the first target image application user and the second target image application user have information interaction requirements, analyzing first interaction information corresponding to the first target image application user and the first user scene composite image to determine a first target image area matched with the first interaction information in the first user scene composite image;
and merging the first interaction information into the first target image area in the first user scene composite image to form the corresponding first integrated data.
6. The AR-based information interaction method according to claim 5, wherein the step of performing an analysis operation on the first interaction information corresponding to the first target image application user and the first user scene composite image to determine a first target image area matching the first interaction information in the first user scene composite image in case that the first target image application user and the second target image application user have information interaction requirements, comprises:
under the condition that the first target image application user and the second target image application user have information interaction requirements, determining the information area size of the first interaction information based on the first interaction information corresponding to the first target image application user;
determining a plurality of candidate image areas in the first user scene composite image based on the information area size, each candidate image area having the information area size;
performing a key information mining operation on each candidate image area to form an image key information feature representation corresponding to each candidate image area;
performing a key information mining operation on the first interaction information to form an interaction key information feature representation corresponding to the first interaction information;
performing a splicing (concatenation) operation on the image key information feature representation corresponding to each candidate image area and the interaction key information feature representation, respectively, to form a corresponding spliced feature representation;
performing a matching analysis operation on each spliced feature representation by using a pre-trained matching analysis neural network, so as to output a region matching parameter for the candidate image area corresponding to each spliced feature representation, wherein the region matching parameter is used for reflecting the degree of matching between the corresponding candidate image area and the first interaction information;
and determining the candidate image area whose region matching parameter has the maximum value as the first target image area matched with the first interaction information.
7. The AR-based information interaction method according to any one of claims 1 to 6, wherein the step of performing a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user to form a corresponding user scene composite image includes:
extracting a scene feedback data cluster to be analyzed corresponding to the scene to be analyzed, wherein each piece of scene feedback data to be analyzed in the scene feedback data cluster to be analyzed comprises a feedback action type of an image application user for a scene image to be analyzed, and the scene image to be analyzed belongs to the scene image corresponding to the scene to be analyzed;
extracting a user feedback state ordered set corresponding to the scene to be analyzed, wherein the user feedback state ordered set comprises a plurality of user feedback states ordered according to user feedback state quality, and the user feedback state ordered set is formed by processing based on the feedback action types corresponding to the action type change path of the scene to be analyzed;
determining a reference feedback action type corresponding to the user feedback state in the user feedback state ordered set;
analyzing, based on the scene feedback data cluster to be analyzed, the image application users whose feedback action types are associated with the reference feedback action type, and determining candidate image application users corresponding to the user feedback state according to the image application users whose feedback action types are associated with the reference feedback action type;
determining a first user feedback state in the ordered set of user feedback states, and analyzing the number of image application users corresponding to each piece of preset user characterization information in a plurality of candidate image application users corresponding to the first user feedback state; determining the identification user characterization information corresponding to the first user feedback state according to the number of image application users corresponding to the preset user characterization information; and determining candidate image application users with the identification user characterization information from candidate image application users corresponding to the second user feedback state, and marking the candidate image application users as target image application users, wherein the user feedback state quality corresponding to the second user feedback state is worse than the user feedback state quality corresponding to the first user feedback state;
and performing a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user, so as to form the corresponding user scene composite image.
8. An AR-based information interaction system, comprising:
the image merging module is used for performing a merging operation on the scene image corresponding to the scene to be analyzed and the user image corresponding to the target image application user to form a corresponding user scene composite image;
the user association module is used for carrying out association processing on a first target image application user corresponding to a first user scene composite image and a second target image application user corresponding to a second user scene composite image under the condition that the first user scene composite image and the second user scene composite image have the same scene image, so as to form a user association relationship between the target image application users;
and the information interaction module is used for carrying out target information interaction operation based on the first user scene composite image and the second user scene composite image under the condition that the first target image application user and the second target image application user have information interaction requirements so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
9. The AR-based information interaction system of claim 8, wherein the information interaction module is specifically configured to:
determining whether an information interaction requirement exists between the first target image application user and the second target image application user;
and under the condition that the first target image application user and the second target image application user have information interaction requirements, performing target information interaction operation based on the first user scene composite image and the second user scene composite image so as to provide corresponding information interaction services for the first target image application user and the second target image application user.
10. The AR-based information interaction system of claim 9, wherein the performing a target information interaction operation based on the first user scene composite image and the second user scene composite image to provide corresponding information interaction services for the first target image application user and the second target image application user in the case where the first target image application user and the second target image application user have information interaction requirements comprises:
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating first interaction information corresponding to the first target image application user and the first user scene composite image to form first integrated data;
under the condition that the first target image application user and the second target image application user have information interaction requirements, integrating second interaction information corresponding to the second target image application user and the second user scene composite image to form second integrated data;
and sending the first integrated data to the second user terminal equipment, and sending the second integrated data to the first user terminal equipment so as to realize user information interaction.
CN202311281115.8A 2023-10-07 2023-10-07 AR-based information interaction method and system Pending CN117369633A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311281115.8A CN117369633A (en) 2023-10-07 2023-10-07 AR-based information interaction method and system

Publications (1)

Publication Number Publication Date
CN117369633A true CN117369633A (en) 2024-01-09

Family

ID=89406905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311281115.8A Pending CN117369633A (en) 2023-10-07 2023-10-07 AR-based information interaction method and system

Country Status (1)

Country Link
CN (1) CN117369633A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination