WO2018095439A1 - Method, apparatus and storage medium for information interaction

Method, apparatus and storage medium for information interaction

Info

Publication number
WO2018095439A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
target object
target
facial
interaction
Prior art date
Application number
PCT/CN2017/115058
Other languages
English (en)
Chinese (zh)
Inventor
陈阳
王宇
麥偉強
陈志南
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2018095439A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • Embodiments of the present invention relate to the field of computers, and in particular, to an information interaction method, apparatus, and storage medium.
  • The social platform is an account-based social system.
  • On such platforms, information interaction between users is performed in a peer-to-peer manner: a user browses the timeline information flow from time to time to discover information of interest, and then interacts on the basis of that information.
  • The information interaction of existing solutions is thus still based on the virtual account of the social platform, which makes the interaction process cumbersome and is not conducive to information interaction.
  • In view of this, the embodiments of the present invention provide a method, an apparatus, and a storage medium for information interaction, so as to at least solve the technical problem that the information interaction process in the related art is complicated.
  • According to one aspect, an information interaction method includes: acquiring facial information of a first target object; acquiring target information of the first target object according to the facial information of the first target object, where the target information is used to indicate a social behavior of the first target object; receiving interaction information sent by a second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object; and publishing the interaction information.
  • According to another aspect, an information interaction apparatus includes one or more processors and one or more memories storing instructions which, when executed by the processors, implement the following program units: a first obtaining unit, configured to acquire facial information of a first target object; a second obtaining unit, configured to acquire target information of the first target object according to the facial information of the first target object, where the target information is used to indicate a social behavior of the first target object; a receiving unit, configured to receive interaction information sent by a second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object; and a publishing unit, configured to publish the interaction information.
  • a terminal is also provided.
  • the terminal is arranged to execute program code for performing the steps in the information interaction method of the embodiment of the present invention.
  • a storage medium is also provided.
  • the storage medium is arranged to store program code for performing the steps in the information interaction method of the embodiment of the present invention.
  • In the embodiments of the present invention, the target information of the first target object is acquired according to the facial information of the first target object, where the target information is used to indicate the social behavior of the first target object.
  • Instead of a virtual account, the interaction entry is based mainly on facial information, which simplifies the process of information interaction and achieves the purpose of information interaction, thereby realizing the technical effect of simplifying the interaction process and solving the technical problem that the information interaction process in the related art is complicated.
  • FIG. 1 is a schematic diagram of a hardware environment of an information interaction method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of an information interaction method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for displaying target information in a preset spatial position of a real scene according to an embodiment of the present invention
  • FIG. 4 is a flowchart of another method for displaying target information in a preset spatial position of a real scene according to facial information of a first target object according to an embodiment of the present invention
  • FIG. 5 is a flowchart of a method for displaying visible information of a first target object within a permission range in a preset spatial position according to an embodiment of the present invention
  • FIG. 6 is a flowchart of another method for displaying visible information of a first target object within a permission scope in a preset spatial position according to an embodiment of the present invention
  • FIG. 7 is a flowchart of a method of transmitting a first request to a server according to an embodiment of the present invention.
  • FIG. 8 is a flowchart of another method for information interaction according to an embodiment of the present invention.
  • FIG. 9 is a flowchart of another method for information interaction according to an embodiment of the present invention.
  • FIG. 10 is a flowchart of a method for information registration according to an embodiment of the present invention.
  • FIG. 11 is a flowchart of a method for displaying and interacting information according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram showing a basic information display according to an embodiment of the present invention.
  • FIG. 13 is a schematic diagram showing another basic information display according to an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of an AR information display according to an embodiment of the present invention.
  • FIG. 15 is a schematic diagram of another AR information display according to an embodiment of the present invention.
  • FIG. 16 is a schematic diagram of an information interaction apparatus according to an embodiment of the present invention.
  • FIG. 17 is a schematic diagram of another information interaction apparatus according to an embodiment of the present invention.
  • FIG. 18 is a structural block diagram of a terminal according to an embodiment of the present invention.
  • an embodiment of an information interaction method is provided.
  • FIG. 1 is a schematic diagram of a hardware environment of an information interaction method according to an embodiment of the present invention.
  • the server 102 is connected to the terminal 104 through a network.
  • the network includes but is not limited to a wide area network, a metropolitan area network, or a local area network.
  • the terminal 104 includes, but is not limited to, a PC, a mobile phone, a tablet, and the like.
  • the information interaction method in the embodiment of the present invention may be executed by the server 102, by the terminal 104, or jointly by the server 102 and the terminal 104.
  • the information interaction method performed by the terminal 104 in the embodiment of the present invention may also be performed by a client installed thereon.
  • the information interaction method may include the following steps:
  • Step S202: acquire the facial information of the first target object.
  • Augmented Reality (AR) is a technology that calculates the position and angle of the camera image in real time and superimposes corresponding images, video, and three-dimensional (3D) models, enabling real-time interaction between virtual content and real-world scenes.
  • Augmented reality applications use AR technology and can be installed and used on AR glasses, mobile communication terminals, and PCs.
  • The first target object is the object with which information is to be exchanged, for example a classmate, friend, colleague, or family member encountered in a meeting scene or a chance-encounter scene.
  • The facial information may be collected by a camera, for example obtained automatically through face recognition by the front camera. It can replace the traditional virtual account for social behavior, so that the entry point of the information interaction is the recognition of facial information.
  • Recognition of the facial information of the first target object is triggered automatically.
  • When logging in to the augmented reality application, the user may log in through palm print information, a user name, or facial information, which is not limited herein.
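  • As a rough illustration of the camera-based acquisition described above (the patent does not name a detector, so OpenCV's stock Haar cascade below is an assumption), the following sketch polls the front camera and crops the first detected face as the "facial information" of step S202:

```python
import cv2

# Assumption: OpenCV's bundled Haar cascade stands in for whatever face
# detector the AR device actually uses (the patent leaves this open).
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def capture_facial_information(camera_index: int = 0):
    """Poll the camera until a face appears, then return the face crop.

    The returned crop plays the role of the "facial information of the
    first target object" acquired in step S202.
    """
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                return None  # camera unavailable
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = _detector.detectMultiScale(gray, scaleFactor=1.1,
                                               minNeighbors=5)
            if len(faces) > 0:
                # Recognition is triggered automatically on the first face.
                x, y, w, h = faces[0]
                return frame[y:y + h, x:x + w]
    finally:
        cap.release()
```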
  • Step S204: acquire the target information of the first target object according to the facial information of the first target object.
  • In step S204 of the present invention, the target information of the first target object is acquired according to the facial information of the first target object, where the target information is used to indicate the social behavior of the first target object.
  • The facial information of the first target object is in one-to-one correspondence with its target information. The target information may further serve as prompt information that helps the second target object understand the first target object, where the second target object is an object that interacts with the first target object according to the target information.
  • the first target object can be registered on the server through the face information. After acquiring the face information of the first target object, the target information of the first target object is acquired from the server according to the face information of the first target object.
  • the target information includes user basic information and social information of the first target object.
  • the user basic information may include basic information such as a nickname, a name, an address, a contact method, and a personalized signature of the first target object.
  • the social information includes dynamic information of the first target object, extended information of the first target object on the third-party platform, historical exchange information of the first target object, and the like.
  • the dynamic information of the first target object may be dynamic timeline information, including but not limited to expressions and comments.
  • An expression refers to a single static, dynamic, or three-dimensional preset image without text; a comment is rich media and may include text, voice, pictures, and other information freely organized by users. Extended information includes third-party social account information; according to the network address characteristics of the third-party social platform, the information published by the first target object on that platform can be pulled using the account information.
  • Historical exchange information is information exchanged with the first target object in the past. It can be used to evoke the second target object's memory of previous communication with the first target object, so that the second target object can start a conversation topic with the first target object more naturally.
  • When the target information of the first target object is acquired according to its facial information, the target information may be displayed at a preset spatial position of the real scene, that is, the target information is superimposed onto a preset spatial position of the real scene, for example onto one side of the first target object. This combines the virtual target information with the real scene and, by acquiring the target information automatically, avoids manually opening social software to search for the dynamic information and historical exchange information of the first target object, which simplifies the process of information interaction.
  • The target information of the first target object is displayed automatically after recognition of the facial information of the first target object is triggered.
  • When the camera cannot easily acquire the facial information of the first target object, the target information can be obtained by voice search, for example by searching for basic information such as a nickname or name.
  • When the second target object and the first target object do not meet in the real scene but the second target object wants to view the social information of the first target object, for example its historical exchange information, the facial information of the first target object cannot be obtained at that time, and the above voice search can be used instead.
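  • A minimal sketch of this face-to-target-information mapping and its voice-search fallback follows. The field names and the in-memory "server" stores are assumptions for illustration; the patent only requires a one-to-one mapping from facial information to target information, plus a name-based fallback:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TargetInfo:
    """Target information per the embodiment: profile + social information."""
    nickname: str
    profile: dict                                   # name, address, contact...
    dynamics: list = field(default_factory=list)    # timeline posts
    extended: dict = field(default_factory=dict)    # third-party accounts
    history: list = field(default_factory=list)     # past exchange records

# Hypothetical server-side stores keyed by a face-feature fingerprint and by
# nickname (the patent stores facial feature data at registration time).
FACE_INDEX: dict[str, TargetInfo] = {}
NAME_INDEX: dict[str, TargetInfo] = {}

def lookup_by_face(face_fingerprint: str) -> Optional[TargetInfo]:
    """Primary entry: resolve target information from facial information."""
    return FACE_INDEX.get(face_fingerprint)

def lookup_by_voice(spoken_query: str) -> Optional[TargetInfo]:
    """Fallback entry: voice search by nickname when no face is visible."""
    return NAME_INDEX.get(spoken_query.strip().lower())
```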
  • Step S206: receive the interaction information sent by the second target object according to the target information.
  • In step S206 of the present invention, the interaction information sent by the second target object according to the target information is received, where the interaction information is used to indicate that the second target object interacts with the first target object.
  • After the target information of the first target object is acquired according to its facial information, the second target object gains a further understanding of the first target object from that target information.
  • The second target object then interacts with the first target object according to its actual intention, and the interaction information it sends according to the target information is received, so that the first target object and the second target object exchange information.
  • The interaction information may be related to the content of the target information, or may be unrelated to it. For example, having learned from the target information that the first target object likes soccer, the second target object may send interaction information inviting the first target object to watch a soccer match, or may instead send an invitation to watch a basketball game so that the first target object can experience a new sport.
  • The interaction information may be virtual interaction information in a virtual scene, including but not limited to expressions, comments, and text manually input by the second target object.
  • The interaction information may also be voice information, image information, video information, etc. recorded in a real scene, which is not limited herein. Interaction thus spans the virtual world and the real world, and this coexistence of virtual and real enriches the types of information interaction.
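  • The interaction information therefore carries a scene type and a media type; one possible record layout is sketched below (all names are illustrative, not from the patent text):

```python
from dataclasses import dataclass
from enum import Enum
import time

class Scene(Enum):
    VIRTUAL = "virtual"   # expressions, comments, typed text
    REAL = "real"         # voice/image/video recorded in the real scene

class Media(Enum):
    EXPRESSION = "expression"
    TEXT = "text"
    VOICE = "voice"
    IMAGE = "image"
    VIDEO = "video"

@dataclass
class InteractionInfo:
    sender_id: str        # the second target object
    receiver_id: str      # the first target object
    scene: Scene
    media: Media
    payload: bytes        # raw media bytes, or UTF-8 encoded text
    timestamp: float

# Example: a typed invitation sent in the virtual scene.
msg = InteractionInfo("second", "first", Scene.VIRTUAL, Media.TEXT,
                      "Want to watch the match?".encode(), time.time())
```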
  • Step S208: publish the interaction information.
  • In step S208 of the present invention, after the interaction information sent by the second target object according to the target information is received, the interaction information is published; the first target object and the second target object can then view the interaction information through the client, so that the two objects interact.
  • The publishing entries mainly include a personal dynamic information entry and a session information entry shared with others.
  • The former allows permission control over publication; the latter includes the entry for interaction information exchanged by the two parties in the virtual scene and the entry for interaction information in the real scene.
  • Permission control is divided into at least four categories, for example: visible to everyone, visible to friends, visible to specific friends, and visible only to oneself. Users with different requirements for the degree of information disclosure can choose accordingly: the widest setting makes information visible to everyone, while privacy-conscious users can restrict visibility to friends only, preventing strangers from peeking at their information and improving the security of user information.
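  • A compact sketch of this four-level publication permission follows; the friend and specific-friend sets are placeholders, since the patent does not define how they are stored:

```python
from enum import Enum

class Visibility(Enum):
    EVERYONE = 1
    FRIENDS = 2
    SPECIFIC_FRIENDS = 3
    ONLY_SELF = 4

def can_view(viewer: str, owner: str, visibility: Visibility,
             friends: set[str], specific: set[str]) -> bool:
    """Decide whether `viewer` may see content published by `owner`."""
    if viewer == owner:
        return True                      # owners always see their own posts
    if visibility is Visibility.EVERYONE:
        return True
    if visibility is Visibility.FRIENDS:
        return viewer in friends
    if visibility is Visibility.SPECIFIC_FRIENDS:
        return viewer in specific
    return False                         # ONLY_SELF
```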
  • Whether it is the basic user information or dynamic information of the first target object, or the interaction information between the first target object and the second target object, the display methods include but are not limited to three-dimensional spiral, spherical, and cylindrical presentations, which makes the display of interaction information more engaging.
  • Through steps S202 to S208, the facial information of the first target object is acquired; the target information of the first target object is acquired according to that facial information, where the target information indicates the social behavior of the first target object; the interaction information sent by the second target object according to the target information is received, where the interaction information indicates that the second target object interacts with the first target object; and the interaction information is published.
  • Instead of a virtual account, the interaction entry is based mainly on facial information, which simplifies the process of information interaction, solves the technical problem that the related-art interaction process is complicated, and achieves the technical effect of simplifying the interaction process.
  • Optionally, receiving the interaction information sent by the second target object according to the target information includes: receiving real interaction information in the real scene sent by the second target object according to the target information; and/or receiving virtual interaction information in the virtual scene sent by the second target object according to the target information.
  • The real interaction information between the second target object and the first target object is recorded, thereby realizing a record of the real world.
  • The AR device is used to record content such as image content and video content in the real scene, without the user having to switch attention back and forth between the screen and reality, as is required on a mobile phone platform.
  • The virtual interaction information in the virtual scene sent by the second target object according to the target information is received. Virtual interaction information is exchange information of the virtual world; it may be a single static, dynamic, or three-dimensional preset image without text, or text, voice, pictures, and other information freely organized by users.
  • After the real interaction information is received, it is stored to a preset storage location, for example on the server, so that it is included in the target information acquired next time.
  • Optionally, after the real interaction information is recorded through the AR glasses, the recorded image content, video content, and the like can be played back without using other platforms, from the same viewing angle at which it was originally recorded, giving users a more realistic experience.
  • Likewise, after the virtual interaction information is received, it is stored to a preset storage location, for example on the server, so that it is included in the target information acquired next time.
  • Optionally, the real interaction information includes at least one or more of the following: voice information in the real scene, image information in the real scene, and video information in the real scene.
  • For example, the voice information may be a conversation between the second target object and the first target object; the image information may be a facial image of the first target object; and the video information may be a recording of a meeting in a conference room. This enriches the types of interaction information.
  • Optionally, in step S202, acquiring the facial information of the first target object includes scanning the face of the first target object to obtain its facial information. After the target information of the first target object is acquired according to the facial information in step S204, the method further includes displaying the target information at a preset spatial position of the real scene.
  • The facial information of the first target object may be obtained by scanning its face; for example, the front camera mounted on the AR glasses automatically performs face recognition on the first target object and obtains its facial information.
  • The target information is displayed at a preset spatial position of the real scene, for example on one side of the first target object; through the AR device, the user can see the target information displayed at the preset spatial position, the first target object, and the other parts of the real scene.
  • In theory, any device with a camera can be used to acquire the facial information of the first target object in this embodiment, including but not limited to AR glasses, mobile communication terminals, and PCs; the ease of use and mode of operation differ between devices.
  • Optionally, displaying the target information at the preset spatial position of the real scene includes: determining the display spatial position of the target information in the real scene according to the current spatial position of the first target object in the real scene; and displaying the target information at that display spatial position.
  • FIG. 3 is a flowchart of a method for displaying target information in a preset spatial position of a real scene according to an embodiment of the present invention. As shown in FIG. 3, the method for displaying target information in a preset spatial position of a real scene includes the following steps:
  • Step S301: determine the current spatial position of the first target object in the real scene.
  • In step S301 of the present invention, after the target information of the first target object is acquired, the current spatial position of the first target object in the real scene is determined; the current spatial position may be the position of the face of the first target object in the real scene.
  • The current spatial position of the first target object in the real scene is determined from information such as its distance from the second target object and its direction relative to the second target object.
  • Step S302: determine the display spatial position of the target information in the real scene according to the current spatial position.
  • In step S302 of the present invention, after the current spatial position of the first target object in the real scene is determined, the display spatial position of the target information is determined according to it. The display spatial position may be to the left, right, top, or bottom of the current spatial position, and may also be set manually, so as to achieve the effect of superimposing the target information onto the real scene.
  • Step S303: display the target information at the display spatial position.
  • The target information may be displayed at the side of the first target object in an automatically floating form, a bouncing form, or a fade-in form, which is not limited herein, thereby making the information interaction more engaging.
  • This embodiment determines the current spatial position of the first target object in the real scene, determines the display spatial position of the target information according to it, and displays the target information there, achieving the purpose of displaying the target information at a preset spatial position of the real scene according to the first target object and simplifying the process of information interaction.
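  • Steps S301 to S303 reduce to anchoring a virtual panel at an offset from the detected face. A simplified sketch in camera-space coordinates follows; the offset value and the "side" placement rule are assumptions, since the patent allows left, right, top, bottom, or manual placement:

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def display_position(face_pos: Vec3, side: str = "right",
                     gap: float = 0.25) -> Vec3:
    """Step S302: derive the panel's display spatial position from the
    current spatial position of the first target object's face.

    `gap` is an illustrative horizontal offset in metres; a real AR runtime
    would also re-anchor the panel as the face moves.
    """
    dx = gap if side == "right" else -gap
    return Vec3(face_pos.x + dx, face_pos.y, face_pos.z)

# Step S301 would supply face_pos from the distance/direction relative to
# the viewer; step S303 then renders the target information at this point.
panel_at = display_position(Vec3(0.0, 1.6, 1.2))
```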
  • Optionally, displaying the target information at the display spatial position includes at least one or more of the following: when the target information includes user profile information, displaying the user profile information of the first target object at a first display spatial position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object at a second display spatial position; when the target information includes extended information, displaying the extended information of the first target object at a third display spatial position; and when the target information includes historical interaction information, displaying the historical interaction information generated during historical interaction between the second target object and the first target object at a fourth display spatial position.
  • The user profile information is the basic information of the first target object, for example its nickname, name, address, contact information, and personalized signature; it is displayed at the first display spatial position.
  • The user profile information of the first target object is superimposed beside its face; through the AR glasses, the user can see both the target information at the first display spatial position and the other parts of the real scene, achieving a combination of the virtual world and the real world.
  • The target information may further include personal dynamic information, which is displayed at the second display spatial position, for example after a display instruction is received. The display instruction may be a voice command, an instruction generated by the user clicking through a gesture, or an instruction generated by the user's gaze.
  • The personal dynamics may be displayed sequentially in timeline order, in a bouncing form, or in a progressive form, which is not limited here.
  • Personal dynamic information is one of the entry points of information interaction.
  • The target information may further include extended information, which is displayed at the third display spatial position. The extended information includes the third-party social account information of the first target object; according to the network address characteristics of the third-party social platform, the information published by the first target object can be pulled using that account information.
  • The target information may further include historical interaction information, which is displayed at the fourth display spatial position and may be picture information, voice information, text information, video information, etc. Historical communication takes the form of a message session, which is one of the entry points for information exchange and records the exchange information in both the virtual scene and the real scene.
  • The target information of this embodiment is virtual content superimposed on the real world, realizing the combination of virtual and real interaction information and thereby bringing a more realistic interactive experience to the user.
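  • The four display positions can be driven by a simple dispatch over whichever fields the target information actually contains. The slot names below are illustrative:

```python
# Slot identifiers for the first..fourth display spatial positions.
SLOTS = ("profile", "dynamics", "extended", "history")

def layout_target_info(target_info: dict) -> dict[str, object]:
    """Map whichever parts of the target information are present onto the
    first..fourth display spatial positions (keys are illustrative)."""
    panels = {}
    for slot in SLOTS:
        content = target_info.get(slot)
        if content:              # only occupy a slot if content exists
            panels[slot] = content
    return panels

panels = layout_target_info({
    "profile": {"nickname": "ava", "signature": "hello"},
    "history": ["met at the 2017 reunion"],
})
# -> profile lands in the first position and history in the fourth; the
#    second and third stay empty (no dynamics or extended info present).
```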
  • Optionally, in step S204, displaying the target information at the preset spatial position of the real scene according to the facial information of the first target object includes: when the face of the first target object is scanned, determining whether the server stores facial feature data matching the facial information of the first target object; if so, determining whether the face scan permission of the first target object allows scanning, that is, whether the scan permission of the account corresponding to the facial feature data allows scanning; and if the face scan permission of the first target object allows scanning, displaying visible information at the preset spatial position, where the visible information includes at least the user profile information of the first target object.
  • FIG. 4 is a flowchart of another method for displaying target information in a preset spatial position of a real scene according to face information of a first target object according to an embodiment of the present invention.
  • the method for displaying target information according to the facial information of the first target object in a preset spatial position of the real scene includes the following steps:
  • Step S401: scan the face.
  • In this information display flow, scanning the face is the main entry scene. A face scan is performed to determine whether a face is present; the faces of the plurality of target objects in view include the face of the first target object.
  • If no face is scanned, scanning continues. If the face of an object is scanned, it is judged whether facial data matching the facial information of the scanned object is stored on the server; if not, scanning continues for the faces of other objects.
  • If the server stores facial data matching the facial information of the scanned object, it is further determined whether the face scan permission of that object allows its visible information within the permission scope to be displayed after its face is scanned. If the permission does not allow this, scanning continues for the faces of other objects, and so on.
  • Step S402: determine whether facial feature data matching the facial information of the first target object is stored on the server.
  • In step S402 of the present invention, when the face of the first target object is scanned, it is determined whether facial feature data matching its facial information is stored on the server.
  • If the first target object has registered its information in the augmented reality application, the server stores its facial feature data.
  • The acquired facial information of the first target object may consist of facial data having preset features, and it is judged whether matching facial feature data is stored on the server.
  • Facial information matches facial feature data when the degree of coincidence or similarity between the data in the facial information and the facial feature data is within a preset threshold; for example, if the degree of coincidence or similarity reaches 80% or more, it is determined that the facial information matches the facial feature data, that is, the server stores facial feature data matching the facial information of the first target object.
  • If it is determined that the server does not store matching facial feature data, step S401 is performed again to continue scanning the faces of objects other than the first target object.
  • Step S403: determine whether the face scan permission of the first target object allows scanning.
  • In step S403 of the present invention, if it is determined that the server stores facial feature data matching the facial information of the first target object, it is determined whether the face scan permission of the first target object allows scanning.
  • The face scan permission of the first target object indicates the extent to which its face may be scanned: all objects may be allowed to scan the face of the first target object through the augmented reality application (everyone may scan); only preset objects may be allowed to scan (only preset objects may scan); or any object may be prohibited from scanning (scanning is prohibited), where a preset object may be a friend.
  • The face scan permission of the first target object is set when the first target object requests the server to store its facial feature data.
  • If it is determined that the face scan permission of the first target object allows scanning, step S404 is performed; otherwise, that is, if the second target object is not allowed to scan the face of the first target object through the augmented reality application, step S401 is performed again to continue scanning the faces of objects other than the first target object.
  • Step S404: display the visible information of the first target object within the permission scope at the preset spatial position.
  • In step S404 of the present invention, if it is determined that the face scan permission of the first target object allows scanning, the visible information of the first target object within the permission scope is displayed at the preset spatial position, where the visible information includes at least the user profile information of the first target object.
  • The visible information of the first target object within the permission scope may include its user profile information, extended information, and dynamic information within that scope.
  • The permission scope of the user profile information and extended information is determined when the first target object registers its information with the server; the permission control of each item may be classified into at least three categories: visible to all objects through the augmented reality application, visible only to preset objects through the augmented reality application, and visible only to oneself.
  • The control permission of dynamic information is determined when the dynamic information is published and can include four categories: visible to all objects through the augmented reality application, visible to friends, visible to specific friends, and visible only to oneself.
  • After it is determined that the face scan permission of the first target object allows scanning, the user profile information, extended information, and dynamic information within the permission scope can be displayed at the preset spatial position. Dynamic information is one of the entry points of information interaction, including but not limited to expressions and comments; the other major entry point is the message session, which records the exchange information in both the virtual scene and the real scene.
  • This embodiment scans the face; when the face of the first target object is scanned, determines whether the server stores facial feature data matching its facial information; if so, determines whether the face scan permission of the first target object allows scanning; and, if scanning is allowed, displays the visible information at the preset spatial position, where the visible information includes at least the user profile information of the first target object. This achieves the purpose of displaying the target information at a preset spatial position of the real scene according to the facial information of the first target object, thereby simplifying the process of information interaction.
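  • Steps S401 to S404 combine matching against stored facial feature data at a similarity threshold (the patent's example is 80%) with a scan-permission gate. The similarity function and record layout below are placeholders; a production system would compare real face embeddings:

```python
from enum import Enum

class ScanPermission(Enum):
    EVERYONE = 1      # all objects may scan
    PRESET_ONLY = 2   # e.g. friends only
    FORBIDDEN = 3

def similarity(a: list[float], b: list[float]) -> float:
    """Placeholder: fraction of near-equal components. A real system would
    compare face embeddings (cosine similarity or the like)."""
    if len(a) != len(b) or not a:
        return 0.0
    return sum(1 for x, y in zip(a, b) if abs(x - y) < 1e-6) / len(a)

def handle_scan(scanned: list[float], viewer: str,
                records: list[dict], threshold: float = 0.8):
    """Steps S402-S404: find a matching registered face, then apply the
    owner's scan permission before returning any visible information."""
    for rec in records:
        if similarity(scanned, rec["features"]) < threshold:
            continue                                  # S402: no match
        perm = rec["scan_permission"]
        if perm is ScanPermission.FORBIDDEN:
            return None                               # S403: not allowed
        if perm is ScanPermission.PRESET_ONLY and viewer not in rec["preset"]:
            return None
        return rec["visible_info"]                    # S404: display
    return None   # keep scanning other faces (back to S401)
```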
  • Optionally, the visible information includes the extended information of the first target object, and in step S404, displaying the visible information within the permission scope at the preset spatial position includes: determining whether the first target object has account information of a third-party platform; if so, receiving a first display instruction for indicating that the extended content corresponding to the account information is to be displayed; and, after the first display instruction is received, displaying the extended content within the permission scope at the preset spatial position.
  • FIG. 5 is a flowchart of a method for displaying the visible information of a first target object within the permission scope at a preset spatial position according to an embodiment of the present invention. As shown in FIG. 5, the method includes the following steps:
  • Step S501: determine whether the first target object has account information of a third-party platform, where the extended information includes the account information.
  • When it is determined that the face scan permission of the first target object allows scanning, the extended information of the first target object within the permission scope may be displayed after its face is scanned; the extended information includes the first target object's account information on the third-party platform.
  • Through that account information, the second target object can obtain the content published by the first target object on the third-party platform.
  • Before the visible information within the permission scope is displayed, it is determined whether the first target object has account information of a third-party platform.
  • Step S502: receive a first display instruction for indicating that the extended content corresponding to the account information is to be displayed.
  • In step S502 of the present invention, if it is determined that the first target object has account information of a third-party platform, the first display instruction is received.
  • The icon of each third-party platform from which content can be pulled may also be marked at the preset spatial position, for example displayed at the bottom of the display position of the user profile information.
  • Step S503: display the extended content within the permission scope at the preset spatial position.
  • In step S503 of the present invention, after the first display instruction is received, the extended content within the permission scope is displayed at the preset spatial position, and the display can switch to the timeline information flow of the third-party platform, thereby providing rich information.
  • This embodiment determines whether the first target object has account information of a third-party platform, where the extended information includes the account information; if so, receives the first display instruction for indicating that the extended content corresponding to the account information is to be displayed; and, after receiving the first display instruction, displays the extended content within the permission scope at the preset spatial position, achieving the purpose of displaying the visible information at the preset spatial position.
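  • The "network address characteristics" pull could be as simple as expanding stored account information into per-platform timeline URLs. The platform names and URL templates below are fabricated placeholders; real platforms would require their own APIs and authentication:

```python
# Hypothetical URL templates capturing the "network address characteristics"
# of third-party platforms (placeholders, not real services).
PLATFORM_URL_TEMPLATES = {
    "example-blog": "https://blog.example.com/users/{account}/timeline",
    "example-photos": "https://photos.example.com/{account}/public",
}

def extended_content_urls(extended_info: dict[str, str]) -> list[str]:
    """Step S503 precursor: turn stored third-party account information into
    the pull URLs from which a timeline information flow could be fetched."""
    urls = []
    for platform, account in extended_info.items():
        template = PLATFORM_URL_TEMPLATES.get(platform)
        if template:
            urls.append(template.format(account=account))
    return urls

# E.g. a first target object who registered an "example-blog" account:
urls = extended_content_urls({"example-blog": "ava123"})
```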
  • Optionally, the visible information includes the personal dynamic information of the first target object, and in step S404, displaying the visible information within the permission scope at the preset spatial position includes: receiving a second display instruction for indicating that the personal dynamic information is to be displayed; and, after the second display instruction is received, displaying the personal dynamic information within the permission scope at the preset spatial position.
  • FIG. 6 is a flowchart of another method for displaying the visible information of a first target object within the permission scope at a preset spatial position according to an embodiment of the present invention.
  • As shown in FIG. 6, the method includes the following steps:
  • Step S601: receive a second display instruction for indicating that the personal dynamic information is to be displayed.
  • When it is determined that the face scan permission of the first target object allows scanning, the personal dynamic information of the first target object within the permission scope may be displayed after its face is scanned, and the second display instruction can be received.
  • The second display instruction includes a voice instruction, an instruction generated by the user clicking through a gesture, an instruction generated by the user pausing their gaze, and the like; according to the second display instruction, an action such as scrolling down or clicking the personal dynamic information icon is performed.
  • Step S602: display the personal dynamic information within the permission scope at the preset spatial position.
  • In step S602 of the present invention, after the second display instruction is received, the personal dynamic information within the permission scope is displayed at the preset spatial position, for example relative to the display position of the user profile information.
  • This embodiment receives the second display instruction for indicating that the personal dynamic information is to be displayed and, after receiving it, displays the personal dynamic information at the preset spatial position, achieving the purpose of displaying the visible information within the permission scope at the preset spatial position and simplifying the process of information interaction.
  • Optionally, before the facial information of the first target object is acquired, the method includes: sending a first request to the server, where the first request carries facial feature data matching the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object; and/or sending a second request to the server, where the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information; and/or sending a third request to the server, where the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information.
  • Before the facial information of the first target object is acquired, the first target object registers its information with the server; the registered information includes the facial information of the first target object.
  • During registration, the facial image information of the first target object needs to be acquired in real time, and authenticity verification is performed on it. This includes, but is not limited to, verifying that the face is not a dummy: the first target object is prompted to perform a specified facial action in real time, and it is determined whether the actual facial action made by the first target object matches the facial action specified for verifying authenticity. By further detecting whether the face is in three-dimensional form, counterfeit registration behavior is eliminated.
  • The scan permission can be set to allow everyone to scan, allow only friends to scan, or prohibit scanning.
  • the registered information may further include user profile information of the first target object, including but not limited to the nickname, name, address, contact information, signature, and the like of the first target object.
  • the registered information may also include extended information of the first target object.
  • the extended information includes the third-party social account information provided by the user.
  • A third-party social platform in this embodiment allows the information a user publishes to be pulled if the user's account is known. This provides an aggregated third-party information pull capability so that the scanning party obtains richer stored information.
  • For the user profile information and extended information of this embodiment, the degree of information disclosure can be selected at registration time according to the user's own wishes.
  • The control granularity of each piece of information can be divided into at least three categories: visible to everyone, visible only to friends, and visible only to oneself. For example, permission control for age, phone number, address information, etc. can be set item by item as needed.
  • This embodiment does not limit the type of client: registration can be performed through AR glasses, a mobile communication terminal, or a PC, which is not limited herein.
  • Optionally, sending the first request to the server includes: when the face of the first target object is detected, issuing an instruction for instructing the first target object to perform a preset facial action; when the actual facial action performed by the first target object according to the instruction matches the preset facial action, detecting whether the face of the first target object is in three-dimensional form; when the face of the first target object is detected to be in three-dimensional form, acquiring the facial feature data of the first target object; and sending the first request to the server according to the facial feature data.
  • FIG. 7 is a flow chart of a method of transmitting a first request to a server, in accordance with an embodiment of the present invention. As shown in FIG. 7, the method for transmitting a first request to a server includes the following steps:
  • Step S701: detect a face.
  • In step S701 of the present invention, faces are detected.
  • This embodiment records facial information in real time.
  • There may be a plurality of objects in view, including the first target object. Before the facial information of the first target object is acquired, its facial image data is detected, for example through the front camera.
  • The user takes a self-portrait face shot in real time, and the system verifies the authenticity of the received facial image data.
  • This embodiment does not prescribe a specific face detection algorithm; options include but are not limited to traditional algorithms such as feature-based recognition, template matching, and neural network recognition, as well as the GaussianFace face recognition algorithm.
  • If no face is detected, detection continues.
  • Step S702: issue an instruction for instructing the first target object to perform a preset facial action.
  • In step S702 of the present invention, when the face of the first target object is detected, an instruction for performing a preset facial action is issued; the first target object performs a facial action according to the instruction, producing an actual facial action.
  • The first target object is prompted in real time to perform the specified facial action, for example by a voice instruction. Preset facial actions include raising the head, bowing the head, turning slightly left, turning slightly right, frowning, opening the mouth, and blinking.
  • Step S703: determine whether the actual facial action matches the preset facial action.
  • After the instruction for performing the preset facial action is issued, it is determined whether the actual facial action matches the preset facial action. If not, step S701 is performed again to continue detecting faces; if so, step S704 is performed. The authenticity of the received image information is thus judged by whether the actual and preset facial actions match.
  • Step S704: detect whether the face of the first target object is in three-dimensional form.
  • In step S704 of the present invention, if it is determined that the actual facial action matches the preset facial action, it is detected whether the face of the first target object is in three-dimensional form, that is, face depth information detection is performed on the face of the first target object.
  • This defends against camouflage attacks such as playing a previously prepared face image on a terminal screen in order to deceive the registration system.
  • Step S705: if the face of the first target object is detected to be in three-dimensional form, acquire the facial feature data of the first target object.
  • The facial feature data matching the facial information of the first target object is acquired; an error within a preset threshold is allowed between the facial information and the facial feature data.
  • Step S706: send a first request to the server according to the facial feature data.
  • The first request carries facial feature data matching the facial information of the first target object; the server responds to the first request and stores the facial feature data of the first target object.
  • Optionally, in step S204, acquiring the target information of the first target object according to its facial information includes: requesting, according to the facial information of the first target object, that the server deliver the target information matching the facial feature data; and receiving the delivered target information.
  • After the facial information of the first target object is acquired, a matching request is sent to the server according to that facial information; the server responds to the request and searches the facial feature database for the facial feature data of the first target object, and after finding it, delivers the target information.
  • This embodiment detects a face; when the face of the first target object is detected, issues an instruction for performing a preset facial action; determines whether the actual facial action matches the preset facial action; if it matches, detects whether the face of the first target object is in three-dimensional form; when it is, acquires the facial feature data of the first target object; and sends the first request to the server according to the facial feature data, achieving the purpose of having the server store facial feature data matching the facial information of the first target object.
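  • Steps S701 to S706 amount to a challenge-response liveness pipeline. In the sketch below, the four callables are placeholders for device-specific routines the patent does not specify (action recognition, depth measurement, feature extraction, and the network request):

```python
import random

# Preset facial actions named in the embodiment.
CHALLENGES = ["raise head", "bow head", "turn left", "turn right",
              "frown", "open mouth", "blink"]

def register_face(detect_action, measure_depth, extract_features, send):
    """Steps S701-S706 as a challenge-response pipeline.

    `detect_action(challenge)` returns the action the user actually performed,
    `measure_depth()` reports whether the face has three-dimensional relief,
    `extract_features()` yields the facial feature data, and
    `send(features)` issues the first request to the server.
    """
    challenge = random.choice(CHALLENGES)          # S702: instruct an action
    if detect_action(challenge) != challenge:      # S703: action must match
        return False                               # back to S701 in the flow
    if not measure_depth():                        # S704: reject flat replays
        return False                               # e.g. a photo on a screen
    features = extract_features()                  # S705: acquire features
    send(features)                                 # S706: first request
    return True
```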
• Optionally, search information for indicating a search for the target information is received, where the user profile information includes the search information; the target information is then obtained based on the search information.
  • FIG. 8 is a flowchart of another method of information interaction according to an embodiment of the present invention. As shown in FIG. 8, the information interaction method further includes the following steps:
  • Step S801 receiving search information for indicating search target information.
  • the search information for indicating the search target information is received, wherein the user profile information includes the search information.
• For information display, acquiring the facial information of the first target object is the main entrance, supplemented by search information for indicating a search for the target information; the search information may be user information such as a nickname or name entered by voice search.
  • Obtaining the face information of the first target object may be applied to the face visible scene, and the search information for indicating the search target information may be applied in a scene in which the face information cannot be acquired or accurately acquired.
  • Step S802 acquiring target information according to the search information.
  • the target information is acquired according to the search information.
• After the search information is received, the target information is acquired according to it; for example, the target information of the first target object may be acquired according to the nickname, name, and the like of the first target object.
• In this embodiment, before the interaction information sent by the second target object according to the target information is received, and in a case where the face of the first target object is not visible, search information for indicating a search for the target information is received, the user profile information including the search information; the target information is then obtained according to the search information, which still allows the target information to be acquired and simplifies the process of information interaction. A sketch of this fallback follows.
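• Below is a minimal sketch of the face-first, search-fallback lookup, assuming a hypothetical in-memory user store and lookup helpers.

```python
USERS = {
    "features-001": {"nickname": "alice", "dynamics": ["post 1", "post 2"]},
}

def lookup_by_face(face_info):
    return USERS.get(face_info)

def lookup_by_search(search_info):
    for user in USERS.values():
        if user["nickname"] == search_info:
            return user
    return None

def get_target_info(face_info=None, search_info=None):
    # Face information is the main entrance; search information (e.g. a
    # nickname or name, possibly spoken) is used when the face is not visible.
    if face_info is not None:
        return lookup_by_face(face_info)
    if search_info is not None:
        return lookup_by_search(search_info)
    raise ValueError("either face_info or search_info is required")

print(get_target_info(search_info="alice"))
```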
  • FIG. 9 is a flow chart of another method of information interaction according to an embodiment of the present invention. As shown in FIG. 9, the information interaction method further includes the following steps:
  • Step S901 identifying a facial contour of the first target object according to the facial information of the first target object.
• The facial contour of the first target object is identified according to the facial information of the first target object; for example, the AR glasses may identify the contour from that information.
  • Step S902 adding static and/or dynamic three-dimensional image information at a preset position of the facial contour.
• In step S902 of the present invention, static and/or dynamic three-dimensional image information is added at a preset position of the facial contour.
  • the three-dimensional image information may be a three-dimensional decoration, and a static or dynamic three-dimensional decoration is added to the recognized face contour by the AR glasses.
• After the facial information of the first target object is acquired, this embodiment identifies the facial contour of the first target object according to that information and adds static and/or dynamic three-dimensional image information at a preset position of the contour, thereby enhancing the interest of information interaction; see the sketch below.
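• As a sketch, anchoring a decoration at a preset position of the recognized contour can be as simple as offsetting from a named landmark; the landmark dictionary and asset fields below are assumptions for illustration.

```python
def add_decoration(face_contour, decoration, offset=(0.0, -0.3, 0.0)):
    """Place a static or dynamic 3D decoration relative to a contour
    landmark, e.g. a hat floating slightly above the forehead."""
    x, y, z = face_contour["forehead"]
    dx, dy, dz = offset
    decoration["position"] = (x + dx, y + dy, z + dz)
    return decoration

contour = {"forehead": (0.0, 1.6, 2.0)}          # hypothetical landmark output
hat = {"model": "party_hat.glb", "animated": True}  # static or dynamic asset
print(add_decoration(contour, hat))
```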
• Publishing the interaction information includes at least one of the following: publishing the interaction information in voice form; publishing the interaction information in picture form, where picture-form interaction information includes panoramic-picture interaction information; publishing the interaction information in video form; and publishing interaction information in the form of a 3D model.
• The interaction information generated by this embodiment depends on the hardware used. The relatively intuitive and fast forms mainly include interaction information as voice, as pictures, and as video; depending on AR device capabilities, panoramic-picture and 3D-model interaction information are also included, as modeled below.
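• One way to model these forms is a small enumeration plus a record type, as sketched below; the names are illustrative rather than taken from the method.

```python
from dataclasses import dataclass
from enum import Enum, auto

class InteractionForm(Enum):
    VOICE = auto()
    PICTURE = auto()
    PANORAMIC_PICTURE = auto()  # picture form tied to AR device capability
    VIDEO = auto()
    MODEL_3D = auto()           # also tied to AR device capability

@dataclass
class InteractionInfo:
    form: InteractionForm
    payload: bytes
    author_id: str

note = InteractionInfo(InteractionForm.VOICE, b"...", "user-42")
print(note.form.name)
```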
  • Embodiments of the invention are preferably applicable to AR glasses devices that have a front camera.
• The embodiment of the present invention is not limited to the AR glasses device; it may also be a mobile communication terminal or a PC, and is theoretically applicable to any device having a camera, the differences being ease of use and interaction mode.
  • the embodiment of the invention further provides an augmented reality social system, which mainly comprises a registration module, an information display and interaction module, and an information generation and release module.
• The registration module provides entry of user information that includes a real face; the information display and interaction module provides AR information display and an interaction portal once a face is recognized; and the information generation and release module focuses on generating the user's own dynamics.
  • FIG. 10 is a flowchart of a method of information registration according to an embodiment of the present invention. As shown in FIG. 10, the method for registering information includes the following steps:
  • step S1001 basic information is entered.
  • the information registered by the user in the system includes basic information, face information, and extended information.
  • the basic information is similar to the existing platforms, including but not limited to nickname, name, gender, address, contact information, signature, etc.
  • Step S1002 detecting a face.
  • the face information is the key information of the system.
  • the user needs to take a self-photograph of the face in real time, and the system will verify the authenticity of the received facial image information.
• The verification process includes, but is not limited to, verifying that a face is present by using a face detection algorithm. If a face is detected, step S1003 is performed; if no face is detected, this step continues to detect the face.
• This embodiment does not limit the face detection algorithm to a specific method; options include, but are not limited to, traditional algorithms such as feature recognition, template recognition, and neural network recognition, as well as the Gaussian Face algorithm.
  • Step S1003 instructing the user to make a specified facial action in real time.
  • the system prompts the user to perform a specified facial motion in real time, and the user makes an actual facial motion according to the system prompt.
• In step S1004, it is determined whether the actual facial action made by the user matches the specified facial action. If they match, step S1005 is performed; if they do not match, the process returns to step S1002 to continue detecting users' faces.
• In step S1005, face depth information detection is performed.
  • step S1006 it is determined whether the detected facial image information is in a three-dimensional form.
• The depth camera of the AR glasses can be used to detect whether the face is in three-dimensional form, thereby defeating currently known methods of camouflaging facial image information, for example playing pre-prepared face images or videos on the screen of a mobile communication terminal in order to trick the registration system.
• In step S1007, the server is requested to store the facial image information, using it as the facial feature data.
• If the detected facial image information is determined to be in three-dimensional form, the server is requested to store the facial image information as facial feature data in the facial feature database, completing the registration of face information that follows the entry of basic information. A server-side sketch follows.
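• A minimal server-side sketch of this storage step, using SQLite for the facial feature database; the schema and serialization are assumptions for illustration.

```python
import sqlite3

def store_facial_features(user_id, features, db_path="face_features.db"):
    """Persist verified facial feature data keyed by user, as in step S1007."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS face_features ("
        "user_id TEXT PRIMARY KEY, features TEXT)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO face_features VALUES (?, ?)",
        (user_id, ",".join(str(v) for v in features)),
    )
    conn.commit()
    conn.close()

store_facial_features("user-42", [0.12, 0.87, 0.45])
```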
  • the extended information includes the third-party social account information provided by the user, and the third-party social account information can be used to pull the information posted by the user on the third-party social platform.
• The system thus provides the ability to aggregate third-party social platform information, so that the scanning party can obtain richer existing information; a sketch of such aggregation follows.
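• Below is a sketch of aggregating third-party platform content from the account information in the extended information. The fetcher callables stand in for real platform APIs, which are not specified here.

```python
def aggregate_third_party(extended_info, fetchers):
    """Pull the user's posts from each linked third-party platform.
    `fetchers` maps a platform name to a callable returning that
    platform's timeline (all hypothetical)."""
    timeline = []
    for account in extended_info.get("accounts", []):
        fetch = fetchers.get(account["platform"])
        if fetch is not None:
            timeline.extend(fetch(account["account_id"]))
    # newest first, mirroring a timeline information flow
    timeline.sort(key=lambda post: post["time"], reverse=True)
    return timeline

def fake_platform_fetch(account_id):
    return [{"time": 2, "text": f"{account_id}: hello"},
            {"time": 1, "text": f"{account_id}: first post"}]

info = {"accounts": [{"platform": "demo", "account_id": "alice"}]}
print(aggregate_third_party(info, {"demo": fake_platform_fetch}))
```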
• The information content registered in this embodiment can be selected according to the user's wishes, and the degree of information disclosure is realized through permission control.
• Each item of the basic information and the extended information can be controlled at a granularity of at least three categories: visible to everyone, visible to friends only, and visible only to the user.
• Permission control over the face information itself can likewise be divided into at least three categories: anyone may scan, only friends may scan, and scanning is prohibited. A sketch of these controls follows.
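• These two permission axes can be sketched as follows; the category names are illustrative.

```python
from enum import Enum

class InfoVisibility(Enum):
    EVERYONE = 1
    FRIENDS_ONLY = 2
    SELF_ONLY = 3

class ScanPermission(Enum):
    ANYONE_MAY_SCAN = 1
    FRIENDS_MAY_SCAN = 2
    SCANNING_FORBIDDEN = 3

def may_view(visibility, viewer_is_friend, viewer_is_owner):
    if viewer_is_owner:
        return True
    if visibility is InfoVisibility.EVERYONE:
        return True
    return visibility is InfoVisibility.FRIENDS_ONLY and viewer_is_friend

def may_scan(permission, scanner_is_friend):
    if permission is ScanPermission.ANYONE_MAY_SCAN:
        return True
    return permission is ScanPermission.FRIENDS_MAY_SCAN and scanner_is_friend

print(may_view(InfoVisibility.FRIENDS_ONLY, True, False))   # True
print(may_scan(ScanPermission.SCANNING_FORBIDDEN, True))    # False
```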
  • FIG. 11 is a flow chart of a method for displaying and interacting information according to an embodiment of the present invention. As shown in FIG. 11, the method for displaying and interacting with the information includes the following steps:
  • Step S1101 face scanning.
  • This embodiment can detect a human face by a camera, for example, detecting a human face through a front camera of the AR glasses.
• In step S1102, it is determined whether a face is detected. If a face is detected, step S1103 is performed; if not, step S1101 is performed to continue face scanning.
• In step S1103, it is determined whether corresponding facial feature data exists in the system. If no facial feature data corresponding to the detected facial image information exists, step S1101 is performed to detect the faces of other users; if such data exists, step S1104 is performed.
• In step S1104, it is determined whether face-scanning permission is granted. If it is, step S1105 is performed; if scanning of the face is not permitted, the process returns to step S1101 to continue scanning the faces of other users.
• In step S1105, the permission-visible information is displayed, where such information includes basic information and dynamic timeline information; the latter is one of the interaction entrances, covering, among other things, expressions and comments. Another major interaction entrance is the message session, which records the exchange of information between the virtual and the real.
  • step S1106 it is determined whether there is third-party platform account information.
• Step S1107: if there is third-party platform account information, the platform icon is displayed.
  • Step S1108 Receive indication information indicating that the platform icon is expanded.
• The indication information includes a voice instruction, an instruction generated by the user through a gesture click, an instruction generated by the user through a gaze pause, and the like. If indication information instructing expansion of the platform icon is received, step S1109 is performed.
  • Step S1109 the user information flow of the platform is presented.
  • the user information flow of the platform is presented, thereby realizing information display and interaction.
• Information display in this embodiment is mainly based on scanning a human face, supplemented by voice search of a nickname, name, and the like, applied respectively to scenes where a face is visible and where it is not.
• The basic process is to scan and recognize a face, reveal the identified user's basic information and dynamics, mark icons of other social platforms whose content can be pulled, and pop up the extended content when an icon is clicked; a sketch of this pass follows.
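• The following sketch walks through that pass (steps S1101 to S1107) against a hypothetical in-memory user store; the record fields and matcher are placeholders.

```python
USERS = {
    "alice": {
        "features": "feat-a",
        "scan": "friends",                  # anyone | friends | forbidden
        "friends": {"bob"},
        "basic": {"name": "Alice"},
        "timeline": ["went hiking"],
        "platforms": ["demo-platform"],     # third-party account entries
    },
}

def find_by_features(face):
    for uid, rec in USERS.items():          # S1103: registered feature data?
        if rec["features"] == face:
            return uid
    return None

def may_scan(uid, viewer):
    scan = USERS[uid]["scan"]               # S1104: face-scan permission
    return scan == "anyone" or (scan == "friends" and viewer in USERS[uid]["friends"])

def scan_and_display(face, viewer):
    if face is None:                        # S1101/S1102: no face found
        return
    uid = find_by_features(face)
    if uid is None or not may_scan(uid, viewer):
        return
    rec = USERS[uid]                        # S1105: permission-visible info
    print("basic:", rec["basic"], "timeline:", rec["timeline"])
    for icon in rec["platforms"]:           # S1106/S1107: platform icons
        print("icon:", icon)

scan_and_display("feat-a", "bob")
```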
  • the information generation and distribution module of the embodiment of the present invention is introduced below.
• The information generated by this embodiment depends on the hardware used. Taking AR glasses as an example, the content mainly includes voice, pictures and video; it also includes content that depends on AR device capabilities, such as panoramic images and 3D models.
  • Interactive information can be preset expressions, comments, and more.
• A special kind of interaction information uses the face recognition capability: the system can add static or dynamic three-dimensional decorations at the recognized facial contour.
  • the publishing portal of this embodiment mainly includes personal dynamic information, and session information with others.
• Personal dynamic information can be published under permission control, and conversation information with others includes both parties' virtual-world information and the recorded real-world information.
• Permission control here is divided into at least four categories: visible to all, visible to friends, visible to specific friends, and visible only to the user. People have different privacy needs: those happy to be widely seen can choose visible to all, while accounts highly concerned about privacy can restrict visibility to friends so that strangers cannot peek into their information.
  • the AR glasses of this embodiment are installed with AR applications independent of other platforms, and the input and output of information are completed on the glasses platform.
• The interaction portal is mainly based on face recognition, which simplifies the process of information interaction.
  • the application environment of the embodiment of the present invention may be, but is not limited to, the reference to the application environment in the foregoing embodiment, which is not described in this embodiment.
  • An embodiment of the present invention provides an optional specific application for implementing the foregoing information interaction method.
• The front camera of the AR glasses performs automatic face recognition instead of virtual-account search, and the superimposition capability of the glasses displays the recognized user's profile and social information in AR form, so that interaction happens both in reality and within the social system. This provides a new AR social system based on faces rather than virtual accounts.
• Existing interaction is oriented mainly to situations where people are not physically together: acquaintances send messages point-to-point, and timeline information flows are browsed from time to time to find information of interest and interact with it.
• The AR social system of this embodiment automatically recognizes faces, and many usage scenarios are triggered when people meet in reality, automatically displaying the other party's information so that their dynamics can be understood.
• In a friend scenario, the historical conversations between the two parties can further be shown, evoking memories of their exchanges. In a non-friend scenario, the other party's dynamics and information make it easier to find an opening topic and start communicating more naturally.
  • the AR social system not only communicates with the virtual world, but also contains real-world memories.
  • AR glasses can conveniently record the voice, image and video in reality, so that the interaction between the virtual world and the real world is all recorded in the AR social system, which makes the virtual and real coexistence and enriches the information type of the system.
• What the user sees is what is recorded; there is no need to switch attention back and forth between screen and reality while recording, as on a mobile phone platform. On playback, what was experienced is replayed from the perspective of the original recording, giving a more realistic feeling.
  • FIG. 12 is a schematic diagram of a basic information display according to an embodiment of the present invention.
• The AR glasses scan a real-world face and, after recognition, automatically display the user's basic information beside the face; for example, the user's name "Melissa Banks", hometown "Hometown: Chicaga", birthday "Birthday: May, 23, 1987" and other information are superimposed on the real scene, and "Add friend" and "Message" entries can also be displayed.
• The user's basic information is virtual while everything else is the real scene, achieving the combination of virtual and real.
  • FIG. 13 is a schematic illustration of another basic information display in accordance with an embodiment of the present invention.
• A flip or click operation on the screen can display the personal dynamic information, which is virtual content superimposed on the real world; personal dynamics within the system are ordered along the time axis.
• Aggregated information from available third-party platforms can be indicated by icons at the bottom; clicking a platform icon switches to that platform's timeline information flow as in the image above.
  • An expression refers to a single static or dynamic or 3D preset image without text. Comments are rich media, including text, voice, pictures and other freely organized information.
• FIG. 14 is a schematic diagram of an AR information display according to an embodiment of the present invention. As shown in FIG. 14, the AR information is displayed in a spherical manner, which enhances the interest of information display.
  • FIG. 15 is a schematic diagram of another AR information display according to an embodiment of the present invention.
  • the display manner of the AR can be three-dimensional graphics such as three-dimensional spirals and cylinders, thereby improving the interest of information display.
• The three-dimensional display capability of AR is fully utilized to provide users with more interesting presentation modes, including but not limited to three-dimensional spirals, spheres, and cylinders.
• This embodiment discards the virtual account and provides a new augmented-reality way of socializing based on real faces.
• When acquaintances such as classmates, friends, colleagues and even family members meet, encounter or pass by each other in real life, they usually do not open social software to search for each other's dynamics and the content of their last exchange.
  • This embodiment provides a natural and convenient way to automatically display the other party's information, dynamics and mutual communication sessions in the glasses when they meet.
  • this information itself has the effect of evoking the previous exchange of memories and understanding the latest developments of the other party, and also provides more topics and background information for the communication in the real world.
  • important exchanges in reality can also be fed back into the system as a memory.
• The system may thus have a beneficial effect of helping people become better acquainted.
• The system also notifies the scanned person, letting them know who is scanning them, which is expected to promote more social behavior.
  • the embodiment is most suitable for the AR glasses device with the front camera.
• Such a device is convenient to carry and operate, improving the user experience.
• The embodiment of the present invention is not limited to the AR glasses device; any device having a camera is applicable, though with differences in ease of use and interaction mode.
• The method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation.
• The technical solution of the present invention, in essence or in the part contributing over the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) that includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
  • FIG. 16 is a schematic diagram of an information interaction apparatus according to an embodiment of the present invention.
• The information interaction apparatus may include: a first obtaining unit 10, a second obtaining unit 20, a receiving unit 30, and a publishing unit 40.
  • the first obtaining unit 10 is configured to acquire facial information of the first target object.
  • the second obtaining unit 20 is configured to acquire target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate the social behavior of the first target object.
  • the receiving unit 30 is configured to receive interaction information that is sent by the second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object.
  • the publishing unit 40 is arranged to publish the interaction information.
• The first obtaining unit 10, second obtaining unit 20, receiving unit 30 and publishing unit 40 may run in the terminal as part of the apparatus, and their functions may be executed by the processor in the terminal.
• The terminal can be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
• The receiving unit 30 includes: a first receiving module configured to receive real interaction information in the real scene sent by the second target object according to the target information; and/or a second receiving module configured to receive virtual interaction information in the virtual scene sent by the second target object according to the target information.
• The first receiving module and the second receiving module may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
  • the information interaction device further includes: a first storage unit, configured to store the real interaction information to the preset storage location after receiving the real interaction information in the real scene sent by the second target object according to the target information; And/or the second storage unit is configured to store the virtual interaction information to the preset storage location after receiving the virtual interaction information in the virtual scenario sent by the second target object according to the target information.
• The foregoing first storage unit and second storage unit may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
• The foregoing real interaction information includes one or more of the following: voice information in the real scene; image information in the real scene; and video information in the real scene.
• The first obtaining unit 10 is configured to scan the face of the first target object to obtain the facial information of the first target object. The apparatus further includes a display unit configured to display the target information at a preset spatial position of the real scene after the target information of the first target object is acquired according to the facial information of the first target object.
• The first obtaining unit 10 and the display unit may run in the terminal as part of the apparatus, and the functions they implement may be executed by the processor in the terminal; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
  • the second obtaining unit 20 includes: a first determining module, a second determining module, and a display module.
  • the first determining module is configured to determine a current spatial location of the first target object in the real scene;
  • the second determining module is configured to determine a display spatial location of the target information in the real scene according to the current spatial location;
• The display module is configured to display the target information at the display spatial position.
• The foregoing first determining module, second determining module and display module may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
• The display module is configured to perform at least one of the following: when the target information includes user profile information, displaying the user profile information of the first target object at the first display spatial position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object at the second display spatial position; when the target information includes extended information, displaying the extended information of the first target object at the third display spatial position; and when the target information includes historical interaction information, displaying at the fourth display spatial position the historical interaction information generated by the second target object and the first target object during historical interaction. A sketch of this routing follows.
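• A minimal sketch of this routing, mapping each kind of target information to its preset display position; the slot names are assumptions for illustration.

```python
DISPLAY_SLOTS = {
    "profile": "first_position",    # user profile information
    "dynamic": "second_position",   # personal dynamic information
    "extended": "third_position",   # extended information
    "history": "fourth_position",   # historical interaction information
}

def place_target_info(target_info):
    """Route each kind of target information to its display spatial position."""
    return {DISPLAY_SLOTS[kind]: value
            for kind, value in target_info.items()
            if kind in DISPLAY_SLOTS}

print(place_target_info({"profile": {"name": "Alice"}, "history": ["hi"]}))
```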
• The information interaction apparatus may include: a first obtaining unit 10, a second obtaining unit 20, a receiving unit 30, a publishing unit 40, and a display unit 50.
  • the display unit 50 includes a first determining module 51, a second determining module 52, and a display module 53.
• The roles of the first obtaining unit 10, second obtaining unit 20, receiving unit 30 and publishing unit 40 in this embodiment are the same as those in the information interaction apparatus of the embodiment shown in FIG. 16 and are not repeated here.
  • the display unit 50 is configured to display the target information in a preset spatial position of the real scene after acquiring the target information of the first target object according to the facial information of the first target object.
• The first determining module 51 is configured to determine, when the face of the first target object is scanned, whether facial feature data matching the facial information of the first target object is stored in the server.
  • the second determining module 52 is configured to determine whether the face scanning authority of the first target object is allowed to scan when it is determined that the face feature data matching the face information of the first target object is stored in the server.
  • the display module 53 is configured to display visible information at a preset spatial location when it is determined that the facial scanning authority of the first target object is permitted, wherein the visible information includes at least user profile information of the first target object.
• The first determining module 51, the second determining module 52 and the display module 53 may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
  • the visible information includes extended information of the first target object
  • the display module 53 includes: a determining submodule, a first receiving submodule, and a first displaying submodule.
• The determining submodule is configured to determine whether the first target object has account information of a third-party platform, where the extended information includes the account information; the first receiving submodule is configured to receive, when it is determined that the first target object has account information of a third-party platform, a first display instruction for indicating display of the extended content corresponding to the account information; and the first display submodule is configured to display the extended content at the preset spatial position after the first display instruction is received.
• The foregoing determining submodule, first receiving submodule and first display submodule may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
  • the visible information includes personal dynamic information of the first target object
  • the display module 53 includes: a second receiving submodule and a second displaying submodule.
  • the second receiving sub-module is configured to receive a second display instruction for indicating the display of the personal dynamic information; the second display sub-module is configured to display the personal dynamic information at the preset spatial location after receiving the second display instruction.
• The foregoing second receiving submodule and second display submodule may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
• The information interaction apparatus further includes: a first requesting unit configured to send a first request to the server before the facial information of the first target object is acquired, where the first request carries facial feature data matching the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object. The apparatus further includes: a second requesting unit configured to send a second request to the server, where the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information; and/or a third requesting unit configured to send a third request to the server, where the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information.
• The first requesting unit, second requesting unit and third requesting unit may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
• The first requesting unit includes: a first detecting module, a first sending module, a third determining module, a second detecting module, an acquiring module, and a second sending module.
• The first detecting module is configured to detect a face.
• The first sending module is configured to issue, when the face of the first target object is detected, an instruction for directing the first target object to perform the preset facial action, where the first target object performs a facial action according to the instruction to produce an actual facial action.
• The third determining module is configured to determine whether the actual facial action matches the preset facial action.
• The second detecting module is configured to detect, when the actual facial action is determined to match the preset facial action, whether the face of the first target object has a three-dimensional shape.
• The acquiring module is configured to acquire the facial feature data of the first target object when the face of the first target object is detected to be three-dimensional.
• The second sending module is configured to send the first request to the server according to the facial feature data.
• The second obtaining unit 20 is configured to request that the server deliver the target information according to the facial feature data, and to receive the target information.
• The first detecting module, first sending module, third determining module, second detecting module, acquiring module and second sending module may run in the terminal as part of the apparatus, and the functions they implement may be executed by the processor in the terminal; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
• The information interaction apparatus is further configured to receive, before the interaction information sent by the second target object according to the target information is received and in a case where the face of the first target object is not visible, search information for indicating a search for the target information, where the user profile information includes the search information; and to obtain the target information according to the search information.
• The information interaction apparatus further includes: an identifying unit and an adding unit.
• The identifying unit is configured to identify, after the facial information of the first target object is acquired, the facial contour of the first target object according to that information; the adding unit is configured to add static and/or dynamic three-dimensional image information at a preset position of the facial contour.
• The foregoing identifying unit and adding unit may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
• The publishing unit 40 is configured to perform at least one of the following: publishing the interaction information in voice form; publishing the interaction information in picture form, where picture-form interaction information includes panoramic-picture interaction information; publishing the interaction information in video form; and publishing interaction information in the form of a 3D model.
• The first obtaining unit 10 in this embodiment may be configured to perform step S202 in the embodiment of the present application; the second obtaining unit 20 may be configured to perform step S204; the receiving unit 30 may be configured to perform step S206; and the publishing unit 40 may be configured to perform step S208.
• The first obtaining unit 10 acquires the facial information of the first target object; the second obtaining unit 20 acquires the target information of the first target object according to that facial information, where the target information is used to indicate the social behavior of the first target object; the receiving unit 30 receives the interaction information sent by the second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object; and the publishing unit 40 publishes the interaction information. The purpose of information interaction is thus achieved, realizing the technical effect of simplifying the information interaction process and solving the technical problem that the interaction process of the related art is complicated.
• The above units and modules correspond to the same examples and application scenarios as the corresponding steps, but are not limited to the contents disclosed in the above embodiments. The foregoing modules may run, as part of the apparatus, in a hardware environment as shown in FIG. 1, and may be implemented by software or by hardware, where the hardware environment includes a network environment.
• A terminal for implementing the above information interaction method is further provided; the terminal may be a computer terminal, which may be any computer terminal in a computer terminal group.
  • the foregoing computer terminal may also be replaced with a terminal device such as a mobile terminal.
  • the computer terminal may be located in at least one network device of the plurality of network devices of the computer network.
  • FIG. 18 is a structural block diagram of a terminal according to an embodiment of the present invention.
• The terminal may include one or more processors 181 (only one is shown in the figure), a memory 183, and a transmission device 185, and may further include an input/output device 187.
• The memory 183 can be used to store software programs and modules, such as the program instructions/modules corresponding to the information interaction method and apparatus in the embodiments of the present invention; the processor 181 executes the software programs and modules stored in the memory 183 so as to perform functional applications and data processing, that is, to implement the above information interaction method.
  • Memory 183 may include high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 183 can further include memory remotely located relative to processor 181, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device 185 described above is for receiving or transmitting data via a network, and can also be used for data transmission between the processor and the memory.
  • Specific examples of the above network may include a wired network and a wireless network.
• In one example, the transmission device 185 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device 185 is a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • the memory 183 is used to store an application.
  • the processor 181 can call the application stored in the memory 183 through the transmission device 185 to perform the following steps:
• acquiring facial information of the first target object; acquiring target information of the first target object according to the facial information of the first target object, where the target information is used to indicate a social behavior of the first target object; receiving interaction information sent by the second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object; and publishing the interaction information.
• The processor 181 is further configured to: receive real interaction information in the real scene sent by the second target object according to the target information; and/or receive virtual interaction information in the virtual scene sent by the second target object according to the target information.
• The processor 181 is further configured to: store the real interaction information to the preset storage location after the real interaction information in the real scene sent by the second target object according to the target information is received; and/or store the virtual interaction information to the preset storage location after the virtual interaction information in the virtual scene sent by the second target object according to the target information is received.
  • the processor 181 is further configured to: scan a face of the first target object to obtain face information of the first target object; and display target information in a preset spatial position of the real scene according to the face information of the first target object.
  • the processor 181 is further configured to: determine a current spatial location of the first target object in the real scene; determine a display spatial location of the target information in the real scene according to the current spatial location; and display the target information in the display spatial location.
• The processor 181 is further configured to perform at least one of the following steps: when the target information includes user profile information, displaying the user profile information of the first target object at the first display spatial position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object at the second display spatial position; when the target information includes extended information, displaying the extended information of the first target object at the third display spatial position; and when the target information includes historical interaction information, displaying at the fourth display spatial position the historical interaction information generated by the second target object and the first target object during historical interaction.
• The processor 181 is further configured to: scan the face; determine, when the face of the first target object is scanned, whether facial feature data matching the facial information of the first target object is stored in the server; determine, if such matching facial feature data is stored in the server, whether the face-scanning permission of the first target object allows scanning; and display, if scanning is allowed, the visible information at the preset spatial position.
• The processor 181 is further configured to: determine whether the first target object has account information of a third-party platform, where the extended information includes the account information; receive, if the first target object has account information of a third-party platform, a first display instruction for indicating display of the extended content corresponding to the account information; and display the extended content at the preset spatial position after the first display instruction is received.
  • the processor 181 is further configured to: receive a second display instruction for indicating the display of the personal dynamic information; and display the personal dynamic information at the preset spatial location after receiving the second display instruction.
• The processor 181 is further configured to: send a first request to the server before the facial information of the first target object is acquired, where the first request carries facial feature data matching the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object. The processor 181 is further configured to perform at least one of the following: sending a second request to the server, where the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information; and/or sending a third request to the server, where the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information.
• The processor 181 is further configured to: detect a face; issue, when the face of the first target object is detected, an instruction for directing the first target object to perform the preset facial action, where the first target object performs a facial action according to the instruction to produce an actual facial action; determine whether the actual facial action matches the preset facial action; detect, if they match, whether the face of the first target object has a three-dimensional shape; acquire the facial feature data of the first target object when the face is detected to be three-dimensional; and send a first request to the server according to the facial feature data, the server responding to the first request and storing the facial feature data of the first target object. Acquiring the target information of the first target object according to the facial information of the first target object then includes: requesting that the server deliver the target information according to the facial feature data, and receiving the target information.
• The processor 181 is further configured to: receive, before the interaction information sent by the second target object according to the target information is received and in a case where the face of the first target object is not visible, search information for indicating a search for the target information, where the user profile information includes the search information; and obtain the target information according to the search information.
• The processor 181 is further configured to: identify, after the facial information of the first target object is acquired, the facial contour of the first target object according to that information; and add static and/or dynamic three-dimensional image information at a preset position of the facial contour.
• The processor 181 is further configured to perform at least one of the following steps: publishing the interaction information in voice form; publishing the interaction information in picture form, where picture-form interaction information includes panoramic-picture interaction information; publishing the interaction information in video form; and publishing interaction information in the form of a 3D model.
• An embodiment of the present invention provides an information interaction method: facial information of the first target object is acquired; target information of the first target object is acquired according to that facial information, where the target information is used to indicate a social behavior of the first target object; interaction information sent by the second target object according to the target information is received, where the interaction information is used to indicate that the second target object interacts with the first target object; and the interaction information is published. The purpose of information interaction is achieved, realizing the technical effect of simplifying the interaction process and thereby solving the technical problem that the interaction process of the related art is complicated.
• FIG. 18 is only schematic; the terminal can be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
  • FIG. 18 does not limit the structure of the above electronic device.
• The terminal may also include more or fewer components (such as a network interface, display device, etc.) than shown in FIG. 18, or have a different configuration from that shown in FIG. 18.
  • Embodiments of the present invention also provide a storage medium.
• The foregoing storage medium may store program code for executing the steps of the information interaction method provided by the foregoing method embodiments.
  • the foregoing storage medium may be located in any one of the computer terminal groups in the computer network, or in any one of the mobile terminal groups.
  • the storage medium is arranged to store program code for performing the following steps:
• acquiring facial information of the first target object; acquiring target information of the first target object according to the facial information of the first target object, where the target information is used to indicate a social behavior of the first target object; receiving interaction information sent by the second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object; and publishing the interaction information.
• The storage medium is further configured to store program code for performing the following steps: receiving real interaction information in the real scene sent by the second target object according to the target information; and/or receiving virtual interaction information in the virtual scene sent by the second target object according to the target information.
• The storage medium is further configured to store program code for performing the following steps: storing the real interaction information to the preset storage location after the real interaction information in the real scene sent by the second target object according to the target information is received; and/or storing the virtual interaction information to the preset storage location after the virtual interaction information in the virtual scene sent by the second target object according to the target information is received.
• The storage medium is further configured to store program code for performing the following steps: scanning the face of the first target object to obtain the facial information of the first target object; and displaying the target information at a preset spatial position of the real scene according to the facial information of the first target object.
• The storage medium is further configured to store program code for performing the following steps: determining the current spatial position of the first target object in the real scene; determining the display spatial position of the target information in the real scene according to the current spatial position; and displaying the target information at the display spatial position.
• The storage medium is further configured to store program code for performing at least one of the following steps: when the target information includes user profile information, displaying the user profile information of the first target object at the first display spatial position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object at the second display spatial position; when the target information includes extended information, displaying the extended information of the first target object at the third display spatial position; and when the target information includes historical interaction information, displaying at the fourth display spatial position the historical interaction information generated by the second target object and the first target object during historical interaction.
• The storage medium is further configured to store program code for performing the following steps: determining, when the face of the first target object is scanned, whether facial feature data matching the facial information of the first target object is stored in the server; determining, if such data is stored, whether the face-scanning permission of the first target object allows scanning; and displaying, if scanning is allowed, the visible information at the preset spatial position, where the visible information includes at least the user profile information of the first target object.
• The storage medium is further configured to store program code for performing the following steps: determining whether the first target object has account information of a third-party platform, where the extended information includes the account information; receiving, if the first target object has account information of a third-party platform, a first display instruction for indicating display of the extended content corresponding to the account information; and displaying the extended content at the preset spatial position after the first display instruction is received.
• The storage medium is further configured to store program code for performing the following steps: receiving a second display instruction for indicating display of the personal dynamic information; and displaying the personal dynamic information at the preset spatial position after the second display instruction is received.
  • the storage medium is further configured to store program code for performing the following step: sending a first request to the server before acquiring the facial information of the first target object, wherein the first request carries the facial feature data of the first target object, the facial information matches the facial feature data, and the server responds to the first request and stores the facial feature data of the first target object.
  • the storage medium is further configured to store program code for performing at least one of the following steps: sending a second request to the server, wherein the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information of the first target object; and/or sending a third request to the server, wherein the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information of the first target object (the fourth sketch following this list illustrates these registration requests).
  • the storage medium is further configured to store program code for performing the following steps: in the case of detecting the face of the first target object, issuing an instruction for instructing the first target object to perform a preset facial action, so that the first target object performs a facial action according to the instruction to obtain an actual facial action; determining whether the actual facial action matches the preset facial action; if it is determined that the actual facial action matches the preset facial action, detecting whether the face of the first target object is in a three-dimensional form; if the face of the first target object is detected to be in a three-dimensional form, acquiring the facial feature data of the first target object; and sending a first request to the server according to the facial feature data, the server responding to the first request and storing the facial feature data of the first target object, wherein acquiring the target information of the first target object according to the facial information of the first target object comprises: requesting delivery of the target information according to the facial feature data, and receiving the target information (see the fifth sketch following this list).
  • the storage medium is further configured to store program code for performing the following steps: before receiving the interaction information sent by the second target object according to the target information, in the case that the face of the first target object is not visible, receiving search information for indicating a search for the target information, wherein the user profile information includes the search information; and acquiring the target information according to the search information.
  • the storage medium is further configured to store program code for performing the following steps: after acquiring the facial information of the first target object, identifying a facial contour of the first target object according to the facial information of the first target object; and adding static and/or dynamic three-dimensional image information at a preset position of the facial contour (see the sixth sketch following this list).
  • the storage medium is further configured to store program code for performing the following steps: publishing interaction information in the form of voice; publishing interaction information in the form of a picture, wherein the picture-form interaction information includes interaction information in the form of a panoramic picture; publishing interaction information in the form of video; and publishing interaction information in the form of a 3D model.
  • the foregoing storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk, or an optical disc.
  • the integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in the above-described computer-readable storage medium.
  • the part of the technical solution of the present invention that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including a number of instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • the disclosed client may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • in the embodiments of the present invention, the target information of the first target object is acquired according to the facial information of the first target object, wherein the target information is used to indicate the social behavior of the first target object; the interaction information sent by the second target object according to the target information is received, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and the interaction information is published. Compared with interaction based on a virtual account, the interaction entry is mainly based on facial information, which simplifies the process of information interaction and achieves the purpose of information interaction, thereby realizing the technical effect of simplifying the information interaction process and solving the technical problem that the information interaction process in the related art is complicated.
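The sketches below illustrate selected steps described above. They are minimal, hedged illustrations written in Python: every function name, data shape, threshold, and offset in them is an assumption made for the example, not something specified by the disclosure.

First sketch (display spatial location): one plausible way to derive the display spatial location of the target information from the current spatial location of the first target object is a fixed offset in camera space, so the information floats near, but does not occlude, the face. The offset value here is purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class Vec3:
        x: float
        y: float
        z: float

    # Hypothetical offset (metres, camera space): render the target
    # information slightly above and in front of the detected face.
    DISPLAY_OFFSET = Vec3(0.0, 0.25, -0.1)

    def display_location(current: Vec3) -> Vec3:
        """Derive the display spatial location of the target information
        from the current spatial location of the first target object."""
        return Vec3(current.x + DISPLAY_OFFSET.x,
                    current.y + DISPLAY_OFFSET.y,
                    current.z + DISPLAY_OFFSET.z)

    face_position = Vec3(0.1, 1.6, 2.0)     # e.g. reported by the AR tracker
    print(display_location(face_position))  # Vec3(x=0.1, y=1.85, z=1.9)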
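Second sketch (routing information types to display slots): the per-type display positions can be modelled as a simple lookup table; the slot indices and field names are hypothetical, since the disclosure only requires that each information type has a distinct display position.

    # Hypothetical mapping from information type to its display slot
    # (1 = first display spatial location, and so on).
    DISPLAY_SLOTS = {
        "user_profile": 1,
        "personal_dynamic": 2,
        "extended": 3,
        "historical": 4,
    }

    def route_target_info(target_info: dict) -> list:
        """Return (slot, payload) pairs for every information type present."""
        return [(DISPLAY_SLOTS[kind], payload)
                for kind, payload in target_info.items()
                if kind in DISPLAY_SLOTS]

    print(route_target_info({"user_profile": {"name": "A"},
                             "historical": ["waved hello"]}))
    # -> [(1, {'name': 'A'}), (4, ['waved hello'])]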
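Third sketch (matching and face-scan permission): a toy server-side lookup that matches scanned features against stored facial feature data and honours the face-scan permission. The cosine-similarity matcher, threshold, and in-memory stores stand in for a real face-recognition model and database.

    from typing import Optional
    import numpy as np

    rng = np.random.default_rng(0)
    FEATURE_DB = {"user_42": rng.normal(size=128)}  # stored facial feature data
    SCAN_PERMISSION = {"user_42": True}             # face-scan permission flags
    MATCH_THRESHOLD = 0.6                           # illustrative threshold

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def lookup_visible_user(scanned: np.ndarray) -> Optional[str]:
        """Return the matching user only if matching facial feature data is
        stored and that user's face-scan permission allows scanning."""
        for user_id, stored in FEATURE_DB.items():
            if cosine(scanned, stored) >= MATCH_THRESHOLD:
                return user_id if SCAN_PERMISSION.get(user_id, False) else None
        return None

    probe = FEATURE_DB["user_42"] + rng.normal(scale=0.05, size=128)
    print(lookup_visible_user(probe))                 # 'user_42'
    print(lookup_visible_user(rng.normal(size=128)))  # None (almost surely)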
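Fourth sketch (the first, second, and third requests): an in-memory stand-in for the server that registers the facial feature data, user profile information, and extended information. The JSON message shapes are purely illustrative, as the disclosure does not define a wire format.

    import json

    class ToyServer:
        """In-memory stand-in for the server side of the three requests."""
        def __init__(self):
            self.faces, self.profiles, self.extended = {}, {}, {}

        def handle(self, raw: str) -> str:
            msg = json.loads(raw)
            uid = msg["user_id"]
            if msg["type"] == "first":      # carries facial feature data
                self.faces[uid] = msg["features"]
            elif msg["type"] == "second":   # carries user profile information
                self.profiles[uid] = msg["profile"]
            elif msg["type"] == "third":    # carries extended information
                self.extended[uid] = msg["extended"]
            return json.dumps({"ok": True})

    server = ToyServer()
    server.handle(json.dumps({"type": "first", "user_id": "u1",
                              "features": [0.1, 0.2, 0.3]}))
    server.handle(json.dumps({"type": "second", "user_id": "u1",
                              "profile": {"name": "A"}}))
    server.handle(json.dumps({"type": "third", "user_id": "u1",
                              "extended": {"weibo": "@a"}}))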
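Fifth sketch (facial-action liveness check): only the control flow follows the described steps — prompt a preset facial action, verify the performed action matches, then verify the face is in a three-dimensional form. The two detector functions are placeholders for real computer-vision components.

    PRESET_ACTIONS = ("blink", "open_mouth", "turn_head_left")

    def detect_performed_action(frames) -> str:
        # Placeholder: a real system would classify the action from
        # the camera frames; here we pretend a blink was observed.
        return "blink"

    def is_three_dimensional(frames) -> bool:
        # Placeholder: a real system might test parallax or depth to
        # reject a flat photograph held up to the camera.
        return True

    def liveness_check(frames, prompted_action: str) -> bool:
        """Accept only if the actual facial action matches the prompted
        preset action and the face is in a three-dimensional form."""
        if detect_performed_action(frames) != prompted_action:
            return False
        return is_three_dimensional(frames)

    frames = []  # stand-in for captured camera frames
    if liveness_check(frames, prompted_action="blink"):
        print("Live face: acquire feature data and send the first request")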
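Sixth sketch (facial contour and overlay): OpenCV's stock Haar cascade serves only as a convenient stand-in for the unspecified contour-recognition step, and a 2-D marker stands in for the static and/or dynamic three-dimensional image information anchored at a preset position relative to the contour.

    import cv2
    import numpy as np

    # Frontal-face Haar cascade shipped with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def add_overlay(frame: np.ndarray) -> np.ndarray:
        """Outline each detected facial contour and draw a marker at a
        preset position (centred just above the contour)."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.circle(frame, (x + w // 2, max(y - 10, 0)), 8, (255, 0, 0), -1)
        return frame

    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
    add_overlay(frame)  # no face in a black frame, so nothing is drawn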

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present invention relate to a method, apparatus, and storage medium for information interaction. The method comprises: acquiring facial information of a first target object; acquiring target information of the first target object according to the facial information of the first target object, the target information being used to indicate a social behavior of the first target object; receiving interaction information sent by a second target object according to the target information, the interaction information being used to indicate that the second target object and the first target object interact; and publishing the interaction information. The embodiments of the present invention solve the technical problem in the related art that the information interaction process is complicated.
PCT/CN2017/115058 2016-11-25 2017-12-07 Method, apparatus and storage medium for information interaction WO2018095439A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611064419.9A 2016-11-25 2016-11-25 Information interaction method and apparatus
CN201611064419.9 2016-11-25

Publications (1)

Publication Number Publication Date
WO2018095439A1 true WO2018095439A1 (fr) 2018-05-31

Family

ID=62194802

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/115058 WO2018095439A1 (fr) 2016-11-25 2017-12-07 Method, apparatus and storage medium for information interaction

Country Status (2)

Country Link
CN (1) CN108108012B (fr)
WO (1) WO2018095439A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367402A (zh) * 2018-12-26 2020-07-03 阿里巴巴集团控股有限公司 Task triggering method, interaction device, and computer device
CN111385337A (zh) * 2018-12-29 2020-07-07 阿里巴巴集团控股有限公司 Cross-space interaction method, apparatus, device, server, and system
WO2020236391A1 (fr) 2019-05-17 2020-11-26 Sensata Technologies, Inc. Wireless vehicle area network having connected brake sensors
CN112306254A (zh) * 2019-07-31 2021-02-02 北京搜狗科技发展有限公司 Expression processing method, apparatus, and medium
CN112817830A (zh) * 2021-03-01 2021-05-18 北京车和家信息技术有限公司 Method, apparatus, medium, device, display system, and vehicle for displaying setting items
CN114697686A (zh) * 2020-12-25 2022-07-01 北京达佳互联信息技术有限公司 Online interaction method, apparatus, server, and storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109274575B (zh) * 2018-08-08 2020-07-24 阿里巴巴集团控股有限公司 Message sending method and apparatus, and electronic device
CN109276887B (zh) * 2018-09-21 2020-06-30 腾讯科技(深圳)有限公司 Information display method, apparatus, device, and storage medium for virtual objects
CN110650081A (zh) * 2019-08-22 2020-01-03 南京洁源电力科技发展有限公司 Virtual reality instant messaging method
CN111093033B (zh) * 2019-12-31 2021-08-06 维沃移动通信有限公司 Information processing method and device
CN111240471B (zh) * 2019-12-31 2023-02-03 维沃移动通信有限公司 Information interaction method and wearable device
CN111355644B (zh) * 2020-02-19 2021-08-20 珠海格力电器股份有限公司 Method and system for information interaction between different spaces
CN112235181A (zh) * 2020-08-29 2021-01-15 上海量明科技发展有限公司 Weak social interaction method, client, and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080052312A1 (en) * 2006-08-23 2008-02-28 Microsoft Corporation Image-Based Face Search
US20100277611A1 (en) * 2009-05-01 2010-11-04 Adam Holt Automatic content tagging, such as tagging digital images via a wireless cellular network using metadata and facial recognition
CN103970804A (zh) * 2013-02-06 2014-08-06 腾讯科技(深圳)有限公司 Information query method and apparatus

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI452527B (zh) * 2011-07-06 2014-09-11 Univ Nat Chiao Tung Application execution method and system based on augmented reality and cloud computing
US20130156274A1 (en) * 2011-12-19 2013-06-20 Microsoft Corporation Using photograph to initiate and perform action
KR20140015946A (ko) * 2012-07-27 2014-02-07 김소영 System and method for promoting politicians using augmented reality
CN103870485B (zh) * 2012-12-13 2017-04-26 华为终端有限公司 Method and device for implementing augmented reality applications
CN104426933B (zh) * 2013-08-23 2018-01-23 华为终端(东莞)有限公司 Method, apparatus, and system for filtering augmented reality content
CN103412953A (zh) * 2013-08-30 2013-11-27 苏州跨界软件科技有限公司 Social interaction method based on augmented reality
JP6459972B2 (ja) * 2013-11-13 2019-01-30 ソニー株式会社 Display control device, display control method, and program
CN103942049B (zh) * 2014-04-14 2018-09-07 百度在线网络技术(北京)有限公司 Augmented reality implementation method, client device, and server
CN105323252A (zh) * 2015-11-16 2016-02-10 上海璟世数字科技有限公司 Method, system, and terminal for implementing interaction based on augmented reality technology
CN105320282B (zh) * 2015-12-02 2018-12-25 广州经信纬通信息科技有限公司 Image recognition solution based on augmented reality
CN105955456B (zh) * 2016-04-15 2018-09-04 深圳超多维科技有限公司 Method, apparatus, and smart wearable device for fusing virtual reality and augmented reality
CN106100983A (zh) * 2016-08-30 2016-11-09 黄在鑫 Mobile social network system based on augmented reality and GPS positioning technology

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080052312A1 (en) * 2006-08-23 2008-02-28 Microsoft Corporation Image-Based Face Search
US20100277611A1 (en) * 2009-05-01 2010-11-04 Adam Holt Automatic content tagging, such as tagging digital images via a wireless cellular network using metadata and facial recognition
CN103970804A (zh) * 2013-02-06 2014-08-06 腾讯科技(深圳)有限公司 Information query method and apparatus

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367402A (zh) * 2018-12-26 2020-07-03 阿里巴巴集团控股有限公司 Task triggering method, interaction device, and computer device
CN111367402B (zh) * 2018-12-26 2023-04-18 阿里巴巴集团控股有限公司 Task triggering method, interaction device, and computer device
CN111385337A (zh) * 2018-12-29 2020-07-07 阿里巴巴集团控股有限公司 Cross-space interaction method, apparatus, device, server, and system
CN111385337B (zh) * 2018-12-29 2023-04-07 阿里巴巴集团控股有限公司 Cross-space interaction method, apparatus, device, server, and system
WO2020236391A1 (fr) 2019-05-17 2020-11-26 Sensata Technologies, Inc. Wireless vehicle area network having connected brake sensors
CN112306254A (zh) * 2019-07-31 2021-02-02 北京搜狗科技发展有限公司 Expression processing method, apparatus, and medium
CN114697686A (zh) * 2020-12-25 2022-07-01 北京达佳互联信息技术有限公司 Online interaction method, apparatus, server, and storage medium
CN114697686B (zh) * 2020-12-25 2023-11-21 北京达佳互联信息技术有限公司 Online interaction method, apparatus, server, and storage medium
CN112817830A (zh) * 2021-03-01 2021-05-18 北京车和家信息技术有限公司 Method, apparatus, medium, device, display system, and vehicle for displaying setting items
CN112817830B (zh) * 2021-03-01 2024-05-07 北京车和家信息技术有限公司 Method, apparatus, medium, device, display system, and vehicle for displaying setting items

Also Published As

Publication number Publication date
CN108108012B (zh) 2019-12-06
CN108108012A (zh) 2018-06-01

Similar Documents

Publication Publication Date Title
WO2018095439A1 (fr) Method, apparatus and storage medium for information interaction
US11722537B2 (en) Communication sessions between computing devices using dynamically customizable interaction environments
EP3713159B1 (fr) Gallery of messages with a shared interest
US10402825B2 (en) Device, system, and method of enhancing user privacy and security within a location-based virtual social networking context
CN106716306B (zh) Synchronizing multiple head-mounted displays to a unified space and correlating movement of objects in the unified space
US20170262154A1 (en) Systems and methods for providing user tagging of content within a virtual scene
KR102077354B1 (ko) Communication system
JP6229314B2 (ja) Information processing device, display control method, and program
EP2731348A2 (fr) Apparatus and method for providing a social network service using augmented reality
CN109691054A (zh) Animated user identifier
US20130151603A1 (en) Persistent customized social media environment
JP7473556B2 (ja) Consent confirmation
KR102030322B1 (ko) Methods, systems, and media for detecting stereoscopic videos by generating fingerprints for multiple portions of a video frame
EP3272127B1 (fr) Video-based social interaction system
CN109155024A (zh) Sharing content with users and receiving devices
US11151381B2 (en) Proximity-based content sharing as an augmentation for imagery captured by a camera of a device
US20230353616A1 (en) Communication Sessions Between Devices Using Customizable Interaction Environments And Physical Location Determination
CN106464976A (zh) Display device, user terminal device, server, and control methods thereof
KR20230098114A (ko) Method and apparatus for providing a location-based avatar messenger service
WO2024037001A1 (fr) Interaction data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
WO2022161289A1 (fr) Identity information display method and apparatus, terminal, server, and storage medium
CN118805187A (zh) Transaction verification in applications
CN118354134A (zh) Video playing method and apparatus, electronic device, and storage medium
WO2018005199A1 (fr) Systems and methods for providing user tagging of content within a virtual scene

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17875022

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17875022

Country of ref document: EP

Kind code of ref document: A1