CN110089076B - Method and device for realizing information interaction

Publication number: CN110089076B (application CN201780054913.3A)
Authority: CN (China)
Prior art keywords: image, user, ugc, server, page
Inventors: 覃冬, 郑志铨, 白和军, 邓长友, 肖鸿志, 余宗桥, 俞尚, 陈宇, 冯绪
Assignee: Tencent Technology Shenzhen Co Ltd (application filed by Tencent Technology Shenzhen Co Ltd)
Original language: Chinese (zh); other version: CN110089076A (application publication)
Legal status: Active (granted)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/06 Authentication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/36 User authentication by graphic or iconic representation

Abstract

This application discloses a method for implementing information interaction, comprising: receiving a first image captured by an application client and sent by the application client, together with the ID of a first user logged in to the application client; searching a stored image set for a second image whose image features match those of the first image; and, when such a second image is found, sending the ID of the second image and the ID of the first user to a UGC server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in the social relation chain corresponding to the ID of the first user, obtains, according to the ID of the second image and the ID of the second user, UGC published for the second image by a third user, and sends the UGC and the ID of the third user who published the UGC to the application client for display.

Description

Method and device for realizing information interaction
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method and an apparatus for implementing information interaction.
Background
A social networking service is a platform for building social networks or social relations among people who share, for example, interests, activities, backgrounds, or real-life connections. A social networking service consists of a representation of each user (typically a profile), his or her social links, and a variety of additional services. Most social networking services are web-based online community services that provide means for users to interact over the Internet, such as e-mail and instant messaging. Social networking sites allow users to share pictures, articles, activities, events, and so on within the pages they visit.
Disclosure of Invention
Embodiments of this application provide a method and an apparatus for implementing information interaction, which improve the efficiency of information interaction and reduce the number of interactions between devices.
A method for implementing information interaction includes the following steps:
receiving a first image captured by an application client and sent by the application client, and an identifier (ID) of a first user logged in to the application client;
searching a stored image set for a second image whose image features match those of the first image; and
when a second image whose image features match those of the first image is found, sending the ID of the second image and the ID of the first user to a user generated content (UGC) server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in a social relation chain corresponding to the ID of the first user, obtains, according to the ID of the second image and the ID of the second user, UGC published for the second image by a third user, and sends the UGC and the ID of the third user who published the UGC to the application client for display.
A method for implementing information interaction includes the following steps:
obtaining a first image in response to an operation performed by a first user on an image acquisition function in a first page;
sending the first image and the ID of the first user to an image server, so that the image server searches a stored image set for a second image whose image features match those of the first image and sends the ID of the second image and the ID of the first user to a UGC server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in a social relation chain corresponding to the ID of the first user, and obtains, according to the ID of the second image and the ID of the second user, first UGC published for the second image by a third user;
obtaining the first UGC and the ID of the third user from the UGC server; and
generating a second page, and displaying the first image, the ID of the third user, and the first UGC in the second page.
An apparatus for implementing information interaction includes:
a receiving module, configured to receive a first image captured by an application client and sent by the application client, and an ID of a first user logged in to the application client;
a matching module, configured to search a stored image set for a second image whose image features match those of the first image; and
a processing module, configured to: when a second image whose image features match those of the first image is found, send the ID of the second image and the ID of the first user to a UGC server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in a social relation chain corresponding to the ID of the first user, obtains, according to the ID of the second image and the ID of the second user, UGC published for the second image by a third user, and sends the UGC and the ID of the third user who published the UGC to the application client for display.
An apparatus for implementing information interaction includes:
an obtaining module, configured to obtain a first image in response to an operation performed by a first user on an image acquisition function in a first page;
a sending module, configured to send the first image and the ID of the first user to an image server, so that the image server searches a stored image set for a second image whose image features match those of the first image and sends the ID of the second image and the ID of the first user to a UGC server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in a social relation chain corresponding to the ID of the first user, and obtains, according to the ID of the second image and the ID of the second user, first UGC published for the second image by a third user; and
a display module, configured to obtain the first UGC and the ID of the third user from the UGC server, generate a second page, and display the first image, the ID of the third user, and the first UGC in the second page.
An apparatus for implementing information interaction includes: a processor and a memory;
the processor executes machine-readable instructions stored in the memory to:
obtain a first image in response to an operation performed by a first user on an image acquisition function in a first page;
send the first image and the identifier (ID) of the first user to an image server, so that the image server searches a stored image set for a second image whose image features match those of the first image and sends the ID of the second image and the ID of the first user to a user generated content (UGC) server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in a social relation chain corresponding to the ID of the first user, and obtains, according to the ID of the second image and the ID of the second user, first UGC published for the second image by a third user; and
obtain the first UGC and the ID of the third user from the UGC server, generate a second page, and display the first image, the ID of the third user, and the first UGC in the second page.
A method for implementing information interaction is applied to an image server, where the image server includes a processor and a memory, and the processor executes machine-readable instructions stored in the memory to:
receive a first image captured by an application client and sent by the application client, and an identifier (ID) of a first user logged in to the application client;
search a stored image set for a second image whose image features match those of the first image; and
when a second image whose image features match those of the first image is found, send the ID of the second image and the ID of the first user to a user generated content (UGC) server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in a social relation chain corresponding to the ID of the first user, obtains, according to the ID of the second image and the ID of the second user, UGC published for the second image by a third user, and sends the UGC and the ID of the third user who published the UGC to the application client for display.
A method for implementing information interaction is applied to an application client, where the application client includes a processor and a memory, and the processor executes machine-readable instructions stored in the memory to:
obtain a first image in response to an operation performed by a first user on an image acquisition function in a first page;
send the first image and the identifier (ID) of the first user to an image server, so that the image server searches a stored image set for a second image whose image features match those of the first image and sends the ID of the second image and the ID of the first user to a user generated content (UGC) server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in a social relation chain corresponding to the ID of the first user, and obtains, according to the ID of the second image and the ID of the second user, first UGC published for the second image by a third user; and
obtain the first UGC and the ID of the third user from the UGC server, generate a second page, and display the first image, the ID of the third user, and the first UGC in the second page.
In the embodiments of this application, a first image captured by an application client and sent by the application client, together with the ID of a first user logged in to the application client, can be received; a stored image set is searched for a second image whose image features match those of the first image; and when such a second image is found, the ID of the second image and the ID of the first user are sent to a UGC server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in the social relation chain corresponding to the ID of the first user, obtains, according to the ID of the second image and the ID of the second user, UGC published for the second image by a third user, and sends the UGC and the ID of the third user who published the UGC to the application client for display. With the technical solution provided by the embodiments of this application, UGC published by users for a given image can be quickly obtained through interaction between the devices, which improves the efficiency of information interaction, reduces the number of interactions between devices, and saves system time and resources.
Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings required for describing the embodiments are briefly introduced below. The following drawings show only some embodiments of this application, and a person skilled in the art may derive other drawings from them without creative effort.
FIG. 1A is a schematic block diagram of an implementation environment according to embodiments of this application;
FIG. 1B is a schematic flowchart of a method for implementing information interaction according to an embodiment of this application;
FIG. 2A is a schematic flowchart of a method for implementing information interaction according to an embodiment of this application;
FIG. 2B is a schematic flowchart of a method for implementing information interaction according to an embodiment of this application;
FIG. 3A is a schematic flowchart of a method for implementing information interaction according to an embodiment of this application;
FIG. 3B is a schematic flowchart of a method for implementing information interaction according to an embodiment of this application;
FIG. 4A is a schematic diagram of a page according to an embodiment of this application;
FIG. 4B is a schematic diagram of a dynamic information display page according to an embodiment of this application;
FIG. 4C is a schematic diagram of a shooting interface according to an embodiment of this application;
FIG. 4D is a schematic diagram of an interaction page according to an embodiment of this application;
FIG. 4E is a schematic diagram of an interaction page according to an embodiment of this application;
FIG. 4F is a schematic diagram of an interaction page according to an embodiment of this application;
FIG. 4G is a schematic diagram of an interaction page according to an embodiment of this application;
FIG. 4H is a schematic diagram of an interaction page according to an embodiment of this application;
FIG. 5 is a schematic structural diagram of an apparatus for implementing information interaction according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of an apparatus for implementing information interaction according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of an apparatus for implementing information interaction according to an embodiment of this application;
FIG. 8 is a schematic structural diagram of an apparatus for implementing information interaction according to an embodiment of this application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in detail with reference to the accompanying drawings.
Fig. 1A is a schematic structural diagram of an implementation environment according to embodiments of the present application. As shown in fig. 1A, the implementation environment includes: an application client 110, an image server 120, a User Generated Content (UGC) server 130, and a social application server 140.
The application client 110 may be a terminal device such as a PC, a notebook computer, a mobile phone, or a tablet computer, or may be an application running on the terminal device, such as a social application.
The image server 120 may be a server, a server cluster composed of several servers, or a cloud computing service center. The image server 120 may interact with the application client 110, UGC server 130, and social application server 140 for storing, parsing, matching, and providing images sent by users.
The UGC server 130 may be a server, a server cluster composed of several servers, or a cloud computing service center. The UGC server 130 can interact with the application client 110, image server 120, and social application server 140 for storing, matching, and providing UGC.
The social application server 140 may be a server, a server cluster composed of several servers, or a cloud computing service center. The social application server 140 may interact with the application client 110, the UGC server 130, and the image server 120, and is configured to obtain images and UGC, generate dynamic information (Feeds), and send the Feeds to corresponding application clients.
Fig. 1B is a schematic flowchart of a method for implementing information interaction according to an embodiment of the present disclosure. As shown in fig. 1B, the method includes the steps of:
step 201, receiving a first image acquired by an application client and sent by the application client and an Identifier (ID) of a first user logged in to the application client.
Step 202, searching a second image matched with the image characteristics of the first image in the stored image set.
Step 203, when finding a second image matched with the image feature of the first image, sending the ID of the second image and the ID of the first user to a user original content (UGC) server, so that the UGC server obtains the ID of the second user in a social relation chain corresponding to the ID of the first user from a social application server according to the ID of the first user, obtains UGC published to the second image by the third user according to the ID of the second image and the ID of the second user, and sends the UGC and the ID of a third user publishing the UGC to the application client for display.
Through the above interaction between the devices, UGC published by users for the matched image can be quickly obtained, which improves the efficiency of information interaction, reduces the number of interactions between devices, and saves system time and resources.
Fig. 2A is a schematic flowchart of a method for implementing information interaction according to an embodiment of the present disclosure. As shown in fig. 2A, in the present embodiment, the method includes the following steps.
In step 201A, the application client obtains the first image in response to an operation performed by the user on an image acquisition control in the first page.
In this embodiment, the application client may also analyze the first image to obtain its image features. Therefore, the "first image" sent to the image server may be the first image itself as captured by the application client, or the image features and/or the values of the image features that the application client obtains by analyzing the first image.
In this implementation, the image features of an image, for example a picture, may include parameter values used to describe and identify that image.
In step 202A, the application client sends the first image and the ID of the user logged in to the application client to the middle layer device.
In embodiments of this application, the middle layer device may be used to adapt and optimize the communication among the application client, the image server, and the UGC server.
In step 203A, the middle layer device sends the first image to the image server.
In step 204A, the image server searches the stored image set for a second image whose image features match those of the first image; if such a second image is found, step 205A is performed; otherwise, step 207A is performed.
If the image server receives the first image itself, it analyzes the first image to obtain its image features, and then searches the stored image set for a second image whose image-feature value differs from that of the first image by less than a preset threshold. That is, the image server compares the image-feature value of the first image with the image-feature value of each image in the stored image set one by one, and determines an image whose image-feature value differs from that of the first image by less than the preset threshold to be the second image.
If the image server instead receives the image features and/or the values of the image features of the first image, it compares the image-feature values of the first image with those of each image in the stored image set one by one, and determines an image for which the difference is smaller than the preset threshold to be the second image.
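The threshold comparison above can be summarized in a few lines. The following is a minimal sketch, assuming each image feature is represented as a NumPy vector and the difference is measured as an L2 norm; both choices are simplifications, since the patent only requires that the difference between the two feature values be below a preset threshold.

```python
import numpy as np

def find_second_image(first_image_features, stored_images, threshold=0.5):
    """stored_images: {image_id: feature_vector}. Returns a matching image ID or None."""
    for image_id, stored_features in stored_images.items():
        # Difference between the feature value of the first image and a stored image.
        if np.linalg.norm(first_image_features - stored_features) < threshold:
            return image_id      # second image found (go to step 205A)
    return None                  # no match: return an error code or store the image (step 207A)
```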
In step 205A, the image server obtains the ID of the second image and sends the ID of the second image to the middle layer device.
In step 206A, the middle layer device sends the ID of the second image to the application client for storage.
In step 207A, the image server generates an error code, or saves the first image in the image set and assigns an ID to the first image.
In this step, the image server may assign an ID to the first image and store the ID of the first image together with the image features of the first image. In addition, the ID of the first user may also be stored with them; storing the first user's ID identifies the first user as the creator of the first image.
In step 208A, the image server sends the error code or the ID of the first image to the middle layer device.
In step 209A, the middle layer device sends the error code or the ID of the first image to the application client for storage.
When the application client receives the error code, it knows that the image server does not store a second image matching the first image, and the process ends.
When the application client receives the ID of the first image, it stores the ID of the first image for later use in social interaction.
In this embodiment, the application client and the image server communicate through the middle layer device; in another embodiment of this application, however, the application client and the image server may communicate directly rather than through the middle layer device.
Fig. 2B is a schematic flowchart of a method for implementing information interaction according to an embodiment of the present disclosure. As shown in fig. 2B, in the present embodiment, after the image server sends the ID of the second image to the middle layer device in step 205A, the method includes the following steps.
In step 201B, the middle layer device sends the ID of the second image and the login status of the user to the UGC server.
In this step, the login state may be information, such as a ticket, that the social application server allocates to the user after the user logs in to the application client and that is used to authenticate the user. Typically, the login state is stored in the application client and is sent to the UGC server through the middle layer device and/or the image server.
In step 202B, the UGC server checks the user's permission by using the user's login state; when it determines that the user has been granted permission to operate on UGC, step 203B is performed; otherwise, the flow ends.
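As a concrete illustration of steps 201B and 202B, the sketch below validates a login-state ticket before allowing UGC operations. The HMAC-signed ticket format and the shared secret are assumptions made for the example; the patent only states that the login state is information allocated by the social application server for authenticating the user.

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret-between-servers"   # hypothetical key known to both servers

def issue_login_state(user_id: str) -> str:
    """Ticket the social application server might allocate after the user logs in."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def ugc_permission_granted(user_id: str, login_state: str) -> bool:
    """UGC-server side check (step 202B): continue only if the ticket is valid."""
    expected = issue_login_state(user_id)
    return hmac.compare_digest(expected, login_state)
```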
In step 203B, the UGC server sends a request carrying the ID of the first user to the social application server, to request the ID of the second user in the social relation chain corresponding to the ID of the first user.
In step 204B, the social application server sends the ID of the second user in the social relation chain corresponding to the ID of the first user to the UGC server.
In this step, the social application server may return one or more second-user IDs to the UGC server. In this example, the IDs of the second users are User ID1 to User ID4.
In step 205B, the UGC server obtains, according to the ID of the second image and the IDs of the second users, the UGC published for the second image by third users.
In this step, the UGC server may obtain the UGC published for the second image by the third users through the following steps.
First, the UGC server queries Table 1 to obtain the IDs of the users who have published UGC for the second image.
Table 1 (rendered as an image in the original publication) maps the ID of an image to the IDs of the users who have published UGC for that image.
Using Table 1, the IDs of the users who have commented on the second image are found to include User ID1, User ID4, User ID6, User ID7, and User ID8. The IDs of the second users in the social relation chain corresponding to the ID of the first user, obtained by the UGC server, are User ID1 to User ID4. Taking the intersection of these two sets gives the second users who have commented on the second image, namely User ID1 and User ID4; these are the IDs of the third users.
Next, the UGC server searches for the UGC ID corresponding to the index key in table 2 below, using the ID of the second image and the ID of the third user as the index key.
ID of image           ID of third user    UGC ID
ID of second image    User ID1            UGC ID1
...                   ...                 ...
ID of second image    User ID4            UGC ID4

TABLE 2
UGC ID1 and UGC ID4 can be found by looking up Table 2 using (the ID of the second image, User ID1) and (the ID of the second image, User ID4), respectively, as index keys.
The UGC server then retrieves the corresponding UGC1 and UGC4 by using UGC ID1 and UGC ID4, respectively.
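Step 205B can be illustrated with in-memory dictionaries standing in for the UGC server's storage of Table 1 and Table 2; the concrete data values below simply mirror the example above and are not part of the patent.

```python
# Table 1: image ID -> IDs of users who have published UGC for that image.
table1 = {"second_image": ["User ID1", "User ID4", "User ID6", "User ID7", "User ID8"]}

# Table 2: (image ID, third-user ID) -> UGC ID.
table2 = {("second_image", "User ID1"): "UGC ID1",
          ("second_image", "User ID4"): "UGC ID4"}

# UGC ID -> the UGC content itself.
ugc_store = {"UGC ID1": "UGC1", "UGC ID4": "UGC4"}

def ugc_published_by_friends(second_image_id, second_user_ids):
    commenters = table1.get(second_image_id, [])
    # Third users: intersection of the commenters and the first user's relation chain.
    third_users = [uid for uid in second_user_ids if uid in commenters]
    # Look up Table 2 with (image ID, user ID) as the index key, then fetch the UGC.
    return {uid: ugc_store[table2[(second_image_id, uid)]] for uid in third_users}

# The first user's relation chain (User ID1 to User ID4) yields Table 3's content:
print(ugc_published_by_friends("second_image", ["User ID1", "User ID2", "User ID3", "User ID4"]))
# {'User ID1': 'UGC1', 'User ID4': 'UGC4'}
```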
In step 206B, the UGC server sends the ID of the third user and the UGC published by the third user for the second image to the application client in the form of a list.
In this step, the UGC server may generate a list as shown in table 3 below and send the list to the application client.
ID of third user    UGC
User ID1            UGC1
User ID4            UGC4

TABLE 3
In this step, after receiving Table 3, the application client obtains the IDs of the third users, that is, User ID1 and User ID4. It then obtains the corresponding users' avatars from the social application server according to User ID1 and User ID4 and displays each third user's information, such as the avatar, accordingly: User ID1 is displayed together with UGC1, and User ID4 together with UGC4.
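On the client side, consuming the Table 3 list amounts to pairing each third user's ID and UGC with an avatar fetched from the social application server. The sketch below assumes a fetch_avatar callable standing in for whatever avatar interface the social application server exposes; that interface is not defined in the patent.

```python
def build_display_items(table3, fetch_avatar):
    """table3: list of (third_user_id, ugc) pairs as returned by the UGC server."""
    items = []
    for third_user_id, ugc in table3:
        items.append({
            "user_id": third_user_id,
            "avatar": fetch_avatar(third_user_id),  # e.g. an avatar URL or image bytes
            "ugc": ugc,
        })
    return items  # rendered in the second page: User ID1 with UGC1, User ID4 with UGC4
```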
Fig. 3A is a schematic flowchart of a method for implementing information interaction according to an embodiment of the present disclosure. As shown in fig. 3A, the method includes the steps of:
Step 301: Obtain a first image in response to an operation performed by the first user on an image acquisition function in a first page.
Step 302: Send the first image and the ID of the first user to an image server, so that the image server searches a stored image set for a second image whose image features match those of the first image and sends the ID of the second image and the ID of the first user to a UGC server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in a social relation chain corresponding to the ID of the first user, and obtains, according to the ID of the second image and the ID of the second user, first UGC published for the second image by a third user.
Step 303: Obtain the first UGC and the ID of the third user from the UGC server, generate a second page, and display the first image, the ID of the third user, and the first UGC in the second page.
In an embodiment of the present application, in step 301, the application client may obtain the first image in the following ways.
In a first mode, the application client calls a camera to shoot the first image in response to the operation of the first user on a shooting control in the application client.
In a second mode, the application client calls a camera to shoot the first image in response to the operation of the first user on the shooting control associated with the first dynamic information displayed in the first page.
In a third mode, the application client selects and loads the first image from local storage or from a network in response to an operation performed by the first user on an image selection control in the first page.
In an embodiment of this application, the first user may also use the application client to publish a second UGC as the first user's comment on the first image. The specific operations are as follows: obtain the ID of the second image from the UGC server; generate a third page in response to an operation performed by the first user on a content creation control in the second page; receive the second UGC entered by the first user in the third page; and send the second UGC, the ID of the first user, and the ID of the second image to the UGC server, so that the UGC server allocates a second UGC ID to the second UGC and stores the ID of the second image, the ID of the first user, the ID of the second UGC, and the second UGC in correspondence with one another. The ID of the second image and the ID of the first user may be stored in Table 1 described above. In an embodiment of this application, the ID of the second image, the ID of the first user, the ID of the second UGC, and the second UGC may be stored in a table.
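The following sketch shows one way the UGC server might record such a newly published comment while keeping Table 1 and Table 2 consistent with the lookup in step 205B. Using a UUID for the allocated UGC ID is an assumption; the patent only says that the server allocates an ID.

```python
import uuid

def publish_ugc(table1, table2, ugc_store, image_id, user_id, ugc):
    """Store a new UGC entry and return the allocated UGC ID."""
    ugc_id = f"UGC-{uuid.uuid4().hex}"                 # allocate the second UGC ID
    ugc_store[ugc_id] = ugc                            # store the UGC content
    table2[(image_id, user_id)] = ugc_id               # (image ID, user ID) -> UGC ID
    table1.setdefault(image_id, []).append(user_id)    # the user now appears as a publisher
    return ugc_id
```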
In an embodiment of this application, after the first image, the ID of the third user who published the first UGC, and the first UGC are displayed in the application client, the first image and its related information can also be published to the social network. The specific operations include the following: in response to an operation performed by the first user on a sharing control in the third page, obtain the first image, and send the first image and the ID of the first user to the social application server, so that the social application server generates and publishes second dynamic information according to the ID of the first user and the first image.
In an embodiment of this application, the first page is further configured to present information, associated with the first dynamic information, that indicates a shooting area, so that the first user can determine the shooting area of the shooting device according to that information. When a user sees the first dynamic information and an object identical to the object shown in it, the user can point the lens of the shooting device at the shooting area indicated by the information and shoot the object to obtain the first image. Because the shot object and the shooting area are the same, the resulting first image shares more image features with the image in the first dynamic information, and therefore with the second image stored on the image server. In this way, after the user's application client sends the first image to the image server, the image server can find the second image matching the first image in the stored image set, because the difference between the image-feature value of the first image and that of the second image is smaller than the set threshold.
In an embodiment of this application, the first image and the second image are pictures. First keypoints of the first image and second keypoints of the second image are extracted using the Features from Accelerated Segment Test (FAST) algorithm, and the first keypoints and the second keypoints are processed using the Speeded-Up Robust Features (SURF) algorithm to obtain a first feature vector and a second feature vector. The pyramid layer on which the first image lies is determined according to the grayscale image pyramid of the second image, the Euclidean distance between the first feature vector and the second feature vector is calculated based on the determined pyramid layer, and when the resulting Euclidean distance is smaller than a set threshold, the image features of the second image can be determined to match those of the first image.
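A minimal sketch of this FAST-plus-SURF comparison is shown below, assuming an OpenCV build that includes the non-free xfeatures2d module (for example, an opencv-contrib-python build with SURF enabled). The grayscale-pyramid step is omitted, and aggregating the per-keypoint Euclidean distances into a single mean over the best matches is a simplification of the decision described in the patent.

```python
import cv2
import numpy as np

def images_match(first_gray, second_gray, distance_threshold=0.3):
    fast = cv2.FastFeatureDetector_create()
    surf = cv2.xfeatures2d.SURF_create()

    # FAST keypoints, then SURF descriptors (the "first/second feature vectors").
    kp1 = fast.detect(first_gray, None)
    kp2 = fast.detect(second_gray, None)
    kp1, desc1 = surf.compute(first_gray, kp1)
    kp2, desc2 = surf.compute(second_gray, kp2)
    if desc1 is None or desc2 is None:
        return False

    # Euclidean (L2) distances between descriptors; a small mean distance over the
    # best matches is treated as "smaller than the set threshold".
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)
    best = matches[:50]
    return bool(best) and float(np.mean([m.distance for m in best])) < distance_threshold
```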
In an embodiment of this application, when the application client shoots the object to obtain the first image in response to an operation performed by the first user on a shooting control associated with first dynamic information displayed in the first page, the first user may also publish UGC for the first dynamic information. The specific method includes: receiving a third UGC entered by the first user in the first page; and sending the ID of the first user, the ID of the first dynamic information, and the third UGC to the social application server, so that the social application server generates comment information according to the ID of the first dynamic information, the ID of the first user, and the third UGC and publishes the comment information, for example by sending it, in response to a second dynamic-information update request, to the application client on which the ID of the user who initiated that request is logged in.
In an embodiment of this application, obtaining the first UGC and the ID of the third user from the UGC server and displaying the first image, the ID of the third user, and the first UGC in the second page may be performed in either of the following two ways.
In the first way, avatars corresponding to a plurality of third users are obtained from the social application server according to the IDs of the third users; the avatars of the plurality of third users are displayed in sequence in the second page; in response to an operation performed by the first user on the avatar of one of the third users, a UGC obtaining request containing the ID of the third user whose avatar was operated on is generated; the UGC obtaining request is sent to the UGC server; and the UGC published by that third user is obtained from the UGC server and displayed.
In the second way, the IDs of a plurality of third users and the first UGC they have published are obtained from the UGC server, and each third user's ID is displayed together with the first UGC published by that user.
Fig. 3B is a schematic flowchart of a method for implementing information interaction according to an embodiment of the present disclosure.
Augmented Reality (AR) is a technology that calculates the position and angle of a camera image in real time and adds a corresponding image, video, or three-dimensional (3D) model; its goal is to superimpose the virtual world on the screen onto the real world and allow interaction between them. In this embodiment, the entities performing information interaction include the application client, the middle layer, the image server, the UGC server, and the social application server. The application client may be a social application program, the interaction image in this embodiment may be a picture such as a Marker, the image server may be a Marker server, the middle layer may be an AR camera middle layer, and the UGC server may be an AR camera UGC server.
As shown in fig. 3B, the method includes the following steps.
Step 401, a first page is presented in a social application, wherein the first page comprises an icon of an AR camera.
In an embodiment of the present application, the social application may be a social application installed on a mobile terminal device, for example, a mobile phone. The first user may log into the social application using the first user's ID. The icon of the social application program can be displayed on the screen of the mobile phone, and when the user triggers the icon of the social application program, the first page is displayed on the screen of the mobile phone.
FIG. 4A is a schematic diagram of a page in an embodiment of this application. The first page illustrated in FIG. 4A contains a number of app icons, one of which is the icon of the AR camera.
In step 402, in response to the first user's tap on the icon of the AR camera, the social application invokes a shooting device of the mobile phone, for example a camera built into or attached to the phone, to shoot and obtain a first picture, that is, a first Marker.
In an embodiment of this application, when the user triggers the icon 10A of the AR camera shown in FIG. 4A, the social application may call a camera of the mobile phone to shoot and obtain a first Marker. In another embodiment, when the user triggers an "AR camera" control 10B associated with a piece of dynamic information displayed in the dynamic information display page of the social application, the social application may call the camera of the mobile phone. FIG. 4B is a schematic diagram of a dynamic information display page provided in an embodiment of this application. The dynamic information includes the "AR camera" control 10B; when the first user sees the dynamic information and finds that a bottle identical to the bottle shown in it is in front of him or her, the first user can tap the "AR camera" control 10B, and the social application then calls the camera of the mobile phone to shoot and obtain a first Marker.
FIG. 4C is a schematic diagram of a shooting interface provided in an embodiment of this application. In FIG. 4C, the first user aims the camera of the mobile phone at the bottle, a circular timer is displayed on the interface of the social application, and when the timer expires or the first user taps the photo button, a picture of the bottle, that is, the first Marker, is obtained.
In an embodiment of this application, the first user can also comment on the dynamic information in the dynamic information display page: the social application receives UGC entered by the first user in the dynamic information display page, and, in response to an operation on a publishing control in the first page (the "submit comment" control in FIG. 4B), sends the ID of the first user, the ID of the dynamic information, and the UGC entered by the first user to the social application server, so that the social application server generates comment information according to the ID of the dynamic information, the ID of the first user, and the UGC, and publishes the comment information.
In step 403, the social application sends the first Marker and the ID of the first user to a Marker server through the AR camera middle layer.
In step 404, the Marker server searches the stored Marker set for a second Marker whose image features match those of the first Marker. If no second Marker is found, go to step 405; otherwise, go to step 406.
In this step, the Marker server may search the stored Marker set for a second Marker whose image-feature value differs from that of the first Marker by less than a preset threshold. The specific search method is as follows: extract first keypoints of the first Marker and second keypoints of the second Marker using the FAST algorithm; process the first keypoints and the second keypoints using the SURF algorithm to obtain a first feature vector and a second feature vector; determine the pyramid layer on which the first Marker lies according to the grayscale image pyramid of the second Marker; calculate the Euclidean distance between the first feature vector and the second feature vector based on the determined pyramid layer; and when the resulting Euclidean distance is smaller than the preset threshold, determine the second Marker to be a Marker whose image-feature value differs from that of the first Marker by less than the preset threshold.
In step 405, the Marker server stores the first Marker in the Marker set, assigns an ID to the first Marker, and stores the ID of the first Marker together with the image features/image-feature values of the first Marker. The ID of the first user may also be stored, to identify the user who created the first Marker.
Step 406, the Marker server sends the ID of the second Marker to the AR camera middle layer.
Step 407, the AR camera middle layer sends the ID of the second Marker and the obtained login status of the first user to the AR camera UGC server.
In an embodiment of this application, the login state of the first user may correspond to an operation permission previously allocated to the first user, for example permission to operate on UGC.
In step 408, the AR camera UGC server verifies the user's permission to operate on UGC by using the login state of the first user; when it determines that the user has this permission, step 409 is performed; otherwise, a rejection notice is sent to the social application.
In step 409, the AR camera UGC server sends a request including the ID of the first user to a social application server to request to obtain the ID of the second user in the social relationship chain corresponding to the ID of the first user.
At step 410, the social application server returns the ID of the second user to the AR camera UGC server.
In step 411, the AR camera UGC server obtains the ID of the third user and the UGC published by the third user to the second Marker according to the ID of the second Marker and the ID of the second user by using the method described in the above step 205B.
Step 412, sending the ID of the third user to the social application through the AR camera middle layer.
In this step, the IDs of one or more third users may be found, and the IDs of the third users are sent to the social application program in the form of a list.
In step 413, the social application generates an avatar request including the ID of the third user, and sends the avatar request to the social application server.
And step 414, the social application server obtains the head portrait of each third user according to the ID of the third user, and returns the head portrait to the social application program.
Step 415, the social application generates a second page in which the avatar of the third user is presented in turn.
FIG. 4D is a schematic diagram of an interaction page in an embodiment of this application. As shown in FIG. 4D, the second page displays the avatars of a plurality of third users; when not all of the avatars fit on one screen, the remaining avatars can be revealed by swiping left and right.
In response to the first user clicking on one of the third users' avatars, for example, the avatar of the user with ID1, the social application generates a UGC acquisition request containing the ID1, step 416.
In this step, the clicked avatar is displayed in a highlighted manner at the same time.
In step 417, the social application sends a UGC get request containing the ID1 to the AR camera UGC server.
In step 418, the UGC server searches for UGC1 and UGC1ID published by the third user for the second Marker by using the ID1, and returns the UGC1 and the UGC1ID to the social application.
Step 419, the social application displays the UGC1 in the second page.
In an embodiment of this application, the UGC1 can be text, graphics, audio, video, and the like. Coordinate information of the UGC1 may further be received from the UGC server, and the social application displays the UGC1 in the page shown in FIG. 4D according to that coordinate information. As shown in FIG. 4D, the social application may display the UGC1 in a bubble; the UGC1 shown in FIG. 4D is a video.
Step 420, in response to the comment viewing triggering operation of the first user, the social application program generates a comment viewing request carrying the ID1 and the UGC1ID, and sends the comment viewing request to the AR camera UGC server.
In this step, when the first user performs a swipe up action from the bottom of the page in the page shown in FIG. 4D, the social application generates a comment viewing request carrying the ID1 and the UGC1 ID.
In step 421, the AR camera UGC server looks up, according to the ID1 and the UGC1 ID, the IDs of the fourth users who have commented on the UGC1 and the UGC they have published, and sends the IDs of the fourth users and their published UGC to the social application.
In step 422, the social application correspondingly displays the ID of the fourth user who has commented on the UGC1 and the UGC posted by the fourth user to the UGC1 in the second page.
In step 423, the social application receives a comment on the UGC1 entered by the first user, for example UGC2; in response to the first user's triggering of the comment publishing control on the second page, it generates a first comment publishing request containing the ID of the first user, the UGC1 ID, the ID1, and the UGC2, and sends the first comment publishing request to the AR camera UGC server.
In step 424, the AR camera UGC server assigns an ID to the UGC2 and stores the UGC1 ID, the ID of the first user, the ID1, and the UGC2 ID in correspondence with one another.
In step 425, a third page is generated in response to the first user's triggering of a content creation control on the second page, such as an "I want to create" control.
At step 426, the social application receives UGC3 entered by the first user in the third page.
FIG. 4E is a schematic diagram of an interaction page provided in an embodiment of this application. At the bottom of the page shown in FIG. 4E, a "text" control 10E, an "audio" control 20E, a "video" control 30E, a "picture" control 40E, and a "share" control 50E are displayed in order. The user can tap any of the first four controls to publish a comment in the corresponding form; for example, when the user taps the "video" control, a video file can be obtained locally or the camera of the mobile phone can be called to shoot a video. When the first user taps the "text" control 10E, a comment input interface is displayed in the third page, as shown in FIG. 4F, which is a schematic diagram of an interaction page provided in this embodiment. In the page shown in FIG. 4F, the first user taps any position in the third page, for example the circled area on the bottle, and the social application generates a bubble 10F in response to the tap. The first user can enter the words he or she wants to say in the text input box 20F, and the entered comment is displayed correspondingly in the bubble 10F.
After the first user has entered the text UGC3 in this way, the first user may also publish a comment in another form. For example, when the "picture" control 40E in FIG. 4E is tapped, the interface shown in FIG. 4G is displayed in the third page; FIG. 4G is a schematic diagram of an interaction page provided in an embodiment of this application. In FIG. 4G, a bubble 10G, a "fine matching picture" control 20G, a "cell phone album" control 30G, a "space album" control 40G, and a "cancel" control 50G are displayed. When the first user taps any of the first three controls, for example the "cell phone album" control 30G, the social application obtains a picture from the phone album.
In step 427, the social application generates a second comment posting request including the ID of the first Marker, the ID of the first user and the comment posted by the first user, that is, UGC3, in response to the operation of the first user on the "submit comment" control in the third page, and sends the second comment posting request to the AR camera UGC server.
In this step, the position coordinates of the UGC3 in the first Marker can be further included in the second comment posting request.
In step 428, the AR camera UGC server assigns an ID and UGC3ID to the UGC3, and correspondingly stores the ID of the first Marker, the ID of the first user, the UGC3ID and the UGC 3.
When the position coordinates of the UGC3 in the first Marker are carried in the second comment publishing request, the AR camera UGC server may store, in correspondence with one another, the ID of the first Marker, the ID of the first user, the UGC3 ID, the UGC3, and the position coordinates of the UGC3 in the first Marker.
In step 429, in response to the first user's operation on the "share" control in the page shown in FIG. 4E, the social application obtains the first Marker and processes it.
FIG. 4H is a schematic diagram of an interaction page provided in an embodiment of this application. The interaction page displays a "text" control 10H, an "audio" control 20H, a "video" control 30H, a "picture" control 40H, and a "share" control 50H. When the "share" control 50H is triggered, the social application obtains the first Marker and may further obtain the UGC published by the first user for the first Marker. The social application may also perform grayscale processing on the obtained first Marker to obtain a lower-definition version of the first Marker.
Step 430, the social application program sends the first Marker and the first user's ID to the social application server.
In step 431, the social application server generates dynamic information according to the first Marker and the ID of the first user, generates a shooting control associated with the dynamic information, and publishes the dynamic information.
For example, the social application server generates the dynamic information shown in FIG. 4B, in which the ID of the first user is "Dan" and the shooting control is the "AR camera" control 10B. In the page shown in FIG. 4B, the "AR camera" control 10B is embedded in the first Marker. For example, when the social application server receives a dynamic information update request from a fifth user, it sends the dynamic information to the fifth user's social application for display.
For example, the dynamic information is displayed as the page shown in FIG. 4B, in which, in addition to the first Marker and the "AR camera" control 10B, information indicating a shooting area, such as "bottle bottom", "BLANC", and "1664", may be displayed. When the fifth user sees the bottle in the dynamic information and is in front of the same bottle, the fifth user can trigger the "AR camera" control 10B, determine the shooting position of the phone's camera according to that information, and shoot the bottle to obtain a third Marker. The image features of a third Marker shot in this way are likely to be closer to those of the second Marker stored on the Marker server, making it more likely that the third Marker will match the second Marker.
In an implementation of this application, after the first Marker is obtained in step 402, the social application may further extract contour information of the photographed object in the first Marker, send the contour information to the social application server, and have the dynamic information and the contour information stored in correspondence with each other. After the fifth user triggers the "AR camera" control 10B associated with the dynamic information, the AR camera is invoked and the social application server sends the contour information to it; the AR camera generates a floating layer and displays the contour of the photographed object in the floating layer according to the contour information. The fifth user can align the contour with the same photographed object and shoot to obtain the third Marker.
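As an illustration of how such contour information might be produced and drawn in the floating layer, the sketch below uses OpenCV edge detection and keeps the largest outer contour as the shooting guide. The Canny thresholds, the choice of the largest contour, and the OpenCV 4.x findContours signature are assumptions made for the example; the patent does not specify how the contour information is computed.

```python
import cv2

def extract_contour(marker_bgr):
    """Extract a single outline of the photographed object from the first Marker."""
    gray = cv2.cvtColor(marker_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)   # largest outer contour as "the" outline

def draw_guide_overlay(camera_frame_bgr, contour):
    """Draw the contour on a camera frame, as the floating-layer shooting guide."""
    overlay = camera_frame_bgr.copy()
    cv2.drawContours(overlay, [contour], -1, (0, 255, 0), 2)
    return overlay
```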
Fig. 5 is a schematic structural diagram of an apparatus for implementing information interaction according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus includes:
a receiving module 501, configured to receive a first image obtained by an application client and sent by the application client, and an identifier ID of a first user logged in the application client;
a matching module 502, configured to search the stored image set for a second image matching the image feature of the first image;
and a processing module 503, configured to: when a second image whose image features match those of the first image is found, send the ID of the second image and the ID of the first user to a user generated content (UGC) server, so that the UGC server obtains, from a social application server according to the ID of the first user, the ID of a second user in a social relation chain corresponding to the ID of the first user, obtains, according to the ID of the second image and the ID of the second user, UGC published for the second image by a third user, and sends the UGC and the ID of the third user who published the UGC to the application client for display.
In an embodiment of the present application, the apparatus further comprises: a creating module 504, configured to, when a second image matching the image feature of the first image is not found, save the first image in the image set and assign an ID to the first image.
In an embodiment of this application, the matching module 502 is further configured to search the stored image set for a second image whose image-feature value differs from that of the first image by less than a preset threshold.
Fig. 6 is a schematic structural diagram of an apparatus for implementing information interaction according to an embodiment of the present application, as shown in fig. 6, the apparatus includes:
an obtaining module 601, configured to obtain a first image in response to an operation of a first user on an image obtaining function in a first page;
a sending module 602, configured to send the first image and the identifier ID of the first user to an image server, so that the image server searches for a second image matching with the image feature of the first image in a stored image set, send the ID of the second image and the ID of the first user to a user original content UGC server, so that the UGC server obtains, according to the ID of the first user, an ID of a second user in a social relationship chain corresponding to the ID of the first user from a social application server, and obtains, according to the ID of the second image and the ID of the second user, first UGC published to the second image by a third user;
a display module 603, configured to obtain the first UGC and the ID of the third user from the UGC server, generate a second page, and display the first image, the ID of the third user, and the first UGC in the second page.
In an embodiment of the application, the obtaining module 601 is further configured to invoke a camera to capture the first image in response to an operation of the first user on a capture control in the first page.
In an embodiment of the application, the obtaining module 601 is further configured to invoke a camera to capture the first image in response to a triggering operation of the first user on a capture control associated with the first dynamic information displayed in the first page.
In an embodiment of the present application, the obtaining module 601 is further configured to select and load the first image from a local area or a network in response to an operation of the first user on an image selection control in the first page.
In an embodiment of the present application, the apparatus further comprises: a first UGC creating module 604, configured to obtain the ID of the second image from the UGC server, generate a third page in response to an operation of the first user on a content creating control on the second page, receive a second UGC input by the first user in the third page, send the second UGC, the ID of the first user, and the ID of the second image to the UGC server, so that the UGC server allocates a second UGC ID to the second UGC, and correspondingly store the ID of the second image, the ID of the first user, the ID of the second UGC, and the second UGC.
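The correspondence that the UGC server is said to maintain between the ID of the second image, the ID of the publishing user, the second UGC ID, and the second UGC can be sketched as follows; the in-memory dictionary and the ID format are assumptions made only to show the data relationship.

```python
import itertools

class UGCStore:
    """Illustrative store: UGC records grouped by the image they were
    published to, so they can later be filtered by the IDs of users in
    the requesting user's social relation chain."""

    def __init__(self):
        self._records = {}              # image ID -> list of UGC records
        self._next_id = itertools.count(1)

    def publish(self, image_id, user_id, content):
        """Allocate a UGC ID and store the record against the image ID."""
        ugc_id = "ugc-{}".format(next(self._next_id))
        self._records.setdefault(image_id, []).append(
            {"ugc_id": ugc_id, "user_id": user_id, "content": content})
        return ugc_id

    def ugc_for_image(self, image_id, allowed_user_ids):
        """Return the UGC published to this image by users in the chain."""
        return [r for r in self._records.get(image_id, [])
                if r["user_id"] in allowed_user_ids]
```

A call such as publish(second_image_id, first_user_id, second_ugc) would mirror the creation path described above, and ugc_for_image(second_image_id, chain_ids) the retrieval path used by the processing module.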
In an embodiment of the present application, the apparatus further comprises:
and a dynamic information publishing module 605, configured to, in response to an operation of the first user on the sharing control in the third page, obtain the first image, and send the first image and the ID of the first user to the social application server, so that the social application server generates and publishes second dynamic information according to the ID of the first user and the first image.
In an embodiment of the present application, the apparatus further comprises:
a second UGC creating module 606, configured to receive a third UGC of the first dynamic information, which is input by the first user in the first page, and send, in response to an operation on a publishing control in the first page, the ID of the first user, the ID of the first dynamic information, and the third UGC to the social application server, so that the social application server generates comment information according to the ID of the first dynamic information, the ID of the first user, and the third UGC and publishes the comment information.
In an embodiment of the application, the display module 603 is further configured to obtain the avatars of the third users from the social application server according to the IDs of the third users, sequentially show the avatars of the third users in the second page, generate, in response to an operation of the first user on the avatar of one of the third users, an UGC obtaining request including the ID of the third user whose avatar is operated, send the UGC obtaining request to the UGC server, and obtain and display, from the UGC server, the UGC issued by the third user whose avatar is operated.
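The avatar-driven retrieval described for the display module 603 can be sketched as follows; the request dictionary and the fetch helpers are assumptions introduced only to illustrate the sequence of calls.

```python
def on_avatar_tapped(third_user_id, ugc_server, social_server, page):
    """Illustrative handler: when the first user taps one third user's
    avatar on the second page, build a UGC acquisition request carrying
    that user's ID, send it to the UGC server, and display the result."""
    request = {"type": "ugc_acquisition", "third_user_id": third_user_id}
    ugc_items = ugc_server.fetch_ugc(request)            # assumed helper
    avatar = social_server.fetch_avatar(third_user_id)   # assumed helper
    page.show(avatar=avatar, ugc=ugc_items)
```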
Fig. 7 is a schematic structural diagram of an apparatus for implementing information interaction in the embodiment of the present application. As shown in fig. 7, the apparatus includes: a processor 701, a non-volatile computer-readable memory 702, a display unit 703, a network communication interface 704. These components communicate over a bus 705.
In this embodiment, memory 702 has stored therein a number of program modules, including an operating system 706, a network communication module 707, and application programs 708.
The processor 701 can read various modules (not shown in the figure) included in the application program in the memory 702 to execute various functional applications of information interaction and data processing. The processor 701 in this embodiment may be one or multiple processors, and may be a CPU, a processing unit/module, an ASIC, a logic module, a programmable gate array, or the like.
The operating system 706 may be, for example, a Windows operating system, an Android operating system, or an Apple iPhone OS operating system.
The application programs 708 may include an information interaction module 709. The information interaction module 709 may include a set of computer-executable instructions 709-1, and corresponding metadata and heuristics 709-2, formed by the modules of the apparatus shown in FIG. 5. These sets of computer-executable instructions may be executed by the processor 701 to perform the functions of the method shown in FIG. 1B or of the apparatus for implementing information interaction shown in FIG. 5.
In this embodiment, the network communication interface 704 cooperates with the network communication module 707 to complete the transceiving of various network signals of the information interaction apparatus.
The display unit 703 has a display panel for inputting and displaying related information.
Fig. 8 is a schematic structural diagram of an apparatus for implementing information interaction in the embodiment of the present application. As shown in fig. 8, the apparatus includes: a processor 801, a non-volatile computer-readable memory 802, a display unit 803, a network communication interface 804. These components communicate over a bus 805.
In this embodiment, the memory 802 stores a plurality of program modules, including an operating system 806, a network communication module 807, and an application program 808.
The processor 801 may read various modules (not shown in the figure) included in the application program in the memory 802 to perform various functional applications of information interaction and data processing. The processor 801 in this embodiment may be one or more, and may be a CPU, a processing unit/module, an ASIC, a logic module, a programmable gate array, or the like.
The operating system 806 may be, for example, a Windows operating system, an Android operating system, or an Apple iPhone OS operating system.
The application programs 808 may include an information interaction module 809. The information interaction module 809 may include a set of computer-executable instructions 809-1, and corresponding metadata and heuristics 809-2, formed by the modules of the apparatus shown in fig. 6. These sets of computer-executable instructions may be executed by the processor 801 to perform the functions of the method shown in FIG. 3A or of the apparatus for implementing information interaction shown in fig. 6.
In this embodiment, the network communication interface 804 cooperates with the network communication module 807 to complete the transceiving of various network signals of the information interaction apparatus.
The display unit 803 has a display panel for inputting and displaying related information.
An embodiment of the present application provides a system for implementing information interaction, including:
the application client is used for responding to the operation of a first user on the image acquisition function in the first page and acquiring a first image;
the image server is used for acquiring the first image and the identifier ID of the first user logged in the application client from the application client, searching a second image matched with the image characteristics of the first image in the stored image set, and acquiring the ID of the second image;
the user original content UGC server is used for acquiring the ID of the second image and the ID of the first user from the image server, sending a request carrying the ID of the first user to a social application server to request the ID of the second user in a social relation chain corresponding to the ID of the first user, acquiring first UGC issued to the second image by a third user according to the ID of the second image and the ID of the second user, and sending the first UGC and the ID of the third user to the application client;
the social application server is used for determining the ID of a second user in the social relation chain corresponding to the ID of the first user by using the ID of the first user and sending the ID of the second user to the UGC server;
the application client is further used for generating a second page and displaying the first image, the ID of the third user, and the first UGC in the second page.
In an embodiment of the application, the application client is further configured to obtain an ID of the second image from the UGC server, generate a third page in response to an operation of the first user on a content creation control on the second page, receive a second UGC input by the first user in the third page, and send the second UGC, the ID of the first user, and the ID of the second image to the UGC server;
the UGC server is further configured to assign a second UGC ID to the second UGC, and store the ID of the second image, the ID of the first user, the second UGC ID, and the second UGC correspondingly.
In an embodiment of the application, the application client is further configured to obtain the first image in response to an operation of the first user on a sharing control in the third page, and send the first image and the ID of the first user to the social application server;
and the social application server is further used for generating and publishing second dynamic information according to the ID of the first user and the first image.
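To make the division of labour among the four components above easier to follow, the end-to-end exchange can be summarised in the following sketch, in which each server is reduced to a plain object with one or two methods; the method names, data shapes, and transport are illustrative assumptions rather than required interfaces.

```python
def handle_capture(first_image, first_user_id,
                   image_server, ugc_server, social_server):
    """Illustrative end-to-end flow: the client sends the captured first
    image, the image server matches it against the stored image set, the
    UGC server filters UGC by the user's social relation chain, and the
    client receives what it needs to render the second page."""
    # Image server: find a second image matching the first image's features.
    second_image_id = image_server.match(first_image)
    if second_image_id is None:
        # No match: save the first image in the set and assign it an ID.
        image_server.save_new(first_image)
        return {"image": first_image, "ugc": []}

    # Social application server: IDs of users in the relation chain.
    chain_ids = social_server.relation_chain(first_user_id)

    # UGC server: UGC published to the second image by users in the chain.
    records = ugc_server.ugc_for_image(second_image_id, chain_ids)

    # Second page: the first image plus each publisher's ID and UGC.
    return {"image": first_image,
            "ugc": [(r["user_id"], r["content"]) for r in records]}
```

The publisher IDs returned here are the ones the application client would use to fetch the corresponding avatars from the social application server for display on the second page.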
In addition, the functional modules in the embodiments of the present application may be integrated into one processing unit, each module may exist alone physically, or two or more modules may be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit. The functional modules of the embodiments may be located in one terminal or network node, or may be distributed over a plurality of terminals or network nodes.
In addition, each of the embodiments of the present application can be realized by a data processing program executed by, for example, a computer. Clearly, such a data processing program also constitutes the present application. Further, a data processing program is generally stored in a storage medium and is executed either by reading the program directly out of the storage medium or by installing or copying the program into a storage device (such as a hard disk and/or a memory) of a data processing device. Such a storage medium therefore also constitutes the present application. The storage medium may use any type of recording means, such as a paper storage medium (e.g., paper tape), a magnetic storage medium (e.g., a flexible disk, a hard disk, or a flash memory), an optical storage medium (e.g., a CD-ROM), or a magneto-optical storage medium (e.g., an MO).
The present application further provides a computer-readable storage medium having stored thereon computer-readable instructions for execution by at least one processor for performing any one of the embodiments of the methods described herein.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (35)

1. A method for realizing information interaction is characterized by comprising the following steps:
receiving a first image acquired by an application client and an identifier ID of a first user logged in the application client, wherein the first image is sent by the application client;
searching a second image matched with the image characteristics of the first image in the stored image set;
when a second image matched with the image characteristics of the first image is found, sending the ID of the second image and the ID of the first user to a user original content UGC server, so that the UGC server obtains the ID of a second user in a social relation chain corresponding to the ID of the first user from a social application server according to the ID of the first user, obtains UGC published by a third user on the second image according to the ID of the second image and the ID of the second user, and sends the UGC and the ID of the third user publishing the UGC to the application client for display, wherein the third user is the second user publishing UGC to the second image in the social relation chain corresponding to the first user; the application client acquires the ID of the third user from the UGC server, acquires the corresponding head portrait of the third user from the social application server according to the ID of the third user, generates a second page, displays the first image in the second page, sequentially displays the head portrait of the third user, and responds to the operation of the head portrait of one of the third users to generate a UGC acquisition request containing the ID of the third user with the operated head portrait; sending the UGC acquisition request to the UGC server; and obtaining and displaying UGC issued by a third user with the operated head portrait from the UGC server.
2. The method of claim 1, wherein when a second image matching the image feature of the first image is not found, further comprising:
saving the first image in the image set and assigning an ID to the first image.
3. The method of claim 1, wherein finding a second image in the stored set of images that matches the image feature of the first image comprises:
searching the stored image set for the second image with the difference value between the value of the image feature and the value of the image feature of the first image being smaller than a preset threshold value.
4. A method for realizing information interaction is characterized by comprising the following steps:
responding to the operation of a first user on an image acquisition function in a first page, and acquiring a first image;
sending the first image and the identifier ID of the first user to an image server, so that the image server searches a second image matched with the image feature of the first image in a stored image set, sending the ID of the second image and the ID of the first user to a user original content UGC server, so that the UGC server obtains the ID of a second user in a social relation chain corresponding to the ID of the first user from a social application server according to the ID of the first user, and obtains a first UGC issued to the second image by a third user according to the ID of the second image and the ID of the second user, wherein the third user is the second user who issued UGC to the second image in the social relation chain corresponding to the first user;
acquiring the IDs of the third users from the UGC server, generating a second page, and acquiring the head portraits corresponding to the third users from the social application server according to the IDs of the third users; displaying the first image in the second page, and sequentially displaying the head portraits of the plurality of third users in the second page; responding to the operation of the first user on the head portrait of one of the third users, and generating a UGC acquisition request containing the ID of the third user with the operated head portrait; sending the UGC acquisition request to the UGC server; and obtaining and displaying UGC issued by a third user with the operated head portrait from the UGC server.
5. The method of claim 4, wherein the acquiring the first image comprises:
and responding to the operation of the first user on the shooting control in the first page, and calling a camera device to shoot the first image.
6. The method of claim 4, wherein the acquiring the first image comprises:
and calling a camera device to shoot the first image in response to the operation of the first user on the shooting control associated with the first dynamic information displayed in the first page.
7. The method of claim 4, wherein the acquiring the first image comprises:
and responding to the operation of the first user on an image selection control in the first page, and selecting and loading the first image from a local place or a network.
8. The method of claim 4, further comprising:
acquiring an ID of the second image from the UGC server;
responding to the operation of the first user on the content creation control on the second page, and generating a third page;
receiving second UGC input by the first user in the third page;
and sending the second UGC, the ID of the first user and the ID of the second image to the UGC server, so that the UGC server allocates a second UGC ID for the second UGC and correspondingly stores the ID of the second image, the ID of the first user, the ID of the second UGC and the second UGC.
9. The method of claim 8, further comprising:
responding to the operation of the first user on the sharing control in the third page, and acquiring the first image;
and sending the first image and the ID of the first user to the social application server so that the social application server generates and publishes second dynamic information according to the ID of the first user and the first image.
10. The method of claim 6, wherein the first page is further used for showing information for indicating a shooting area associated with the first dynamic information, so that the first user can determine the shooting area of a shooting device according to the information for indicating the shooting area.
11. The method of claim 6, further comprising:
receiving third UGC (user generated content) input by the first user in the first page and made on the first dynamic information;
and responding to the operation of a publishing control in the first page, and sending the ID of the first user, the ID of the first dynamic information and the third UGC to the social application server, so that the social application server generates comment information according to the ID of the first dynamic information, the ID of the first user and the third UGC and publishes the comment information.
12. An apparatus for implementing information interaction, comprising:
the device comprises a receiving module, a judging module and a display module, wherein the receiving module is used for receiving a first image which is sent by an application client and acquired by the application client and an identifier ID of a first user logged in the application client;
the matching module is used for searching a second image matched with the image characteristics of the first image in the stored image set;
the processing module is used for sending the ID of the second image and the ID of the first user to a user original content UGC server when the second image matched with the image feature of the first image is found, so that the UGC server obtains the ID of the second user in a social relation chain corresponding to the ID of the first user from a social application server according to the ID of the first user, obtains UGC published by a third user on the second image according to the ID of the second image and the ID of the second user, and sends the UGC and the ID of the third user publishing the UGC to the application client side for displaying, wherein the third user is the second user publishing the UGC on the second image in the social relation chain corresponding to the first user; the application client acquires the ID of the third user from the UGC server, acquires the corresponding avatar of the third user from the social application server according to the ID of the third user, generates a second page, displays the first image in the second page, sequentially displays the avatar of the third user in the second page, and responds to the operation of the avatar of one of the third users to generate a UGC acquisition request containing the ID of the third user with the operated avatar; sending the UGC acquisition request to the UGC server; and obtaining and displaying UGC issued by a third user with the operated head portrait from the UGC server.
13. The apparatus of claim 12, further comprising: and the creating module is used for saving the first image in the image set and distributing an ID (identity) to the first image when a second image matched with the image characteristics of the first image is not found.
14. The apparatus of claim 12, wherein the matching module is further configured to find the second image in the stored set of images for which a difference between a value of an image feature and a value of an image feature of the first image is less than a preset threshold.
15. An apparatus for implementing information interaction, comprising:
the acquisition module is used for responding to the operation of a first user on the image acquisition function in the first page and acquiring a first image;
a sending module, configured to send the first image and the identifier ID of the first user to an image server, so that the image server searches for a second image matching with an image feature of the first image in a stored image set, send the ID of the second image and the ID of the first user to a user original content UGC server, so that the UGC server obtains, according to the ID of the first user, an ID of a second user in a social relationship chain corresponding to the ID of the first user from a social application server, and obtains, according to the ID of the second image and the ID of the second user, a first UGC issued to the second image by a third user, where the third user is the second user who issued UGC to the second image in the social relationship chain corresponding to the first user;
the display module is used for acquiring the ID of the third user from the UGC server, generating a second page, and acquiring the head portrait corresponding to the third user from the social application server according to the IDs of the third users; displaying the first image in the second page, and sequentially displaying the head portraits of the plurality of third users in the second page; responding to the operation of the first user on the head portrait of one of the third users, and generating a UGC acquisition request containing the ID of the third user with the operated head portrait; sending the UGC acquisition request to the UGC server; and obtaining and displaying UGC issued by a third user with the operated head portrait from the UGC server.
16. The apparatus of claim 15, wherein the obtaining module is further configured to invoke a camera to capture the first image in response to the first user operating a capture control in the first page.
17. The apparatus according to claim 15, wherein the obtaining module is further configured to invoke a camera to capture the first image in response to a triggering operation of a capture control associated with the first dynamic information displayed in the first page by the first user.
18. The apparatus of claim 15, wherein the obtaining module is further configured to select and load the first image from local or network in response to the first user operating an image selection control in the first page.
19. The apparatus of claim 15, further comprising: the first UGC creating module is used for acquiring the ID of the second image from the UGC server, responding to the operation of the first user on the content creating control on the second page, generating a third page, receiving the second UGC input by the first user in the third page, sending the second UGC, the ID of the first user and the ID of the second image to the UGC server, so that the UGC server allocates a second UGC ID for the second UGC, and correspondingly storing the ID of the second image, the ID of the first user, the ID of the second UGC and the second UGC.
20. The apparatus of claim 19, further comprising:
and the dynamic information publishing module is used for responding to the operation of the first user on the sharing control in the third page, acquiring the first image, and sending the first image and the ID of the first user to the social application server so that the social application server can generate and publish second dynamic information according to the ID of the first user and the first image.
21. The apparatus of claim 17, further comprising:
and the second UGC creating module is used for receiving third UGC of the first dynamic information, which is input by the first user in the first page, and responding to the operation of a publishing control in the first page, and sending the ID of the first user, the ID of the first dynamic information and the third UGC to the social application server so that the social application server generates comment information according to the ID of the first dynamic information, the ID of the first user and the third UGC and publishes the comment information.
22. An apparatus for implementing information interaction, comprising: a processor and a memory;
the processor executes machine-readable storage execution stored in the memory to:
responding to the operation of a first user on an image acquisition function in a first page, and acquiring a first image;
sending the first image and the identifier ID of the first user to an image server, so that the image server searches a second image matched with the image feature of the first image in a stored image set, sending the ID of the second image and the ID of the first user to a user original content UGC server, so that the UGC server obtains the ID of a second user in a social relation chain corresponding to the ID of the first user from a social application server according to the ID of the first user, and obtains a first UGC issued to the second image by a third user according to the ID of the second image and the ID of the second user, wherein the third user is the second user who issued UGC to the second image in the social relation chain corresponding to the first user;
acquiring the IDs of the third users from the UGC server, generating a second page, and acquiring the head portraits corresponding to the third users from the social application server according to the IDs of the third users; displaying the first image in the second page, and sequentially displaying the head portraits of the plurality of third users in the second page; responding to the operation of the first user on the head portrait of one of the third users, and generating a UGC acquisition request containing the ID of the third user with the operated head portrait; sending the UGC acquisition request to the UGC server; and obtaining and displaying UGC issued by a third user with the operated head portrait from the UGC server.
23. The apparatus of claim 22, wherein the processor further executes machine-readable instructions stored in the memory to:
and acquiring the ID of the second image from the UGC server, responding to the operation of the first user on a content creation control on the second page, generating a third page, receiving a second UGC input by the first user in the third page, and sending the second UGC, the ID of the first user and the ID of the second image to the UGC server, so that the UGC server allocates a second UGC ID for the second UGC and correspondingly stores the ID of the second image, the ID of the first user, the ID of the second UGC and the second UGC.
24. A method for realizing information interaction, applied to an image server, the image server comprising: a processor and a memory, the processor executing machine-readable instructions stored in the memory to:
receiving a first image acquired by an application client and an identifier ID of a first user logged in the application client, wherein the first image is sent by the application client;
searching a second image matched with the image characteristics of the first image in the stored image set;
when a second image matched with the image characteristics of the first image is found, sending the ID of the second image and the ID of the first user to a user original content UGC server, so that the UGC server obtains the ID of a second user in a social relation chain corresponding to the ID of the first user from a social application server according to the ID of the first user, obtains UGC published by a third user on the second image according to the ID of the second image and the ID of the second user, and sends the UGC and the ID of the third user publishing the UGC to the application client for display, wherein the third user is the second user publishing UGC to the second image in the social relation chain corresponding to the first user; the application client acquires the ID of the third user from the UGC server, acquires the corresponding avatar of the third user from the social application server according to the ID of the third user, generates a second page, displays the first image in the second page, sequentially displays the avatar of the third user in the second page, and responds to the operation of the avatar of one of the third users to generate a UGC acquisition request containing the ID of the third user with the operated avatar; sending the UGC acquisition request to the UGC server; and obtaining and displaying UGC issued by a third user with the operated head portrait from the UGC server.
25. The method of claim 24, wherein the processor further executes machine-readable instructions stored in the memory to:
and when a second image matched with the image characteristics of the first image is not found, saving the first image in the image set and distributing an ID (identity) to the first image.
26. The method of claim 24, wherein the processor further executes machine-readable instructions stored in the memory to:
searching the stored image set for the second image with the difference value between the value of the image feature and the value of the image feature of the first image being smaller than a preset threshold value.
27. A method for realizing information interaction, applied to an application client, the application client comprising: a processor and a memory, the processor executing machine-readable instructions stored in the memory to:
responding to the operation of a first user on an image acquisition function in a first page, and acquiring a first image;
sending the first image and the identifier ID of the first user to an image server, so that the image server searches a second image matched with the image feature of the first image in a stored image set, sending the ID of the second image and the ID of the first user to a user original content UGC server, so that the UGC server obtains the ID of a second user in a social relation chain corresponding to the ID of the first user from a social application server according to the ID of the first user, and obtains a first UGC issued to the second image by a third user according to the ID of the second image and the ID of the second user, wherein the third user is the second user who issued UGC to the second image in the social relation chain corresponding to the first user;
acquiring the IDs of the third users from the UGC server, generating a second page, and acquiring the head portraits corresponding to the third users from the social application server according to the IDs of the third users; displaying the first image in the second page, and sequentially displaying the head portraits of the plurality of third users in the second page; responding to the operation of the first user on the head portrait of one of the third users, and generating a UGC acquisition request containing the ID of the third user with the operated head portrait; sending the UGC acquisition request to the UGC server; and obtaining and displaying UGC issued by a third user with the operated head portrait from the UGC server.
28. The method of claim 27, wherein the processor further executes machine-readable instructions stored in the memory to:
and responding to the operation of the first user on the shooting control in the first page, and calling a camera device to shoot the first image.
29. The method of claim 27, wherein the processor further executes machine-readable instructions stored in the memory to:
and calling a camera device to shoot the first image in response to the operation of the first user on the shooting control associated with the first dynamic information displayed in the first page.
30. The method of claim 27, wherein the processor further executes machine-readable instructions stored in the memory to:
and responding to the operation of the first user on an image selection control in the first page, and selecting and loading the first image from a local place or a network.
31. The method of claim 27, wherein the processor further executes machine-readable instructions stored in the memory to:
acquiring an ID of the second image from the UGC server;
responding to the operation of the first user on the content creation control on the second page, and generating a third page;
receiving second UGC input by the first user in the third page;
and sending the second UGC, the ID of the first user and the ID of the second image to the UGC server, so that the UGC server allocates a second UGC ID for the second UGC and correspondingly stores the ID of the second image, the ID of the first user, the ID of the second UGC and the second UGC.
32. The method of claim 31, wherein the processor further executes machine-readable instructions stored in the memory to:
responding to the operation of the first user on the sharing control in the third page, and acquiring the first image;
and sending the first image and the ID of the first user to the social application server so that the social application server generates and publishes second dynamic information according to the ID of the first user and the first image.
33. The method of claim 29, wherein the first page is further used for presenting information indicating a shooting area associated with the first dynamic information, so that the first user determines a shooting area of a shooting device according to the information indicating a shooting area.
34. The method of claim 29, wherein the processor further executes machine-readable instructions stored in the memory to:
receiving third UGC (user generated content) input by the first user in the first page and made on the first dynamic information;
and responding to the operation of a publishing control in the first page, and sending the ID of the first user, the ID of the first dynamic information and the third UGC to the social application server, so that the social application server generates comment information according to the ID of the first dynamic information, the ID of the first user and the third UGC and publishes the comment information.
35. A computer-readable storage medium having computer-readable instructions stored thereon for execution by at least one processor to perform the method of any one of claims 1-11, 24-34.
CN201780054913.3A 2017-11-22 2017-11-22 Method and device for realizing information interaction Active CN110089076B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/112266 WO2019100234A1 (en) 2017-11-22 2017-11-22 Method and apparatus for implementing information interaction

Publications (2)

Publication Number Publication Date
CN110089076A CN110089076A (en) 2019-08-02
CN110089076B true CN110089076B (en) 2021-04-09

Family

ID=66631325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780054913.3A Active CN110089076B (en) 2017-11-22 2017-11-22 Method and device for realizing information interaction

Country Status (2)

Country Link
CN (1) CN110089076B (en)
WO (1) WO2019100234A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111651436A (en) * 2020-04-14 2020-09-11 海南车智易通信息技术有限公司 Processing method and system for user generated content and computing equipment
CN113835582B (en) * 2021-09-27 2024-03-15 青岛海信移动通信技术有限公司 Terminal equipment, information display method and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101098241A (en) * 2006-06-26 2008-01-02 腾讯科技(深圳)有限公司 Method and system for implementing virtual image
CN102571633A (en) * 2012-01-09 2012-07-11 华为技术有限公司 Method for demonstrating user state, demonstration terminal and server
CN105323252A (en) * 2015-11-16 2016-02-10 上海璟世数字科技有限公司 Method and system for realizing interaction based on augmented reality technology and terminal
CN106169975A (en) * 2016-08-29 2016-11-30 财付通支付科技有限公司 Business transmission method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7219148B2 (en) * 2003-03-03 2007-05-15 Microsoft Corporation Feedback loop for spam prevention
CN103870485B (en) * 2012-12-13 2017-04-26 华为终端有限公司 Method and device for achieving augmented reality application
US10360642B2 (en) * 2014-02-18 2019-07-23 Google Llc Global comments for a media item
CN104917667A (en) * 2015-05-26 2015-09-16 腾讯科技(深圳)有限公司 Multimedia-information-based interaction method, apparatus and system
CN105468142A (en) * 2015-11-16 2016-04-06 上海璟世数字科技有限公司 Interaction method and system based on augmented reality technique, and terminal
CN105912234B (en) * 2016-04-06 2019-01-15 腾讯科技(深圳)有限公司 The exchange method and device of virtual scene


Also Published As

Publication number Publication date
CN110089076A (en) 2019-08-02
WO2019100234A1 (en) 2019-05-31

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant