CN109510752B - Information display method and device

Information display method and device

Info

Publication number
CN109510752B
Authority
CN
China
Prior art keywords
picture
information
subject
server
framing
Prior art date
Legal status
Active
Application number
CN201710831862.2A
Other languages
Chinese (zh)
Other versions
CN109510752A (en)
Inventor
陈宇
方佳玮
王强宇
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201710831862.2A
Priority to TW107119259A
Priority to PCT/CN2018/103957 (published as WO2019052374A1)
Publication of CN109510752A
Application granted
Publication of CN109510752B


Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
            • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
            • H04L 51/07 Messaging characterised by the inclusion of specific contents
              • H04L 51/10 Multimedia information
            • H04L 51/21 Monitoring or handling of messages
              • H04L 51/222 Monitoring or handling of messages using geographical location information, e.g. messages transmitted or received in proximity of a certain spot or area
            • H04L 51/52 User-to-user messaging for supporting social networking services
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00 Pattern recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

One or more embodiments of the present specification provide an information display method and apparatus. The method may include: a first device acquires a captured picture or a framing picture; the first device binds information to a subject contained in the captured picture or the framing picture; and when a captured picture or framing picture acquired by a second device contains the subject and the position of the second device matches the geographic position corresponding to the subject, the second device displays the information bound to the subject.

Description

Information display method and device
Technical Field
One or more embodiments of the present disclosure relate to the field of information display technologies, and in particular, to an information display method and apparatus.
Background
Social interaction between users can be realized ever more conveniently through applications (APPs) based on social networking services (SNS).
In the related art, a user's social behavior can essentially be understood as information interaction between users; for example, each user may run a social APP on his or her user device, so that information interaction between users is achieved through that social APP.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide an information display method and apparatus.
To achieve the above object, one or more embodiments of the present disclosure provide the following technical solutions:
according to a first aspect of one or more embodiments of the present specification, there is provided an information presentation method, including:
a first device acquires a captured picture or a framing picture;
the first device binds information to a subject contained in the captured picture or the framing picture;
and when a captured picture or framing picture acquired by a second device contains the subject, and the position of the second device matches the geographic position corresponding to the subject, the second device displays the information bound to the subject.
According to a second aspect of one or more embodiments of the present specification, there is provided an information presentation method including:
a first device acquires a captured picture or a framing picture;
the first device binds information to a subject contained in the captured picture or the framing picture;
the first device uploads a first binding relationship between the subject and the information to a server, so that the server determines the information bound to the subject according to the first binding relationship and provides it to a user device that subsequently captures or frames the subject at the geographic position corresponding to the subject.
According to a third aspect of one or more embodiments of the present specification, there is provided an information presentation method including:
a second device acquires a captured picture or a framing picture;
and when the captured picture or the framing picture contains a subject, a user device that previously captured or framed the subject has bound information to the subject, and the geographic position of the second device when it acquired the captured picture or the framing picture matches the geographic position corresponding to the subject, the second device displays the information.
According to a fourth aspect of one or more embodiments of the present specification, there is provided an information presentation method including:
a server acquires a first binding relationship, uploaded by a first device, between a subject and information, where the subject is located in a first captured picture or first framing picture acquired by the first device;
the server acquires a second captured picture or second framing picture uploaded by any device;
and when the second captured picture or the second framing picture contains the subject, and the geographic position at which that device acquired the second captured picture or second framing picture matches the geographic position corresponding to the subject, the server determines the information bound to the subject according to the first binding relationship and provides it to that device, so that the device displays the information bound to the subject.
According to a fifth aspect of one or more embodiments of the present specification, there is provided an information presentation apparatus comprising:
a first acquisition unit, which causes a first device to acquire a captured picture or a framing picture;
an information binding unit, which causes the first device to bind information to a subject contained in the captured picture or the framing picture;
and a first uploading unit, which causes the first device to upload a first binding relationship between the subject and the information to a server, so that the server determines the information bound to the subject according to the first binding relationship and provides it to a user device that subsequently captures or frames the subject at the geographic position corresponding to the subject.
According to a sixth aspect of one or more embodiments of the present specification, there is provided an information presentation apparatus comprising:
a second acquisition unit, which causes a second device to acquire a captured picture or a framing picture;
and an information display unit, which causes the second device to display information when the captured picture or the framing picture contains a subject, a user device that previously captured or framed the subject has bound the information to the subject, and the geographic position of the second device when it acquired the captured picture or the framing picture matches the geographic position corresponding to the subject.
According to a seventh aspect of one or more embodiments of the present specification, there is provided an information presentation apparatus, comprising:
a first relationship acquisition unit, which causes a server to acquire a first binding relationship, uploaded by a first device, between a subject and information, where the subject is located in a first captured picture or first framing picture acquired by the first device;
a picture acquisition unit, which causes the server to acquire a second captured picture or second framing picture uploaded by any device;
and an information determination unit, which, when the second captured picture or the second framing picture contains the subject and the geographic position at which that device acquired the picture matches the geographic position corresponding to the subject, causes the server to determine the information bound to the subject according to the first binding relationship and provide it to that device, so that the device displays the information bound to the subject.
Drawings
Fig. 1 is a schematic diagram of an architecture of an information presentation system according to an exemplary embodiment.
Fig. 2 is a flowchart of an information presentation method according to an exemplary embodiment.
Fig. 3 is a flowchart of an information presentation method based on a first device side according to an exemplary embodiment.
Fig. 4 is a flowchart of an information presentation method based on a second device side according to an exemplary embodiment.
Fig. 5 is a flowchart of a server-side based information presentation method according to an exemplary embodiment.
Fig. 6 is a schematic diagram of framing implemented by the client 1 according to an exemplary embodiment.
Fig. 7 is a diagram illustrating a subject in a viewfinder frame according to an exemplary embodiment.
Fig. 8 is a schematic diagram of triggering an information binding operation for a photographic subject according to an exemplary embodiment.
Fig. 9 is a schematic diagram of inputting information bound to a photographic subject according to an exemplary embodiment.
Fig. 10 is a diagram illustrating information bound to a photographic subject according to an exemplary embodiment.
Fig. 11 is a diagram illustrating information that all users bind to a photographic subject according to an exemplary embodiment.
Fig. 12 is a schematic diagram for distinctively presenting information bound to a photographic subject according to an exemplary embodiment.
Fig. 13 is a schematic diagram illustrating user information for performing a binding operation according to an exemplary embodiment.
Fig. 14 is a schematic diagram illustrating social operations implemented based on displayed user information according to an exemplary embodiment.
Fig. 15 is a schematic diagram of framing implemented by the client 2 according to an exemplary embodiment.
Fig. 16 is a diagram illustrating information bound to a photographic subject according to an exemplary embodiment.
Fig. 17 is a diagram illustrating another example of information bound to a photographic subject according to an exemplary embodiment.
Fig. 18 is a schematic diagram illustrating yet another example of information bound to a photographic subject according to an exemplary embodiment.
Fig. 19 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Fig. 20 is a block diagram of an information presentation apparatus based on a first device side according to an exemplary embodiment.
Fig. 21 is a block diagram of an information presentation apparatus based on a second device side according to an exemplary embodiment.
Fig. 22 is a block diagram of a server-side based information presentation apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Fig. 1 is a schematic diagram of the architecture of an information presentation system according to an exemplary embodiment. As shown in fig. 1, the system may include a server 11, a network 12, and a number of electronic devices, such as a user device 13 (e.g., a cell phone 131, smart glasses 132, etc.) and a user device 14 (e.g., a cell phone 141, smart glasses 142, etc.).
The server 11 may be a physical server with an independent host, a virtual server carried by a host cluster, or a cloud server. In operation, the server 11 may run the server-side program of an application to act as that application's server and implement its service functions; for example, the server side running on the server 11 cooperates with the clients running on the user devices 13-14 to implement the information presentation scheme of this specification.
Besides the cell phones and smart glasses described above, the user devices 13-14 may also be electronic devices of other types, such as tablet devices, notebook computers, Personal Digital Assistants (PDAs), and wearable devices (for example smart bands and smart watches, in addition to smart glasses), which is not limited in one or more embodiments of this specification. In operation, an electronic device may run the client-side program of an application to act as that application's client and implement its service functions; for example, the electronic device may implement the information presentation scheme of this specification through the running client alone, or the client may cooperate with the server side running on the server 11. The client-side application program may be pre-installed on the electronic device so that the client can be started and run there; of course, when an online "client" based on HTML5 or similar technology is used, the client can be obtained and run without installing the corresponding application on the electronic device.
The network 12 over which the user devices 13-14 interact with the server 11 may include various types of wired and wireless networks. In one embodiment, the network 12 may include the Public Switched Telephone Network (PSTN) and the Internet. Electronic devices such as the user devices 13-14 can also communicate with each other through the network 12.
The applications running on the server 11 and the user devices 13-14 may be any application program, such as an Instant Messaging (IM) application, which is not limited in one or more embodiments of this specification.
Fig. 2 is a flowchart of an information presentation method according to an exemplary embodiment. As shown in fig. 2, the method may include:
in step 202, the first device acquires a shot picture or a viewfinder picture.
In an embodiment, the captured image may include a frame image in a photo or video captured by the electronic device (e.g., the first device or other electronic device) via a camera. In an embodiment, the framing picture may include a picture that a camera of the electronic device appears within a framing range of the camera when a photographing operation has not been performed.
Step 204, the first device is the subject binding information contained in the shooting picture or the framing picture.
In an embodiment, a subject included in a shooting picture or a framing picture can be identified, and the subject may include an arbitrary subject; for example, the state may include static or dynamic objects, and the type may include people, natural scenes, or artifacts, and the like, which are not limited in this specification.
In one embodiment, the subject may be identified directly by the first device; alternatively, after the first device uploads the captured picture or the framing picture to a server, the server identifies the subject.
In an embodiment, the first device may bind any type of information to the subject; for example, the information may include text, an image, a video, a document, device information of the first device, network information of the network where the first device is located, user information of the user logged in on the first device, and the like, which is not limited in this specification.
In step 206, when a captured picture or framing picture acquired by the second device contains the subject, the second device displays the information bound to the subject.
In an embodiment, the first device may upload the information bound to the subject to a server; when a captured picture or framing picture acquired by the second device contains the subject, the second device obtains the bound information from the server and displays it.
In one embodiment, the subject may be identified directly by the second device; alternatively, after the second device uploads the captured picture or the framing picture to the server, the server identifies the subject.
In an embodiment, because the subject itself is recognized in the captured picture or the framing picture, the corresponding subject can be identified accurately even if the first device and the second device use different capture or framing angles, distances, and so on, which ensures that the second device can obtain the information bound to the subject.
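The patent does not prescribe a recognition algorithm, but a common way to match the same subject across different capture angles and distances is to compare feature embeddings of image regions rather than raw pixels. The following is a minimal sketch of that idea; the `embed()` feature extractor is a stand-in for any pretrained model, and all names here are illustrative assumptions rather than part of the patent.
```python
import numpy as np

def embed(image_region: np.ndarray) -> np.ndarray:
    """Hypothetical feature extractor (e.g., a pretrained CNN backbone).
    Returns a feature vector for an image region."""
    raise NotImplementedError  # stand-in: any angle/scale-robust embedding works

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_subject(candidate_region: np.ndarray,
                  known_subject_embeddings: list[np.ndarray],
                  threshold: float = 0.85) -> bool:
    """True if the candidate region looks like a previously registered subject.
    Comparing against several stored embeddings (taken from different angles
    and distances) is what makes the match robust to viewpoint changes."""
    v = embed(candidate_region)
    return any(cosine_similarity(v, ref) >= threshold
               for ref in known_subject_embeddings)
```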
In an embodiment, the second device may be a device different from the first device, for example electronic devices belonging to different users (represented by the user accounts logged in on the corresponding devices), so that information transfer and interaction between different users can be realized through the information bound to the subject. In another embodiment, the second device and the first device may be the same device, or may be different devices logged in with the same user's account, so that the user can view the information bound to the subject at any time, for example to edit or delete it.
In an embodiment, the subject may have a corresponding geographic position that marks where the subject is located, so that the second device displays the information bound to the subject when the acquired captured picture or framing picture contains the subject and the position of the second device matches the geographic position bound to the subject. With this geographic matching, even when subjects with the same or similar appearance exist in different places, the subject in the captured picture or framing picture acquired by the second device can be identified quickly, so that the information the first device bound to the subject is displayed.
In one embodiment, the corresponding geographic position may be bound to the subject by the first device, for example the geographic position of the first device when it acquired the captured picture or the framing picture, or any geographic position specified by the user of the first device.
In one embodiment, the corresponding geographic position may be bound to the subject by the server. For example, the geographic positions corresponding to subjects may be configured in the server in advance; accordingly, when the second device acquires a captured picture or framing picture, it may send its own geographic position to the server, and the server checks both that geographic position and the subject contained in the picture to determine whether the subject is the one captured by the first device.
For another example, when the first device acquires the captured picture or the framing picture, it may determine the geographic position of the subject, so that the server obtains a mapping relationship between the subject's image information and that geographic position. When the first device or other devices later upload further image information of the subject, the server may add it to the mapping relationship, so that the mapping gradually evolves into a relationship between an image set of the subject (containing image information uploaded multiple times by the first device or other devices, which can represent the subject's image features at different angles) and the geographic position. Accordingly, when the second device acquires a captured picture or framing picture, it may send its geographic position to the server, and the server can check that geographic position and the subject contained in the picture against the mapping relationship of each subject to determine whether the subject is the one captured by the first device.
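To make this evolving mapping concrete, the sketch below keeps one geographic position and a growing set of image embeddings per subject, and answers a query only when both the image and the location match. The record layout, the `haversine_m` helper, and the 50 m radius are assumptions for illustration; the patent leaves these details open.
```python
import math
from dataclasses import dataclass, field

@dataclass
class SubjectRecord:
    subject_id: str
    lat: float
    lon: float
    embeddings: list = field(default_factory=list)  # image features, many angles
    bound_info: list = field(default_factory=list)  # messages bound to the subject

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def register_sighting(record: SubjectRecord, embedding) -> None:
    # Each new upload of the same subject enriches the image set,
    # so the mapping gradually covers more viewing angles.
    record.embeddings.append(embedding)

def lookup(records, query_embedding, lat, lon,
           image_match, radius_m: float = 50.0):
    """Return bound info for a subject matching both image and location.
    `image_match(query, refs) -> bool` is any similarity test, e.g. the
    embedding comparison sketched earlier."""
    for rec in records:
        if haversine_m(lat, lon, rec.lat, rec.lon) <= radius_m \
                and image_match(query_embedding, rec.embeddings):
            return rec.bound_info
    return None
```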
Fig. 3 is a flowchart of an information presentation method based on a first device side according to an exemplary embodiment. As shown in fig. 3, the method may include:
in step 302, the first device obtains a shot or a viewfinder.
In an embodiment, the captured image may include a frame image in a photo or video captured by the electronic device (e.g., the first device or other electronic device) via a camera. In an embodiment, the framing picture may include a picture that a camera of the electronic device appears within a framing range of the camera when a photographing operation has not been performed.
Step 304, the first device binding information for the shot object contained in the shooting picture or the framing picture.
In an embodiment, a subject included in a shooting picture or a framing picture can be identified, and the subject may include an arbitrary subject; for example, the state may include static or dynamic objects, and the type may include people, natural scenes, or artifacts, and the like, which are not limited in this specification.
In one embodiment, the subject may be directly identified by the first device; or after the first device uploads the shooting picture or the framing picture to the server, the server identifies the shot object.
In an embodiment, the first device may bind any type of information to the subject, for example, the information may include text, an image, a video, a document, device information of the first device, network information of a network where the first device is located, user information of a logged-in user on the first device, and the like, which is not limited in this specification.
In an embodiment, when the shooting picture or the framing picture contains a plurality of subjects, the first device may expose the plurality of subjects, and determine a selected subject in response to a user selection operation to bind the information to the selected subject. However, only a single subject may be selected, or a plurality of subjects may be simultaneously selected, which is not limited in this specification.
Step 306: the first device uploads a first binding relationship between the subject and the information to a server, so that the server determines the information bound to the subject according to the first binding relationship and provides it to a user device that subsequently captures or frames the subject.
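A first binding relationship can be pictured as a small record tying together the subject's image features, the bound information, and optional context such as the geographic position and the binding user. The client-side upload might look like the sketch below; the `/bindings` endpoint and the field names are hypothetical, not taken from the patent.
```python
import json
import urllib.request

def upload_binding(server_url: str, subject_features: list[float],
                   info: str, lat: float, lon: float, user_id: str) -> None:
    """Upload a first binding relationship (subject -> info) to the server."""
    payload = {
        "subject_features": subject_features,  # e.g., an image embedding
        "info": info,                          # text, or a media reference
        "geo": {"lat": lat, "lon": lon},       # optional geographic binding
        "user_id": user_id,                    # lets other clients show who bound it
    }
    req = urllib.request.Request(
        f"{server_url}/bindings",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # a real client would check the status and retry on failure
```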
In an embodiment, the subject may have a corresponding geographic position, so that the information bound to the subject is provided to a user device that subsequently captures or frames the subject at that geographic position. With this geographic matching, when subjects with the same or similar appearance exist in different places, on the one hand the cost of counterfeiting a subject is raised, and on the other hand the user device capturing or framing the subject can be identified quickly, so that the information the first device bound to the subject is displayed.
In one embodiment, the corresponding geographic position may be bound to the subject by the first device, for example the geographic position of the first device when it acquired the captured picture or the framing picture, or any geographic position specified by the user of the first device.
In one embodiment, the corresponding geographic position may be bound to the subject by the server. For example, the geographic positions corresponding to subjects may be configured in the server in advance; accordingly, when the second device acquires a captured picture or framing picture, it may send its own geographic position to the server, and the server checks both that geographic position and the subject contained in the picture to determine whether the subject is the one captured by the first device.
For another example, when the first device acquires the captured picture or the framing picture, it may determine the geographic position of the subject, so that the server obtains a mapping relationship between the subject's image information and that geographic position. When the first device or other devices later upload further image information of the subject, the server may add it to the mapping relationship, so that the mapping gradually evolves into a relationship between an image set of the subject (containing image information uploaded multiple times by the first device or other devices, which can represent the subject's image features at different angles) and the geographic position. Accordingly, when the second device acquires a captured picture or framing picture, it may send its geographic position to the server, and the server can check that geographic position and the subject contained in the picture against the mapping relationship of each subject to determine whether the subject is the one captured by the first device.
In one embodiment, the first device may bind other subjects contained in the captured picture or the framing picture to the subject, and then upload a second binding relationship between the subject and those other subjects to the server, so that the server determines, according to the second binding relationship, the other subjects bound to the subject, and provides the information bound to the subject to a user device that subsequently captures or frames the subject and those other subjects at the same time. This binding between the subject and other subjects, on the one hand, raises the cost of counterfeiting the subject, and on the other hand allows the user device that captures or frames the subject together with the other subjects to be identified quickly based on the second binding relationship, so that the information the first device bound to the subject is displayed.
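The second binding relationship effectively says: treat a picture as containing this subject only if its recorded companion subjects are also in view. A minimal server-side check could look like the following sketch, where `image_match` is the same assumed similarity test used in the earlier sketches.
```python
def verify_with_companions(query_subjects: list,
                           target_embeddings: list,
                           companion_embeddings_list: list[list],
                           image_match) -> bool:
    """Accept the match only if the target subject AND every companion subject
    recorded in the second binding relationship appear in the same picture.

    query_subjects: embeddings of all subjects found in the new picture.
    target_embeddings: stored embeddings of the bound subject.
    companion_embeddings_list: one stored embedding set per companion subject.
    """
    def present(ref_embeddings):
        return any(image_match(q, ref_embeddings) for q in query_subjects)

    return present(target_embeddings) and all(
        present(refs) for refs in companion_embeddings_list
    )
```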
In one embodiment, suppose the user device that subsequently captures or frames the subject is a second device. The second device may be a device different from the first device, for example electronic devices belonging to different users (represented by the user accounts logged in on the corresponding devices), so that information transfer and interaction between different users can be realized through the information bound to the subject. In another embodiment, the second device and the first device may be the same device, or may be different devices logged in with the same user's account, so that the user can view the information bound to the subject at any time, for example to edit or delete it.
Fig. 4 is a flowchart of an information presentation method based on a second device side according to an exemplary embodiment. As shown in fig. 4, the method may include:
in step 402, the second device acquires a shot or a viewfinder.
In one embodiment, the captured image may include a frame image of a photo or video captured by the electronic device (e.g., the second device or another electronic device) via a camera. In an embodiment, the framing picture may include a picture that a camera of the electronic device appears within a framing range of the camera when a photographing operation has not been performed.
Step 404, when the shooting picture or the framing picture contains a shot object and the user equipment which previously shoots or frames the shot object binds information to the shot object, the second equipment displays the information.
In one embodiment, the subject may be directly identified by the second device; or after the shooting picture or the framing picture is uploaded to the server by the second device, the server identifies the shot object.
In an embodiment, the second device uploads the subject to the server, and the server determines the information bound to the subject according to a pre-recorded first set of binding relationships between subjects and information. In other words, if the first device bound corresponding information to an identified subject while capturing or framing it earlier, a first binding relationship between that subject and the information was generated and recorded in the first binding relationship set; similarly, other devices may bind any information to any subject and record the corresponding binding relationships in the same set. Then, given the subject the second device obtains while capturing or framing, the first binding relationship corresponding to that subject can be looked up in the set, and the information bound to the subject determined from it. The same subject may be bound to multiple pieces of information, all of which the second device may acquire and display.
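Conceptually, the first binding relationship set is a searchable collection keyed by subject identity. The sketch below shows a naive in-memory version of the lookup, reusing the assumed `SubjectRecord` layout from the earlier sketch; a production server would use an image index instead of a linear scan.
```python
def find_bound_info(binding_set: list,  # list of SubjectRecord, as sketched above
                    query_embedding,
                    image_match) -> list:
    """Collect ALL pieces of information bound to a matching subject.
    Several users may have bound messages to the same subject, so the
    result is a list of messages, not a single item."""
    results = []
    for record in binding_set:
        if image_match(query_embedding, record.embeddings):
            results.extend(record.bound_info)
    return results
```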
In an embodiment, supposing the first device bound information in advance to the subject now acquired by the second device, the second device may be a device different from the first device, for example electronic devices belonging to different users (represented by the user accounts logged in on the corresponding devices), so that information transfer and interaction between different users can be realized through the information bound to the subject. In another embodiment, the second device and the first device may be the same device, or may be different devices logged in with the same user's account, so that the user can view the information bound to the subject at any time, for example to edit or delete it.
In an embodiment, the second device may determine the geographic position at which the captured picture or the framing picture was acquired, where the subject is also bound to a geographic position. With this geographic matching, when subjects with the same or similar appearance exist in different places, on the one hand the cost of counterfeiting a subject is raised, and on the other hand the user device capturing or framing the subject can be identified quickly, so that the information the first device bound to the subject is displayed.
In one embodiment, the corresponding geographic position may be bound to the subject by the first device, for example the geographic position of the first device when it acquired the captured picture or the framing picture, or any geographic position specified by the user of the first device.
In one embodiment, the corresponding geographic position may be bound to the subject by the server. For example, the geographic positions corresponding to subjects may be configured in the server in advance; accordingly, when the second device acquires a captured picture or framing picture, it may send its own geographic position to the server, and the server checks both that geographic position and the subject contained in the picture to determine whether the subject is the one captured by the first device.
For another example, when the first device acquires the captured picture or the framing picture, it may determine the geographic position of the subject, so that the server obtains a mapping relationship between the subject's image information and that geographic position. When the first device or other devices later upload further image information of the subject, the server may add it to the mapping relationship, so that the mapping gradually evolves into a relationship between an image set of the subject (containing image information uploaded multiple times by the first device or other devices, which can represent the subject's image features at different angles) and the geographic position. Accordingly, when the second device acquires a captured picture or framing picture, it may send its geographic position to the server, and the server can check that geographic position and the subject contained in the picture against the mapping relationship of each subject to determine whether the subject is the one captured by the first device.
In an embodiment, the second device may determine other subjects contained in the captured picture or the framing picture, where the subject is also bound to those other subjects; for example, when the first device previously acquired the subject, the other subjects it acquired at the same time may have been bound to the subject. This binding between the subject and other subjects, on the one hand, raises the cost of counterfeiting the subject, and on the other hand allows the user device that captures or frames the subject together with the other subjects to be identified quickly based on the second binding relationship, so that the information the first device bound to the subject is displayed.
In an embodiment, the second device may display the information in association with the subject in the captured picture or the framing picture; for example, the second device may display the information at the subject's display area (such as inside or around that area). In other embodiments, the second device may display the information in any manner, for example on a separate interface or in a floating window, either on its own or still in association with the subject.
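To make the associated display concrete: given the subject's bounding box in the frame, the client can anchor each bound message next to it. The sketch below does this with Pillow, a widely used Python imaging library; the layout policy of stacking labels above the box is an assumed choice for illustration.
```python
from PIL import Image, ImageDraw  # pip install pillow

def overlay_bound_info(frame: Image.Image,
                       subject_box: tuple[int, int, int, int],
                       messages: list[str]) -> Image.Image:
    """Draw the subject's bounding box and stack its bound messages above it,
    so virtual information appears attached to the real subject (AR-style)."""
    out = frame.copy()
    draw = ImageDraw.Draw(out)
    left, top, right, bottom = subject_box
    draw.rectangle(subject_box, outline="white", width=3)  # mark the subject
    y = top - 18
    for msg in messages:
        draw.text((left, y), msg, fill="white")  # one label per bound message
        y -= 18
    return out
```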
In an embodiment, the second device may mark the subject in the captured picture or the framing picture to distinguish it from the other picture content, making it easy for the user to view the subject and its bound information together.
Fig. 5 is a flowchart of a server-side based information presentation method according to an exemplary embodiment. As shown in fig. 5, the method may include:
Step 502: a server acquires a first binding relationship, uploaded by a first device, between a subject and information, where the subject is located in a first captured picture or first framing picture acquired by the first device.
In an embodiment, the captured picture may include a photo, or a frame image of a video, captured by an electronic device through a camera. In an embodiment, the framing picture may be the picture that appears within the framing range of the electronic device's camera before any capture operation is performed. For example, the first captured picture or first framing picture comes from a camera of the first device.
In one embodiment, the subject may be identified directly by the first device; alternatively, after the first device uploads the first captured picture or the first framing picture to the server, the server identifies the subject.
Step 504: the server acquires a second captured picture or second framing picture uploaded by any device.
Step 506: when the second captured picture or the second framing picture contains the subject, the server determines the information bound to the subject according to the first binding relationship and provides it to that device, so that the device displays the information bound to the subject.
In an embodiment, the device in question may be the first device or a second device different from the first device; the first device and the second device may be logged in with user accounts of the same user or of different users.
In an embodiment, the server may determine the geographic position at which the device acquired the second captured picture or the second framing picture, so as to provide the information bound to the subject to the device only when the second captured picture or the second framing picture contains the subject and that geographic position matches the geographic position corresponding to the subject. With this geographic matching, when subjects with the same or similar appearance exist in different places, on the one hand the cost of counterfeiting a subject is raised, and on the other hand the user device capturing or framing the subject can be identified quickly, so that the information the first device bound to the subject is displayed.
In one embodiment, the corresponding geographic position may be bound to the subject by the first device, for example the geographic position of the first device when it acquired the captured picture or the framing picture, or any geographic position specified by the user of the first device.
In one embodiment, the corresponding geographic position may be bound to the subject by the server. For example, the geographic positions corresponding to subjects may be configured in the server in advance; accordingly, when the second device acquires a captured picture or framing picture, it may send its own geographic position to the server, and the server checks both that geographic position and the subject contained in the picture to determine whether the subject is the one captured by the first device.
For another example, when the first device acquires the captured picture or the framing picture, it may determine the geographic position of the subject, so that the server obtains a mapping relationship between the subject's image information and that geographic position. When the first device or other devices later upload further image information of the subject, the server may add it to the mapping relationship, so that the mapping gradually evolves into a relationship between an image set of the subject (containing image information uploaded multiple times by the first device or other devices, which can represent the subject's image features at different angles) and the geographic position. Accordingly, when the second device acquires a captured picture or framing picture, it may send its geographic position to the server, and the server can check that geographic position and the subject contained in the picture against the mapping relationship of each subject to determine whether the subject is the one captured by the first device.
In an embodiment, the server may acquire a second binding relationship, uploaded by the first device, between the subject and a specific subject, where the specific subject is another subject in the first captured picture or the first framing picture. The server may then determine the other subjects in the second captured picture or the second framing picture, so as to provide the information bound to the subject to the device only when the second captured picture or the second framing picture also contains the specific subject. This binding between the subject and other subjects, on the one hand, raises the cost of counterfeiting the subject, and on the other hand allows the user device that captures or frames the subject together with the other subjects to be identified quickly based on the second binding relationship, so that the information the first device bound to the subject is displayed.
As can be seen from the above embodiments, by binding information to a subject, information transfer and interaction can be realized around that subject even if the first device and the second device have no pre-established association (for example, the users of the two devices are not friends) and even if the two devices are never within the same geographic range at the same time (for example, they need not be near the subject simultaneously, and there is no special timing requirement). This expands the application scenarios in which information interaction is realized between different devices (or between the users of those devices).
For ease of understanding, one or more embodiments of this specification are described below taking an arbitrary application as an example. Assume that the server 11 runs the server side of that application, the mobile phone 131 runs its client 1 logged in with the user account of user A, and the mobile phone 141 runs its client 2 logged in with the user account of user B. Through the technical solution of this specification, user A and user B can realize information interaction based on a subject.
Fig. 6 is a schematic diagram of framing implemented by the client 1 according to an exemplary embodiment. After user A starts the client 1 on the mobile phone 131, the shooting function supported by the client 1 can be enabled, and the client 1 can implement framing, capturing, and similar functions by calling the camera of the mobile phone 131. As shown in fig. 6, assume the client 1 displays a framing picture 600 on the mobile phone 131, which shows the picture within the camera's framing range; the framing picture 600 may contain several subjects, such as the photo 601 and the pendant 602 shown in fig. 6.
It should be noted that although a "framing picture" is used in this embodiment, a captured picture obtained by performing a capture operation with the camera may be used instead, which is not limited in this specification. When the "framing picture" is used, on the one hand information can be displayed without any capture operation by the user, and on the other hand the virtual information can be displayed in association with the real subject, realizing Augmented Reality (AR) for the subject and greatly strengthening the relevance and interactivity between the virtual information and the real subject.
Fig. 7 is a diagram illustrating subjects in a framing picture according to an exemplary embodiment. The client 1 can mark the subjects it recognizes in the framing picture 600; for example, as shown in fig. 7, for recognized subjects such as the photo 601 and the pendant 602, a wire frame 701 and a wire frame 702 may be displayed around the photo 601 and the pendant 602 respectively, so that user A can see which subjects the client 1 has recognized. In other embodiments, the client 1 may mark the recognized subjects in other ways, or not mark them at all, which is not limited in this specification.
In an embodiment, the client 1 can identify the subjects contained in the framing picture 600 by itself. For example, the client 1 may have a built-in recognition model, recognition library, or the like for subjects, and by running recognition on the framing picture 600 it can determine the subjects the picture contains.
In an embodiment, the client 1 may upload the framing picture 600 to the server side running on the server 11, which processes the framing picture 600 to identify the subjects it contains; the client 1 then learns of those subjects from the server. The client 1 can continuously upload the framing picture 600 acquired by the camera to the server, ensuring that the subjects contained in the framing picture 600 are marked quickly and user A's operation is not affected; alternatively, by monitoring the framing picture 600 for changes, the client 1 may upload it only when the picture content has changed significantly, so as to reduce the amount of uploaded data.
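One inexpensive way to implement the "upload only on significant change" policy is to compare a coarse fingerprint of consecutive frames. The sketch below uses the mean absolute pixel difference between downscaled grayscale frames; the 32-pixel thumbnail and the 10% threshold are assumed values, not taken from the patent.
```python
import numpy as np
from PIL import Image

def fingerprint(frame: Image.Image, size: int = 32) -> np.ndarray:
    """Downscale to a small grayscale thumbnail; cheap and noise-tolerant."""
    thumb = frame.convert("L").resize((size, size))
    return np.asarray(thumb, dtype=np.float32) / 255.0

def changed_significantly(prev: np.ndarray, cur: np.ndarray,
                          threshold: float = 0.10) -> bool:
    """True if the mean absolute difference exceeds the threshold,
    i.e., the viewfinder content has moved on and is worth re-uploading."""
    return float(np.mean(np.abs(cur - prev))) > threshold
```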
In an embodiment, user A may select a display object or display region in the framing picture 600, so that the client 1 only recognizes the selected object or region, in the manner described above or otherwise, without recognizing the rest of the picture content.
Fig. 8 is a schematic diagram of triggering an information binding operation for a photographic subject according to an exemplary embodiment. As shown in fig. 8, user A may select any subject in the framing picture 600, such as the photo 601, and may then trigger an information binding operation for the photo 601 by triggering the message button 603 in the framing picture 600, so that user A can bind information to the photo 601.
Fig. 9 is a schematic diagram of inputting information bound to a photographic subject according to an exemplary embodiment. As shown in fig. 9, when user A wishes to bind information to the photo 601, a message box 900 may be displayed in the area associated with the photo 601, so that the information user A binds to the photo 601 is shown in the message box 900. Assuming user A binds text-type information to the photo 601, the mobile phone 131 may display a virtual keyboard as shown in fig. 9 with which user A enters the corresponding information in the message box 900. In other embodiments, user A may bind any other type of information to the photo 601, such as photos, videos, or documents, which is not limited in this specification.
Fig. 10 is a diagram illustrating information bound to a photographic subject according to an exemplary embodiment. As shown in fig. 10, after user A completes the input, information 1000 bound to the photo 601 is formed in the message box 900. User A may bind a larger amount of information to the photo 601, which is not limited in this specification. User A may also edit the information 1000 bound to the photo 601, for example modifying its content or deleting it.
Fig. 11 is a diagram illustrating information that all users have bound to a photographic subject according to an exemplary embodiment. As shown in fig. 11, when other users have bound information to the photo 601 in advance, then when user A views the photo 601 through the client 1, the framing picture 600 may display not only the information 1000 user A bound to the photo 601 but also the information other users bound to it, such as the information 1101, 1102, and 1103 shown in fig. 11, so that interaction between user A and the message leavers behind the information 1101-1103 is realized based on the photo 601.
It should be noted that the information 1000, the information 1101-1103, and the like shown in fig. 11 inevitably occlude part of subjects such as the photo 601 and the pendant 602; for a better visual experience, the visual properties of the information bound to a subject may be adjusted, for example using a semi-transparent background, a lighter font color, and so on, which is not limited in this specification.
Fig. 12 is a schematic diagram for distinctively presenting information bound to a photographic subject according to an exemplary embodiment. For ease of viewing, the information bound by user A and by other users to the photo 601 may be displayed in visually distinct ways so that user A can tell them apart. For example, as shown in fig. 12, the information 1000 bound by user A may use a black background with a white font, while the information 1101-1103 bound by other users may use a white background with a black font. In other embodiments, other forms of differentiated display may be used, which is not limited in this specification.
Fig. 13 is a schematic diagram illustrating user information for performing a binding operation according to an exemplary embodiment. To strengthen interaction between users, the framing picture 600 may show, alongside the information bound to a subject, information about the users who performed the binding operations. For example, as shown in fig. 13, since user A bound the information 1000 to the photo 601, the avatar 1301 of user A may be shown on the left side of the information 1000; similarly, on the left side of the information 1101-1103 bound to the photo 601, the avatars 1302-1304 of the respective users may be shown.
Fig. 14 is a schematic diagram illustrating social operations implemented based on displayed user information according to an exemplary embodiment. As shown in fig. 14, when it is detected that user A has triggered the avatar 1302, a contact information interface 1400 may be shown on the mobile phone 131 for user A to view the information of the user "White", to whom the avatar 1302 belongs, such as name and contact details. Each user can choose whether to open his or her personal information to other users, and which items of personal information to open, so as to meet users' privacy requirements. In an embodiment, the contact information interface 1400 may include social operation options for the user, such as "call" and "add friend", so that further social operations can be carried out between users.
Fig. 15 is a schematic diagram of framing implemented by the client 2 according to an exemplary embodiment. After user B starts the client 2 on the mobile phone 141, the shooting function supported by the client 2 can be enabled, and the client 2 can implement framing, capturing, and similar functions by calling the camera of the mobile phone 141. As shown in fig. 15, assume the client 2 displays a framing picture 1500 on the mobile phone 141, which shows the picture within the camera's framing range; the framing picture 1500 may contain several subjects, such as the photo 1501 and the pendant 1502 shown in fig. 15.
The client 2 may identify the subjects contained in the framing picture 1500 by itself, or may upload the framing picture 1500 to the server side and let it identify them; this is similar to the client 1 and is not described again here.
Fig. 16 is a diagram illustrating information bound to a photographic subject according to an exemplary embodiment. As shown in fig. 16, assuming user B selects the photo 1501 in the framing picture 1500, then when there is information bound to the photo 1501, the client 2 can display the corresponding information 1601-1604 and the like in the framing picture 1500 for user B to view.
Assuming that the user a binds the corresponding information 1000 for the photo 601 through the embodiments shown in fig. 6-10, the client 1 may upload the binding relationship between the photo 601 and the information 1000 to the server. Similarly, when other users bind the corresponding information 1101-1103 to the photo 601, the clients used by these users will also upload the binding relationship between the photo 601 and the corresponding information 1101-1103 to the server. Then, when the user B selects the photo 1501 included in the view finding screen 1500 through the client 2, the server may compare the photo 1501 with each binding relationship recorded in advance, and if the binding relationship related to the photo 1501 is found, the information bound to the photo 1501 may be determined through the binding relationship, so as to be displayed to the user B. For example, when the server determines that the photo 1501 matches the photo 601, the binding relationship corresponding to the photo 601 uploaded by the user a in advance may be obtained, so that the information 1000 bound to the photo 601 is returned to the client 2 and shown as the information 1601 shown in fig. 16; similarly, the server may also return, to the client 2, the information bound to the photo 601 recorded by these binding relationships according to other binding relationships associated with the photo 601, and present the information as the information 1602 to 1604 shown in fig. 16. In the embodiments shown in fig. 11-13, the presentation process of the information 1101-1103 by the client 1 and the presentation process of the information 1601-1604 by the client 2 have the same principle, and may be referred to each other.
In an embodiment, by identifying the subjects contained in the viewfinder screen 1500, the server can still match the photo 1501 to the photo 601 even if the mobile phone 131 and the mobile phone 132 shoot the photo 601 and the photo 1501 from different angles and distances, which helps improve the success rate of information display based on the subject.
In an embodiment, the user B may bind information to a subject contained in the viewfinder screen 1500, such as the photo 1501, by triggering the message button 1503 contained in the viewfinder screen 1500 shown in fig. 15-16; the process is similar to how the user A bound the information 1000 to the photo 601 in the embodiments shown in fig. 6-10, and is not described here again.
Fig. 17 is a diagram illustrating another example of information bound to a photographic subject according to an exemplary embodiment. In an embodiment, the same or similar subjects may exist in different places. For example, the user B may obtain, at a first place through the mobile phone 132, the viewfinder screen 1500 containing the photo 1501 and the pendant 1502 shown in fig. 15, and obtain, at a second place through the mobile phone 132, a viewfinder screen 1700 containing a photo 1701 and a pendant 1702 shown in fig. 17. Especially when the subjects are publicly sold articles, the photo 1501 and the pendant 1502 at the first place may be visually indistinguishable from the photo 1701 and the pendant 1702 at the second place.
In one embodiment, when binding information to a subject, the user also uploads the user's geographic location at the same time. For example, when the user A views the photo 601 and the pendant 602 through the mobile phone 131 at the first place, the user A uploads the geographic location 1 where the mobile phone 131 is located, so that the photo 601, the information 1000, and the geographic location 1 are bound together and uploaded to the server. Then, when the user B uploads the photo 1501 and the geographic location 2 where the mobile phone 132 is located to the server through the client 2, if the server determines that the uploaded photo 1501 matches the photo 601, it further checks whether the geographic location 2 matches the geographic location 1; if so, the server may determine that the photo 1501 is the photo 601 and return the information 1000 to the client 2 for display on the viewfinder screen 1500.
When the user B uploads the photo 1701 and the geographic location 3 where the mobile phone 132 is located to the server through the client 2, if the server determines that the uploaded photo 1701 matches the photo 601 but the geographic location 3 does not match the geographic location 1, the server will not return the information 1000 to the client 2; instead, it returns the information 1703 recorded in the binding relationship that matches both the photo 1701 and the geographic location 3.
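To make the location test concrete, the following sketch assumes the server treats two positions as matching when they lie within a fixed radius; the haversine helper, the 100-metre threshold, and the binding-record layout are illustrative assumptions, and subjects_match is the equality stand-in redefined here from the previous sketch.

    # Assumed radius test; the threshold and haversine helper are illustrative.
    import math

    def subjects_match(a, b):  # same equality stand-in as in the earlier sketch
        return a == b

    def geo_match(loc_a, loc_b, radius_m=100.0):
        """True when two (lat, lon) positions lie within radius_m metres."""
        lat1, lon1 = map(math.radians, loc_a)
        lat2, lon2 = map(math.radians, loc_b)
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371000 * 2 * math.asin(math.sqrt(h)) <= radius_m

    def lookup_with_location(uploaded_subject, uploaded_loc, bindings):
        """Return info only when both the subject and the location match."""
        return [b["info"] for b in bindings
                if subjects_match(b["subject"], uploaded_subject)
                and geo_match(b["location"], uploaded_loc)]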
In another embodiment, when the user binds information to any subject in the viewfinder screen, at least one other subject in the viewfinder screen is bound to it at the same time. For example, when the user A views the photo 601 and the pendant 602 through the mobile phone 131 at the first place, the user A binds the information 1000 to the photo 601 and also binds the pendant 602 to the photo 601, then uploads the photo 601, the pendant 602, and the information 1000 to the server. Then, when the user B uploads the photo 1501 and the pendant 1502 to the server through the client 2, if the server determines that the uploaded photo 1501 matches the photo 601, it further checks whether the pendant 1502 matches the pendant 602; if so, the server may determine that the photo 1501 is the photo 601 and return the information 1000 to the client 2 for display on the viewfinder screen 1500.
When the user B performs framing through the client 2, assume that a viewfinder screen 1800 as shown in fig. 18 is obtained, in which the subjects include a photo 1801 and an ornament 1802. When the client 2 uploads the photo 1801 and the ornament 1802 to the server, if the server determines that the uploaded photo 1801 matches the photo 601 but the ornament 1802 does not match the pendant 602, the server will not return the information 1000 to the client 2; instead, it returns the information 1803 recorded in the binding relationship that matches both the photo 1801 and the ornament 1802.
In other embodiments, when information is bound to a subject, both a geographic location and other subjects may be bound to that subject at the same time, so that the server returns the bound information only when it determines that the subject, the geographic location, and the other subjects all match, achieving a more accurate matching operation.
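Pulling the three conditions together, a combined matcher might look like the following sketch, which reuses the subjects_match and geo_match helpers from the sketches above; the optional "location" and "companions" fields are assumptions about how a binding relationship could be stored, not the disclosure's actual format.

    # Combined match: subject, geographic position, and companion subjects
    # must all agree before the bound information is returned.
    def lookup_combined(query, bindings):
        results = []
        for b in bindings:
            if not subjects_match(b["subject"], query["subject"]):
                continue  # the subject itself does not match
            if "location" in b and not geo_match(b["location"],
                                                 query["location"]):
                continue  # bound at a different geographic position
            if "companions" in b and not all(
                    any(subjects_match(c, q) for q in query["companions"])
                    for c in b["companions"]):
                continue  # a bound companion subject is missing
            results.append(b["info"])
        return results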
FIG. 19 is a schematic block diagram of an electronic device according to an exemplary embodiment. Referring to fig. 19, at the hardware level, the electronic device includes a processor 1902, an internal bus 1904, a network interface 1906, a memory 1908, and a non-volatile memory 1910, and may of course also include hardware required by other services. The processor 1902 reads a corresponding computer program from the non-volatile memory 1910 into the memory 1908 and then runs it, forming an information presentation apparatus at the logical level. Of course, besides a software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units, and may also be hardware or logic devices.
In an embodiment, referring to fig. 20, in a software implementation, the information display apparatus may include:
a first acquisition unit 2001 that causes the first device to acquire a shooting picture or a framing picture;
an information binding unit 2002 that causes the first device to bind information to a subject contained in the shooting picture or the framing picture;
a first uploading unit 2003 that causes the first device to upload a first binding relationship between the subject and the information to a server, so that the server determines the information bound to the subject according to the first binding relationship and provides it to a user device that subsequently shoots or frames the subject.
Optionally,
the subject is obtained by the first device identifying the shooting picture or the framing picture;
or the subject is obtained through identification by the server after the first device uploads the shooting picture or the framing picture to the server.
Optionally, the apparatus further includes:
an object presentation unit 2004 that, when the shooting picture or the framing picture contains a plurality of subjects, causes the first device to present the plurality of subjects;
an object selection unit 2005 that causes the first device to determine a selected subject in response to a user's selection operation, so as to bind the information to the selected subject.
Optionally, the information bound to the subject includes at least one of: text, images, videos, documents, device information of the first device, network information of the network where the first device is located, and user information of a logged-in user on the first device.
Optionally, the first uploading unit 2003 causes the server to determine, according to the first binding relationship, the information bound to the subject, to be provided to a user device that shoots or frames the subject at the geographic location corresponding to the subject.
Optionally, the apparatus further includes:
an object binding unit 2006 that causes the first device to bind another subject contained in the shooting picture or the framing picture to the subject;
a second uploading unit 2007 that causes the first device to upload a second binding relationship between the subject and the other subject to the server, so that the server determines the other subject bound to the subject according to the second binding relationship, and the information bound to the subject is provided to a user device that subsequently shoots or frames the subject and the other subject at the same time.
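Read as ordinary client code, the units 2001-2007 might collapse into something like the class below; the camera interface, the endpoint path, and the JSON payloads are assumptions made purely for illustration, not the disclosure's actual interfaces.

    # Condensed sketch of units 2001-2007; all interfaces are illustrative.
    import requests

    class FirstDeviceApparatus:
        def __init__(self, camera, server_url):
            self.camera = camera          # assumed to expose capture()
            self.server_url = server_url

        def acquire_frame(self):          # first acquisition unit 2001
            return self.camera.capture()

        def bind_info(self, subject, info):        # information binding unit 2002
            return {"subject": subject, "info": info}    # first binding relationship

        def bind_other_subject(self, subject, other):    # object binding unit 2006
            return {"subject": subject, "other": other}  # second binding relationship

        def upload(self, relation):       # first/second uploading units 2003/2007
            requests.post(self.server_url + "/bindings", json=relation)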
In an embodiment, referring to fig. 21, in a software implementation, the information display apparatus may include:
a second acquisition unit 2101 that causes a second device to acquire a shooting picture or a framing picture;
an information presentation unit 2102 that, when the shooting picture or the framing picture contains a subject and information has been bound to the subject by a user device that previously shot or framed the subject, causes the second device to present the information.
Optionally,
the subject is obtained by the second device identifying the shooting picture or the framing picture;
or the subject is obtained through identification by the server after the second device uploads the shooting picture or the framing picture to the server.
Optionally, the subject is uploaded to a server by the second device, so that the server determines the information bound to the subject according to a first set of binding relationships, recorded in advance, between respective subjects and information.
Optionally, the user equipment includes the second device or another device different from the second device.
Optionally, the information presentation unit 2102 is configured to display the information when the shooting picture or the framing picture contains a subject, information has been bound to the subject by a user device that previously shot or framed the subject, and the geographic location at which the second device obtained the shooting picture or the framing picture matches the geographic location corresponding to the subject.
Optionally, the apparatus further includes:
a first object determination unit 2103 that causes the second device to determine other subjects contained in the shooting picture or the framing picture, to which the subject is also bound.
Optionally, the information presentation unit 2102 is specifically configured to:
cause the second device to display the subject in association with the information in the shooting picture or the framing picture.
Optionally, the apparatus further includes:
an object marking unit 2104 that causes the second device to mark the subject in the shooting picture or the framing picture so as to distinguish it from other picture content in the shooting picture or the framing picture.
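Similarly, the second-device units 2101-2104 could be read as the flow below: upload the framed picture, fetch any bound information, then mark and display it. The endpoint, the payload format, and the display interface are assumptions for illustration; the frame is assumed to be raw bytes, hence the base64 encoding.

    # Condensed sketch of units 2101-2104; interfaces are illustrative.
    import base64
    import requests

    class SecondDeviceApparatus:
        def __init__(self, camera, display, server_url):
            self.camera = camera          # assumed to expose capture() -> bytes
            self.display = display        # assumed to expose mark() and show()
            self.server_url = server_url

        def show_bound_info(self, location):
            frame = self.camera.capture()              # second acquisition unit 2101
            payload = {"frame": base64.b64encode(frame).decode(),
                       "location": location}
            resp = requests.post(self.server_url + "/query", json=payload)
            for item in resp.json().get("matches", []):
                self.display.mark(item["subject"])     # object marking unit 2104
                self.display.show(item["subject"],     # information presentation
                                  item["info"])        # unit 2102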
In an embodiment, referring to fig. 22, in a software implementation, the information display apparatus may include:
a first relationship acquisition unit 2201 that causes a server to acquire a first binding relationship between a subject uploaded by a first device and information, the subject being located in a first shooting picture or a first framing picture acquired by the first device;
a picture acquisition unit 2202 that causes the server to acquire a second shooting picture or a second framing picture uploaded by an arbitrary device;
an information determination unit 2203 that, when the subject is contained in the second shooting picture or the second framing picture, causes the server to determine the information bound to the subject according to the first binding relationship and provide it to the arbitrary device for presentation.
Optionally, the arbitrary device includes the first device or a second device different from the first device.
Optionally, the information determination unit 2203 causes the server to determine the information bound to the subject according to the first binding relationship and provide it to the arbitrary device for display when the second shooting picture or the second framing picture contains the subject and the geographic location of the arbitrary device when it acquired the second shooting picture or the second framing picture matches the geographic location corresponding to the subject.
Optionally, the apparatus further includes:
a second relationship acquisition unit 2204 that causes the server to acquire a second binding relationship, uploaded by the first device, between the subject and a specific subject, the specific subject being a subject other than the subject in the first shooting picture or the first framing picture;
a second object determination unit 2205 that causes the server to determine the subjects other than the subject in the second shooting picture or the second framing picture, so that when the specific subject is contained in the second shooting picture or the second framing picture, the information bound to the subject is provided to the arbitrary device.
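On the server side, the units 2201-2205 could be sketched as the class below, which records both binding relationships and answers a query only when the subject, its geographic position (if bound), and any companion subjects all match. The storage layout is an assumption; geo_match is the radius-test helper sketched earlier, and subject comparison is again simplified to equality.

    # Condensed sketch of units 2201-2205; the storage layout is illustrative.
    class InfoServer:
        def __init__(self):
            self.first_relations = []   # (subject, info, location-or-None)
            self.second_relations = []  # (subject, specific companion subject)

        def record_first(self, subject, info, location=None):
            self.first_relations.append((subject, info, location))

        def record_second(self, subject, specific_subject):
            self.second_relations.append((subject, specific_subject))

        def query(self, frame_subjects, location):
            """Return the info bound to subjects found in an uploaded frame."""
            hits = []
            for subject, info, bound_loc in self.first_relations:
                if subject not in frame_subjects:
                    continue  # the bound subject is not in the frame
                if bound_loc is not None and not geo_match(bound_loc, location):
                    continue  # geographic positions do not match
                companions = [s for a, s in self.second_relations if a == subject]
                if companions and not all(c in frame_subjects for c in companions):
                    continue  # a companion subject is missing from the frame
                hits.append(info)
            return hits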
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory, random access memory (RAM), and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.

Claims (25)

1. An information display method, comprising:
a first device obtains a shooting picture or a framing picture;
the first device binds information to a shot object contained in the shooting picture or the framing picture, and binds at least one other shot object contained in the shooting picture or the framing picture to the shot object;
and a second device displays the information bound to the shot object when a shooting picture or framing picture obtained by the second device contains the shot object and the at least one other shot object and the position of the second device matches the geographic position corresponding to the shot object.
2. An information display method, comprising:
a first device obtains a shooting picture or a framing picture;
the first device binds information to a shot object contained in the shooting picture or the framing picture, and binds other shot objects contained in the shooting picture or the framing picture to the shot object;
and the first device uploads a first binding relationship between the shot object and the information to a server, and uploads a second binding relationship between the shot object and the other shot objects to the server, so that the server determines the other shot objects bound to the shot object according to the second binding relationship and determines the information bound to the shot object according to the first binding relationship, in order to provide the information to a user device that subsequently shoots or frames the shot object and the other shot objects at the geographic position corresponding to the shot object.
3. The method of claim 2,
the shot object is obtained by the first device identifying the shooting picture or the framing picture;
or the shot object is obtained through identification by the server after the first device uploads the shooting picture or the framing picture to the server.
4. The method of claim 2, further comprising:
when the shooting picture or the framing picture contains a plurality of shot objects, the first device presents the plurality of shot objects;
and the first device determines a selected shot object in response to a user's selection operation, so as to bind the information to the selected shot object.
5. The method of claim 2, wherein the information bound to the shot object comprises at least one of: text, images, videos, documents, device information of the first device, network information of the network where the first device is located, and user information of a logged-in user on the first device.
6. An information display method, comprising:
a second device acquires a shooting picture or a framing picture;
and when the shooting picture or the framing picture contains a shot object and other shot objects to which the shot object is also bound, and information has been bound to the shot object by a user device that previously shot or framed the shot object, the second device displays the information when the geographic position at which the second device acquired the shooting picture or the framing picture matches the geographic position corresponding to the shot object.
7. The method of claim 6,
the shot object is obtained by the second device identifying the shooting picture or the framing picture;
or the shot object is obtained through identification by the server after the second device uploads the shooting picture or the framing picture to the server.
8. The method of claim 6, wherein the shot object is uploaded to a server by the second device, and the information bound to the shot object is determined by the server according to a first set of binding relationships, recorded in advance, between respective shot objects and information.
9. The method of claim 6, wherein the user equipment comprises the second device or another device distinct from the second device.
10. The method of claim 6, further comprising:
the second device displays the shot object in association with the information in the shooting picture or the framing picture.
11. The method of claim 6, further comprising:
the second device marks the shot object in the shooting picture or the framing picture so as to distinguish it from other picture content in the shooting picture or the framing picture.
12. An information display method, comprising:
a server obtains a first binding relationship, uploaded by a first device, between a shot object and information, and a second binding relationship, uploaded by the first device, between the shot object and a specific shot object, wherein the shot object is located in a first shooting picture or a first framing picture obtained by the first device, and the specific shot object is a shot object other than the shot object in the first shooting picture or the first framing picture;
the server acquires a second shooting picture or a second framing picture uploaded by an arbitrary device;
and when the second shooting picture or the second framing picture contains the shot object and the specific shot object, and the geographic position at which the arbitrary device acquired the second shooting picture or the second framing picture matches the geographic position corresponding to the shot object, the server determines the information bound to the shot object according to the first binding relationship and provides it to the arbitrary device so that the arbitrary device can display the information bound to the shot object.
13. The method of claim 12, wherein the arbitrary device comprises the first device or a second device distinct from the first device.
14. An information presentation device, comprising:
a first acquisition unit that causes a first device to acquire a shooting picture or a framing picture;
an information binding unit that causes the first device to bind information to a subject contained in the shooting picture or the framing picture;
an object binding unit that causes the first device to bind other subjects contained in the shooting picture or the framing picture to the subject;
a first uploading unit that causes the first device to upload a first binding relationship between the subject and the information to a server, so that the server determines the information bound to the subject according to the first binding relationship, to provide the information to a user device that subsequently shoots or frames the subject at the geographic position corresponding to the subject;
and a second uploading unit that causes the first device to upload a second binding relationship between the subject and the other subjects to the server, so that the server determines the other subjects bound to the subject according to the second binding relationship, and the information bound to the subject is provided to a user device that shoots or frames the subject and the other subjects at the same time.
15. The apparatus of claim 14,
the subject is obtained by the first device identifying the shooting picture or the framing picture;
or the subject is obtained through identification by the server after the first device uploads the shooting picture or the framing picture to the server.
16. The apparatus of claim 14, further comprising:
an object display unit that, when the shooting picture or the framing picture contains a plurality of subjects, causes the first device to display the plurality of subjects;
an object selection unit that causes the first device to determine a selected photographic subject in response to a user selection operation to bind the information to the selected photographic subject.
17. The apparatus of claim 14, wherein the information bound to the subject comprises at least one of: text, images, videos, documents, device information of the first device, network information of the network where the first device is located, and user information of a logged-in user on the first device.
18. An information presentation device, comprising:
a second acquisition unit that causes a second device to acquire a shooting picture or a framing picture;
a first object determination unit that causes the second device to determine other subjects contained in the shooting picture or the framing picture, to which the subject is also bound;
and an information display unit that causes the second device to display information when the shooting picture or the framing picture contains a subject, the information has been bound to the subject by a user device that previously shot or framed the subject, and the geographic position at which the second device acquired the shooting picture or the framing picture matches the geographic position corresponding to the subject.
19. The apparatus of claim 18,
the subject is obtained by the second device identifying the shooting picture or the framing picture;
or the subject is obtained through identification by the server after the second device uploads the shooting picture or the framing picture to the server.
20. The apparatus of claim 18, wherein the subject is uploaded to a server by the second device, and wherein the server determines the information bound to the subject according to a first set of binding relationships, recorded in advance, between respective subjects and information.
21. The apparatus of claim 18, wherein the user equipment comprises the second device or another device distinct from the second device.
22. The apparatus according to claim 18, wherein the information presentation unit is specifically configured to:
cause the second device to display the subject in association with the information in the shooting picture or the framing picture.
23. The apparatus of claim 18, further comprising:
an object marking unit that causes the second device to mark the subject in the shooting picture or the framing picture so as to distinguish it from other picture content in the shooting picture or the framing picture.
24. An information presentation device, comprising:
a first relationship acquisition unit that causes a server to acquire a first binding relationship between a subject uploaded by a first device and information, wherein the subject is located in a first shooting picture or a first framing picture acquired by the first device;
a second relationship acquisition unit that causes the server to acquire a second binding relationship, uploaded by the first device, between the subject and a specific subject, wherein the specific subject is a subject other than the subject in the first shooting picture or the first framing picture;
a picture acquisition unit that causes the server to acquire a second shooting picture or a second framing picture uploaded by an arbitrary device;
and an information determination unit that, when the second shooting picture or the second framing picture contains the subject and the specific subject, and the geographic position at which the arbitrary device acquired the second shooting picture or the second framing picture matches the geographic position corresponding to the subject, causes the server to determine, according to the first binding relationship, the information bound to the subject and provide it to the arbitrary device, so that the arbitrary device displays the information bound to the subject.
25. The apparatus of claim 24, wherein the arbitrary device comprises the first device or a second device distinct from the first device.
CN201710831862.2A 2017-09-15 2017-09-15 Information display method and device Active CN109510752B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201710831862.2A CN109510752B (en) 2017-09-15 2017-09-15 Information display method and device
TW107119259A TW201915721A (en) 2017-09-15 2018-06-05 Information display method and device
PCT/CN2018/103957 WO2019052374A1 (en) 2017-09-15 2018-09-04 Information display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710831862.2A CN109510752B (en) 2017-09-15 2017-09-15 Information display method and device

Publications (2)

Publication Number Publication Date
CN109510752A CN109510752A (en) 2019-03-22
CN109510752B true CN109510752B (en) 2021-11-02

Family

ID=65722400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710831862.2A Active CN109510752B (en) 2017-09-15 2017-09-15 Information display method and device

Country Status (3)

Country Link
CN (1) CN109510752B (en)
TW (1) TW201915721A (en)
WO (1) WO2019052374A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110275977B (en) * 2019-06-28 2023-04-21 北京百度网讯科技有限公司 Information display method and device
CN111159449B (en) * 2019-12-31 2024-04-16 维沃移动通信有限公司 Image display method and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489002A (en) * 2013-09-27 2014-01-01 广州中国科学院软件应用技术研究所 Reality augmenting method and system
CN103620600A (en) * 2011-05-13 2014-03-05 谷歌公司 Method and apparatus for enabling virtual tags
CN106130885A (en) * 2016-07-18 2016-11-16 吴东辉 Method and system based on image recognition opening relationships
US20170076505A1 (en) * 2015-06-24 2017-03-16 Microsoft Technology Licensing, Llc Virtual place-located anchor
CN106789953A (en) * 2016-11-30 2017-05-31 宇龙计算机通信科技(深圳)有限公司 A kind of data processing method and AR equipment
CN106982387A (en) * 2016-12-12 2017-07-25 阿里巴巴集团控股有限公司 It has been shown that, method for pushing and the device and barrage application system of barrage

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013130136A1 (en) * 2012-02-29 2013-09-06 Identive Group, Inc. Systems and methods for providing an augmented reality experience
US10216996B2 (en) * 2014-09-29 2019-02-26 Sony Interactive Entertainment Inc. Schemes for retrieving and associating content items with real-world objects using augmented reality and object recognition
CN105825568A (en) * 2016-03-16 2016-08-03 广东威创视讯科技股份有限公司 Portable intelligent interactive equipment

Also Published As

Publication number Publication date
TW201915721A (en) 2019-04-16
WO2019052374A1 (en) 2019-03-21
CN109510752A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
KR102355267B1 (en) Content collection navigation and autoforwarding
EP3713159B1 (en) Gallery of messages with a shared interest
US9619713B2 (en) Techniques for grouping images
JP6349031B2 (en) Method and apparatus for recognition and verification of objects represented in images
US10417799B2 (en) Systems and methods for generating and presenting publishable collections of related media content items
US10325372B2 (en) Intelligent auto-cropping of images
US10430456B2 (en) Automatic grouping based handling of similar photos
US20160306505A1 (en) Computer-implemented methods and systems for automatically creating and displaying instant presentations from selected visual content items
US11430211B1 (en) Method for creating and displaying social media content associated with real-world objects or phenomena using augmented reality
WO2019242542A1 (en) Screenshot processing method and device
WO2016154814A1 (en) Method and apparatus for displaying electronic picture, and mobile device
KR20220154261A (en) Media collection navigation with opt-out interstitial
US20160328868A1 (en) Systems and methods for generating and presenting publishable collections of related media content items
US11856255B2 (en) Selecting ads for a video within a messaging system
CN116324990A (en) Advertisement break in video within a messaging system
CN114365198A (en) Occlusion detection system
CN109510752B (en) Information display method and device
US10248306B1 (en) Systems and methods for end-users to link objects from images with digital content
CN115989523A (en) Vehicle identification system
CN116457814A (en) Context surfacing of collections
TW201717055A (en) Photo and video sharing
CN105488168B (en) Information processing method and electronic equipment
CN110036356A (en) Image procossing in VR system
US11645324B2 (en) Location-based timeline media content system
CN116349220A (en) Real-time video editing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant