WO2017167060A1 - Information display method, apparatus, and system

Information display method, apparatus, and system

Info

Publication number
WO2017167060A1
Authority: WO (WIPO, PCT)
Prior art keywords: user, image, information, user information, identifier
Application number: PCT/CN2017/077400
Other languages: English (en), French (fr)
Inventor
詹永胜
林锋
曹雷
晁笑
阮萍
Original Assignee
阿里巴巴集团控股有限公司
Priority to MYPI2018703488A (MY189680A)
Priority to EP17773090.0A (EP3438849A4)
Priority to CA3019224A (CA3019224C)
Priority to MX2018011850A (MX2018011850A)
Priority to KR1020187031087A (KR102293008B1)
Priority to SG11201808351QA (SG11201808351QA)
Priority to AU2017243515A (AU2017243515C1)
Priority to RU2018137829A (RU2735617C2)
Application filed by 阿里巴巴集团控股有限公司, 詹永胜, 林锋, 曹雷, 晁笑, 阮萍
Priority to JP2018551862A (JP6935421B2)
Priority to BR112018069970A (BR112018069970A2)
Publication of WO2017167060A1
Priority to US16/142,851 (US10691946B2)
Priority to PH12018502093A (PH12018502093A1)
Priority to US16/882,847 (US11036991B2)

Classifications

    • G06F16/532: Information retrieval of still image data; query formulation, e.g. graphical querying
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
    • G06F16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06F16/434: Retrieval of multimedia data; query formulation using image data, e.g. images, photos, pictures taken by a user
    • G06F16/435: Retrieval of multimedia data; filtering based on additional data, e.g. user or group profiles
    • G06F16/436: Retrieval of multimedia data; filtering using biological or physiological data of a human being, e.g. blood pressure, facial expression, gestures
    • G06F16/438: Retrieval of multimedia data; presentation of query results
    • G06F16/535: Retrieval of still image data; filtering based on additional data, e.g. user or group profiles
    • G06F16/538: Retrieval of still image data; presentation of query results
    • G06F16/7335: Retrieval of video data; graphical querying, e.g. query-by-region, query-by-sketch, query-by-trajectory, GUIs for designating a person/face/object as a query predicate
    • G06F16/735: Retrieval of video data; filtering based on additional data, e.g. user or group profiles
    • G06F16/738: Retrieval of video data; presentation of query results
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/147: Digital output to display device using display panels
    • G06T11/60: 2D image generation; editing figures and text; combining figures or text
    • G06T19/006: Manipulating 3D models or images for computer graphics; mixed reality
    • G06V20/20: Scenes; scene-specific elements in augmented reality scenes
    • G06V40/10: Recognition of human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V40/70: Multimodal biometrics, e.g. combining information from different biometric modalities
    • G09G2370/022: Networking aspects; centralised management of display operation, e.g. in a server instead of locally
    • G09G5/377: Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns

Definitions

  • The present application relates to the field of computer technologies, and in particular, to an information display method, apparatus, and system.
  • With online systems such as websites, users can not only interact with the servers of the online system, but can also carry out business interaction, information sharing, and other operations with other users of the same online system.
  • Any user of the online system (hereinafter referred to as: the first user) can, in scenarios of interaction between users based on the online system, view the user information of other users (hereinafter referred to as: the second user), such as the account name, evaluations of the second user by other users, self-introduction, user tags, and the like, thereby enabling the first user to find a desired second user for information sharing, business transactions, following, and other interactions.
  • However, if the first user wants to know the user information of a second user, he or she can only use a terminal to access the corresponding page (for example, the second user's personal homepage) and view it there. Obviously, such a method is cumbersome.
  • Moreover, the user information of the second user viewed in this manner is only the information the second user registered in the online system; the user information viewed by the first user is virtual network information, so the first user cannot identify the actual second user in the actual environment, and the interactivity between users cannot be increased.
  • In view of this, the embodiments of the present application provide an information display method, apparatus, and system to solve the problems existing in scenarios of interaction between users.
  • In the information display method provided in the embodiment of the present application, a corresponding AR graphic is displayed in the image collected in real time, where the AR graphic follows the user image in real time.
  • a receiving module configured to receive an image that is collected and sent by the terminal in real time
  • a user image module configured to determine a user image included in the image
  • a user identification module configured to determine a user identifier corresponding to the user image
  • a user information module configured to acquire user information corresponding to the determined user identifier according to the pre-stored correspondence between each user identifier and the user information
  • an AR module configured to generate, according to the obtained user information, augmented reality (AR) graphics data corresponding to the user information, and feed it back to the terminal, so that the terminal, according to the received AR graphics data, displays a corresponding AR graphic in the image collected in real time, where the AR graphic follows the user image in real time.
  • An acquisition module configured to collect an image in real time and send it to a server, where the image includes a user image, so that the server determines the user image included in the image, determines the user identifier corresponding to the user image, obtains, according to the pre-stored correspondence between each user identifier and the user information, the user information corresponding to the determined user identifier, generates AR graphics data corresponding to the user information according to the obtained user information, and feeds it back to the terminal;
  • a receiving module configured to receive AR graphic data corresponding to the user information fed back by the server
  • a display module configured to display, according to the AR image data, a corresponding AR graphic in an image acquired in real time, wherein the AR graphic follows the user image in real time.
  • An information display system is also provided in the embodiment of the present application, including:
  • a terminal configured to collect images in real time and send them to the information display device, and to display, according to the AR graphics data corresponding to the user image included in the image fed back by the information display device, a corresponding AR graphic in the real-time collected image, where the AR graphic follows the user image in real time;
  • an information display device, including:
  • an AR intelligence module configured to receive the image collected by the terminal in real time, and generate AR graphics data corresponding to the user information contained in the image;
  • a verification module configured to determine, according to the received image, a user image included in the image, and determine a user identifier corresponding to the user image
  • the label management module is configured to obtain the user information corresponding to the determined user identifier according to the pre-stored correspondence between each user identifier and the user information.
  • a big data risk control module configured to obtain the historical data corresponding to the user identifier, determine, according to the historical data, the matching degree between the user information for which a correspondence is to be established and the historical data, take the matching degree as the first credibility of that user information, save the first credibility, and establish the correspondence between the user information and the user identifier;
  • a mutual authentication module configured to determine, for each piece of saved user information, the other users who have performed a specified operation on the user information; determine, for each such other user, according to the level of that user, the score generated by the specified operation the user performed on the user information; and determine and save the second credibility of the user information according to the scores determined for the other users.
  • An embodiment of the present application provides an information display method, apparatus, and system, in which a terminal sends images collected in real time to a server; the server recognizes the person image (i.e., the user image) contained in the image, determines the identity of the user and the user information of the user, generates AR graphics data corresponding to the user information, and returns it to the terminal, so that the terminal can display the AR graphic corresponding to the user image.
  • The AR graphic reflects the user information of the user.
  • Compared with the prior art, the method in the embodiment of the present application does not require a user to access a corresponding page to view the user information of other users; at the same time, the AR graphic associates the virtual user information with the actual user. Throughout the process, a user can view the user information of other users without any operation. Obviously, this effectively improves the convenience of viewing user information in scenarios of interaction between users, and substantially associates the virtual user information with the actual user.
  • FIG. 2a is a schematic diagram of an AR helmet screen interface not displaying an AR graphic according to an embodiment of the present application
  • FIGS. 2b-2d are schematic diagrams of an AR helmet screen interface displaying an AR graphic according to an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a cloud user information system and a terminal according to an embodiment of the present disclosure
  • FIGS. 4a and 4b are schematic diagrams of an AR glasses interface in different interaction scenarios according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an AR glasses interface in another scenario according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a server-side information display apparatus according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an information display apparatus based on a terminal side according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an information display system according to an embodiment of the present application.
  • In scenarios of interaction between users, the first user typically views the user information of the second user (e.g., the second user's user tags, credit rating, evaluations, etc.) in order to learn about the second user and facilitate subsequent interaction between the users.
  • However, the first user can only access the corresponding page through a terminal to view the user information of the second user; moreover, if the first user wants to interact with the second user offline, the user information is only virtual information, so the first user cannot identify the second user in person through the user information of the second user.
  • To this end, the embodiment of the present application provides an information display method, as shown in FIG. 1.
  • FIG. 1 shows an information display process provided by an embodiment of the present application; the process specifically includes the following steps:
  • S101: Receive an image collected and sent by a terminal in real time.
  • In the embodiment of the present application, the terminal includes an AR device (an AR device in the present application has an image collection function), where the AR device includes, but is not limited to, AR glasses, an AR helmet, a mobile phone with an AR function, a computer with an AR function, and a robot with an AR function (in this application, a physical robot with collection and display functions).
  • In other words, the terminal may be a device with an image collection function for collecting images in real time, such as a mobile phone, a camera, a video camera, a camera-equipped computer, a robot, and the like.
  • the collection method may specifically be shooting.
  • the image acquired by the terminal in real time may specifically be a real scene image (the real scene image refers to an image collected from the natural world).
  • S102: Determine a user image included in the image.
  • The user image may be the person image of a user; that is, when the terminal collects the real-scene image in real time, if a user is present in the real scene, the image collected by the terminal includes that user's image.
  • For example, if an employee is present in the collected scene, the employee's image contained in the image is a user image.
  • Determining the user image in the image means performing person recognition processing on the image.
  • In practice, different person recognition methods may be used to determine the user image contained in the image; this does not constitute a limitation on the present application.
  • S103: Determine a user identifier corresponding to the user image. After the user image is determined, the user corresponding to the user image can be further determined.
  • Specifically, feature recognition, such as facial recognition or gait recognition, is performed on the user image to determine the identity of the user corresponding to the user image.
  • The user identifier may be an identifier indicating the identity of the user (e.g., the user's ID).
  • In practice, user identifiers may be stored in the online system, and users may enter their biometric features into the online system in advance, so that the online system can establish, for each user, a correspondence between the user's biometric features and the user identifier. Then, after a user's biometric features are recognized, the user identifier of that user can be determined according to the correspondence (i.e., the identity of the user is determined).
  • S104: Obtain the user information corresponding to the determined user identifier according to the pre-stored correspondence between each user identifier and the user information.
  • In the embodiment of the present application, the user information includes, but is not limited to, a user's real name, account name, self-introduction, evaluations of the user by other users, and user tags (a user tag may be a tag reflecting certain attributes of the user, such as "Film Talent" or "Gold Member").
  • For each user, a correspondence between the user identifier and the user information of the user is established in advance, so once the user identifier corresponding to the user image is determined, the corresponding user information can be further determined.
  • S105: Generate, according to the obtained user information, augmented reality (AR) graphics data corresponding to the user information, and feed it back to the terminal, so that the terminal displays a corresponding AR graphic in the image collected in real time according to the received AR graphics data.
  • the AR graphic follows the user image in real time.
  • the AR graphic corresponding to the user information may be a graphic such as a character or an icon displayed in a two-dimensional or three-dimensional manner, and is not specifically limited herein.
  • In other words, the user information is processed through corresponding graphics rendering to form the AR graphic of the user information.
  • In this way, the AR graphic of the corresponding user information is displayed on the user image in the real-scene image.
  • For example, assume the terminal is an AR helmet. FIG. 2a shows a real-scene image captured by the AR helmet; assume the image contains the person graphic of user Y (i.e., a user image).
  • After the AR helmet sends the collected image to the corresponding online system, the online system can determine that the user corresponding to the user image is user Y (whose user tag is "Movie Master"); the online system then generates AR graphics data for the user tag "Movie Master" and sends it to the AR helmet.
  • After receiving the data, the AR helmet displays the AR graphic of the user tag "Movie Master" on the person image of user Y, as shown in FIG. 2b.
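  • To make the flow of steps S101-S105 concrete, the sketch below shows one possible shape of the server-side pipeline. It is an illustrative assumption only: the helper functions, the dictionary-based stores, and the data layout are hypothetical stand-ins, not details specified in this application.

```python
# Hypothetical server-side pipeline for steps S101-S105. Every helper is a
# stub standing in for whichever detection/matching/storage technique a
# deployment actually uses.

def detect_user_images(frame):
    """S102: person recognition; returns the user images found in the frame."""
    return frame.get("user_images", [])

def match_user_id(user_image, feature_db):
    """S103: match features extracted from the user image against the
    pre-stored user features; returns a user identifier or None."""
    return feature_db.get(user_image.get("feature_key"))

def handle_frame(frame, feature_db, user_info_db):
    """S101 receives `frame`; S104 looks up user information; S105 packages
    the AR graphics data that is fed back to the terminal."""
    ar_graphics = []
    for user_image in detect_user_images(frame):
        user_id = match_user_id(user_image, feature_db)
        if user_id is None:
            continue  # unrecognized person: nothing to display
        info = user_info_db.get(user_id, [])
        ar_graphics.append({"user_id": user_id, "labels": info})
    return ar_graphics

frame = {"user_images": [{"feature_key": "faceY"}]}
print(handle_frame(frame, {"faceY": "userY"}, {"userY": ["Movie Master"]}))
# [{'user_id': 'userY', 'labels': ['Movie Master']}]
```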
  • It should be noted that people in the actual environment move at any time, and the AR device itself moves to a certain extent with the user who uses it (for example, if the user wearing the AR device turns or walks, the shooting angle or position of the AR device changes accordingly). These conditions may cause the position of the captured user image to change as well. For this reason, the AR graphic in the embodiment of the present application follows the user image in real time, so as to clearly indicate its association with the user image.
  • In addition, the real-scene images observed by a user through the AR device are three-dimensional, and the relative movement between users can be complicated, such as a user holding an AR device moving around the photographed user.
  • In such cases, the AR graphic of the user tag "Movie Master" is displayed from the perspective of the observer (i.e., the AR graphic of the user tag always faces the user using the AR helmet).
  • In practice, the position, orientation, and the like at which the AR graphic is displayed may be determined by the device parameters of the AR device (such as translation parameters, deflection parameters, focal length, etc.); this does not constitute a limitation on the present application.
  • In the embodiment of the present application, the AR graphics data may be data obtained after the online system constructs the AR graphic and converts it into an easily transferable form; in that case, after the terminal receives the AR graphics data, it can directly display the corresponding AR graphic.
  • Alternatively, the AR graphics data may be the data the terminal requires to construct an AR graphic; in this manner, after receiving the AR graphics data, the terminal builds the AR graphic locally based on the AR graphics data and then displays it.
  • The terminal can display the AR graphic in different manners depending on the device. For example, when the AR device is a wearable device such as AR glasses or an AR helmet, the AR graphic is directly displayed on the lens or the helmet's inner screen; when the AR device is a computer or robot with an AR function, the AR graphic can be displayed on the corresponding screen or projected (including planar projection and holographic projection). This is not specifically limited.
  • In the embodiment of the present application, the image receiving, recognition, and AR graphics data generation may be implemented by a server (or a server cluster) in the background of the online system.
  • the server and the terminal maintain a network connection.
  • In the above method, the terminal sends the images collected in real time to the server, so that the server recognizes the person image (i.e., the user image) contained in the image, and further determines the identity of the user and the user information of the user.
  • The user information is then used to generate AR graphics data corresponding to it, which is returned to the terminal, so that the terminal can display the AR graphic corresponding to the user image in the image; the AR graphic reflects the user information of the user.
  • Compared with the prior art, the method in the embodiment of the present application does not require a user to access a corresponding page to view the user information of other users; at the same time, the AR graphic associates the virtual user information with the actual user. Throughout the process, a user can view the user information of other users without any operation. Obviously, this effectively improves the convenience of viewing user information in scenarios of interaction between users, and substantially associates the virtual user information with the actual user.
  • It should be noted that the content described in the embodiment of the present application may be based on the architecture shown in FIG. 3. Specifically, the architecture in FIG. 3 includes a cloud user information system and an AR device.
  • The cloud user information system is configured to recognize the user image and the user identity in the image transmitted by the AR device, and to generate the AR graphics data; the AR device is used to collect images and to display the AR graphics corresponding to the images and the AR graphics data.
  • In the above process, the determination of the user identity affects the accuracy of the subsequently determined user information: if the identity of the user cannot be accurately determined, the subsequently generated AR graphic cannot reflect the real user information of the user. Therefore, the embodiment of the present application provides different ways of identifying the user identity.
  • Specifically, the biometric features of different users are stored in the server; based on this, the user identifier corresponding to the user image can be determined.
  • Determining the user identifier corresponding to the user image specifically includes: extracting image features from the determined user image, determining, among the user features pre-stored for each user identifier, the user feature that matches the extracted image features, and using the user identifier corresponding to the matched user feature as the user identifier corresponding to the user image.
  • That is, image features are extracted for the user image in the image.
  • The image features may be facial features, iris features, body features, and the like extracted from the user image.
  • Each pre-stored user feature includes biometric features.
  • The biometric features include, but are not limited to, facial features, fingerprint features, palm print features, retina features, human body contour features, gait features, voiceprint features, and the like.
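  • As an illustration of the matching step just described, the following sketch compares an extracted image feature against pre-stored user features using cosine similarity over feature vectors. The 128-dimensional embedding, the 0.8 threshold, and the in-memory store are assumptions for this example; the application does not prescribe a particular matching algorithm.

```python
# Sketch of matching an extracted image feature against enrolled biometric
# features. The 128-d vectors, cosine similarity, and 0.8 threshold are
# illustrative assumptions.
import numpy as np

# user identifier -> pre-stored feature vector (e.g., a facial embedding
# produced when the user entered biometrics into the online system)
enrolled_features: dict[str, np.ndarray] = {
    "userX": np.random.rand(128),
    "userY": np.random.rand(128),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_user_id(image_feature: np.ndarray, threshold: float = 0.8) -> str | None:
    """Return the user identifier whose stored feature best matches the
    feature extracted from the user image, or None if nothing clears the
    threshold."""
    best_id, best_score = None, threshold
    for user_id, feature in enrolled_features.items():
        score = cosine_similarity(image_feature, feature)
        if score >= best_score:
            best_id, best_score = user_id, score
    return best_id
```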
  • In practice, the image captured by the terminal may be a single-frame image or continuous multi-frame images (forming a video). For the two cases, the extracted image features differ, and correspondingly, different recognition methods are required.
  • A single-frame image can be considered a static picture.
  • In that case, the image features extracted for the persons in the picture may be static features such as facial features, human body contour features, fingerprint features, palm print features, iris features, and so on.
  • In practice, different image features can be extracted from the user image. For example, if the face in the user image collected by the terminal is relatively clear, facial features can be extracted; if the terminal's camera is associated with a fingerprint collector or palm print collector, the user's fingerprint or palm print features can be extracted. As one mode of the embodiment of the present application, multiple image features can be extracted in practical applications, thereby increasing the accuracy of the subsequent recognition process. Of course, this does not constitute a limitation on the present application.
  • The process of extracting the image features may adopt a corresponding feature extraction algorithm, such as a feature-based recognition algorithm or a template-based recognition algorithm, which is not specifically limited herein.
  • Continuous multi-frame images can be considered a coherent, dynamic video. In this case, the image features extracted for the persons in the picture can be external features of the person (such as human body contour features and facial features); in addition, some dynamic features, such as gait features, can be extracted.
  • Furthermore, facial features of the person at different angles can be determined from the multi-frame images, thereby constructing a facial model of the person.
  • Specifically, receiving the image collected and sent by the terminal in real time includes: receiving multi-frame images collected and sent by the terminal in real time.
  • Determining the user feature that matches the extracted image features specifically includes: extracting facial features from the user image of each frame, constructing a three-dimensional facial model according to the extracted facial features, and determining, among the three-dimensional facial models pre-stored for each user identifier, the three-dimensional facial model that matches the constructed three-dimensional facial model.
  • This requires the server to construct a three-dimensional facial model for each user in advance; that is, the server can perform multi-angle scanning of the user's face (front face, side face, and so on) and then create the three-dimensional facial model from the multi-angle scans.
  • Of course, this does not constitute a limitation on the present application.
  • Such an approach can determine the identity of the user more accurately, that is, accurately determine the user identifier of the user.
  • As another mode of the embodiment of the present application, the user's gait features can be used to identify the user's identity; that is, the gait features of the user are determined from continuous multi-frame images. In particular:
  • Specifically, receiving the image collected and sent by the terminal in real time includes: receiving multi-frame images collected and sent by the terminal in real time.
  • Determining the user feature that matches the extracted image features includes: extracting gait features from the user images in the frames, and determining, among the gait features pre-stored for each user identifier, the gait features that match the extracted gait features.
  • In addition, some terminals have an audio collection function. If a user speaks or talks, the user's voice is collected by the terminal; on this basis, the terminal also transmits the collected voice data of the user to the corresponding server.
  • After receiving the voice data sent by the terminal, the server can perform acoustic processing on it, such as filtering and noise reduction, thereby extracting the voiceprint features of the user, and match them against the pre-stored voiceprint features.
  • If a match is found, the user identifier corresponding to the voiceprint is determined, that is, the identity of the user is identified.
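  • The following is a minimal sketch of this voiceprint branch, assuming scipy is available: a band-pass filter stands in for the filtering and noise reduction, and a toy spectral embedding stands in for a real voiceprint extractor. The band edges, sample rate, embedding, and threshold are all assumptions; a production system would use a proper speaker-embedding model.

```python
# Illustrative voiceprint branch: band-pass "noise reduction" followed by a
# toy spectral embedding and nearest-match lookup.
import numpy as np
from scipy.signal import butter, filtfilt

def denoise(audio: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    nyq = sample_rate / 2
    b, a = butter(4, [300 / nyq, 3400 / nyq], btype="band")  # speech band
    return filtfilt(b, a, audio)

def embed_voice(audio: np.ndarray, bands: int = 32) -> np.ndarray:
    """Toy voiceprint: average spectral energy in `bands` frequency bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    return np.array([chunk.mean() for chunk in np.array_split(spectrum, bands)])

def identify_speaker(audio: np.ndarray, enrolled: dict[str, np.ndarray],
                     threshold: float = 0.75) -> str | None:
    """Match the extracted voiceprint against pre-stored voiceprints."""
    v = embed_voice(denoise(audio))
    best_id, best = None, threshold
    for user_id, ref in enrolled.items():
        score = float(np.dot(v, ref) / (np.linalg.norm(v) * np.linalg.norm(ref)))
        if score >= best:
            best_id, best = user_id, score
    return best_id
```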
  • The above describes the process of identifying the identity of the user, that is, determining the user identifier corresponding to the user image. After the user identifier is determined, the corresponding user information can be obtained so that the AR graphics data can subsequently be generated.
  • In practice, the user information in the embodiment of the present application may be edited by the user himself or herself, or edited by other users for that user.
  • Specifically, pre-storing the correspondence between each user identifier and the user information includes: receiving the user information edited by a user for the user's own identifier, and establishing and saving the correspondence between the user identifier and the user information according to the user information edited by the user for his or her own identifier;
  • or receiving the user information edited by a first user for a second user, and establishing and saving the correspondence between the user identifier of the second user and the user information according to the user information edited by the first user for the second user.
  • For example, the user information may be a user tag: the user tag edited by user X for his own account userX is "food gourmet", and the user tag edited by user X for the user identifier of user Y is "movie master".
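  • A minimal sketch of how such a correspondence might be stored follows; the structure and field names are assumptions for illustration, including the editor field used later for edit relationships.

```python
# Assumed in-memory layout for the correspondence between user identifiers
# and edited user information; the editor_id field records who edited it.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class UserInfo:
    text: str       # e.g., a user tag such as "movie master"
    editor_id: str  # user identifier of the user who edited this information

# user identifier -> the user information saved for that user
user_info_store: dict[str, list[UserInfo]] = defaultdict(list)

# User X tags his own account, then tags user Y:
user_info_store["userX"].append(UserInfo("food gourmet", editor_id="userX"))
user_info_store["userY"].append(UserInfo("movie master", editor_id="userX"))
```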
  • However, the user information edited by the user himself, or edited by other users for the user, may have a certain subjectivity (it may be inconsistent with the actual situation); in scenarios of interaction between users, this can mislead other users.
  • For this reason, in the embodiment of the present application, a credibility is determined for the user information. Specifically, establishing and saving the correspondence between the user identifier and the user information includes:
  • obtaining the historical data corresponding to the user identifier, and determining, according to the historical data, the matching degree between the user information edited by the user and the historical data (i.e., the first credibility).
  • The historical data may be network sharing data and browsing data as described above, and may also be historical consumption data, chat data, and the like, which are not specifically limited herein.
  • The matching degree between the user information and the historical data may be computed by a number of different methods; only a relatively simple method is given here, which does not constitute a limitation on the present application: count the proportion of the historical data that matches the user information, and determine the matching degree with the historical data from this frequency.
  • For example, the user tag (a kind of user information) edited by other users for the account userY of user Y is "movie master". Assume that the account userY is a microblog account; the historical data of the account userY may therefore be the microblogs it shared in its history.
  • Assume that the proportion of the historical microblogs of the account userY involving movie topics is 95%; then the matching degree between the user tag "movie master" of the account userY and the historical data is determined to be 0.95.
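  • The simple proportion-based matching degree from this example can be sketched as follows; reducing topic detection to a keyword check is an assumption made purely for illustration.

```python
# Proportion-based first credibility from the example: the share of
# historical entries matching the edited tag. Keyword matching is a stand-in
# for real topic classification.
def first_credibility(tag_keywords: set[str], history: list[str]) -> float:
    """Fraction of historical posts that mention any tag keyword."""
    if not history:
        return 0.0
    hits = sum(1 for post in history if any(k in post for k in tag_keywords))
    return hits / len(history)

# 19 of 20 historical microblogs involve movies -> matching degree 0.95
posts = ["watched a great movie tonight"] * 19 + ["coffee break"]
print(first_credibility({"movie"}, posts))  # 0.95
```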
  • In this way, the matching degrees between the different pieces of user information and the historical data (i.e., the first credibilities) can be determined.
  • During the process of establishing the correspondence between the user identifier and the user information, the server also stores the first credibility of each piece of user information.
  • For example, the server saves each user tag corresponding to the account userY together with the first credibility of each user tag.
  • In the subsequent process, the display mode and state of the AR graphic can be determined according to the credibility.
  • The foregoing describes how the server determines the first credibility of user information according to the historical data stored for each user identifier; this process can be regarded as a big-data verification process.
  • However, the historical data is only the data generated by the user performing network operations in the network environment, and does not absolutely represent the characteristics or attributes of the user.
  • Therefore, certain specified operations between users (e.g., likes, evaluations) may also be used to determine another credibility of the user information.
  • Specifically, the method further includes: determining, for each piece of saved user information, the other users who have performed a specified operation on the user information; determining, for each such other user, according to the level of that user, the score generated by the specified operation the user performed on the user information; and determining and saving the second credibility of the user information according to the scores determined for the other users.
  • The specified operation includes, but is not limited to, a follow operation, a comment operation, and a like operation.
  • The level includes, but is not limited to, an account level, a credit level, and an attention level (the attention level may be determined according to the number of other users following the user; in other words, the more followers, the higher the attention level).
  • In practice, the specific value of the second credibility may be determined according to the product of the level of the user who performed the specified operation and a corresponding credibility coefficient (in one mode, the credibility coefficient is positively correlated with the level of the user). This does not constitute a limitation on the present application.
  • For example, the user tag "movie master" of the account userY receives like operations from 1000 other users; those 1000 other users can then be considered to endorse the user tag. Obviously, such endorsement increases the credibility of the user tag.
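  • A sketch of this level-weighted scoring follows. The per-operation coefficient, the 0-10 scale, and the cap are assumptions; the application only requires that the score grow with the level of the users who performed the specified operation.

```python
# Level-weighted second credibility: each specified operation (like, comment,
# follow) contributes the acting user's level times a coefficient. The
# coefficient, cap, and 0-10 scale are assumptions.
def second_credibility(op_user_levels: list[int], coeff: float = 0.002,
                       cap: float = 10.0) -> float:
    """Sum level-weighted scores over all users who performed a specified
    operation on the user information, capped at `cap`."""
    score = sum(level * coeff for level in op_user_levels)
    return min(score, cap)

# 1000 like operations from level-5 users:
print(second_credibility([5] * 1000))  # 10.0 (capped)
```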
  • Based on the foregoing, both the first credibility and the second credibility of the user information can be determined.
  • Accordingly, the user information corresponding to the user identifier can be obtained according to the two determined credibilities.
  • Specifically, obtaining the user information corresponding to the determined user identifier includes: determining the total credibility of the user information according to the saved first credibility and second credibility of the user information, and obtaining, according to the total credibility, the user information whose total credibility is not lower than a preset credibility threshold.
  • In the embodiment of the present application, the total credibility of the user information may be determined using weights, that is:
  • R = r1 × w1 + r2 × w2
  • where R is the total credibility of the user information, r1 is the first credibility of the user information, w1 is the weight of the first credibility, r2 is the second credibility of the user information, and w2 is the weight of the second credibility. In practice, w1 and w2 can be adjusted according to the needs of the actual application, and the present application is not limited thereto.
  • In practice, a user identifier may correspond to a large amount of user information (for example, a user account may correspond to dozens of user tags). If all of this user information were obtained and, after the AR graphics data is subsequently generated, displayed in the corresponding AR device, the display would inevitably be dense and cluttered, affecting the user's ability to view the corresponding AR graphic; that is, it is not appropriate to obtain all the user information corresponding to the user identifier.
  • Therefore, on the basis of the total credibility determined above, the pieces of user information can be filtered according to their total credibility. For example, if the preset credibility threshold is 8, then, among all the user tags of user Y's account userY, only the user tags whose total credibility is not lower than 8 are obtained.
  • In this way, the user tags subsequently displayed on user Y in the AR device are the user tags whose total credibility is not lower than 8.
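  • Combining the weighted formula with the threshold filtering in the example above might look like the following sketch; the weights w1 = 0.6 and w2 = 0.4 and the 0-10 credibility scale are assumptions, with the threshold of 8 taken from the example.

```python
# Total credibility R = r1*w1 + r2*w2 plus threshold filtering. Weights and
# the 0-10 scale are assumptions; the threshold of 8 follows the example.
def total_credibility(r1: float, r2: float,
                      w1: float = 0.6, w2: float = 0.4) -> float:
    return r1 * w1 + r2 * w2

def filter_tags(tags: dict[str, tuple[float, float]],
                threshold: float = 8.0) -> list[str]:
    """Keep only tags whose total credibility reaches the threshold;
    `tags` maps tag text -> (first credibility, second credibility)."""
    return [t for t, (r1, r2) in tags.items()
            if total_credibility(r1, r2) >= threshold]

user_y_tags = {"movie master": (9.5, 8.0), "food gourmet": (4.0, 3.0)}
print(filter_tags(user_y_tags))  # ['movie master']
```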
  • In addition, the pieces of user information corresponding to a user identifier may be of different types. Taking user tags as an example, the two user tags "love to drink latte" and "finance talent" reflect two characteristics of the user in two different scenarios.
  • In practical applications, the user information corresponding to a user identifier is usually associated with a corresponding business scenario. As in the above example, the user tag "love to drink latte" is more suitable for display in a beverage-related scenario (e.g., when the user is in a coffee shop), while the user tag "finance talent" is more suitable for display in a finance-related scenario (e.g., when the user is in a bank). That is to say, in the process of obtaining the user information, the pieces of user information corresponding to the user identifier may also be filtered according to the corresponding business scenario.
  • Specifically, obtaining the user information corresponding to the determined user identifier further includes: acquiring environment information, determining the business scenario according to the environment information, and obtaining, according to the business scenario, the user information that matches the business scenario.
  • The foregoing environment information includes, but is not limited to: network information, location information, geographical identification information, and the like.
  • The network information here may be information such as the IP address and network name of the network accessed by the AR device; the location information may be information about the location where the AR device is situated; the geographical identification information may be, for example, a signboard or an RFID (radio frequency identification) geographical identification chip placed at an actual place.
  • The actual places here can include: hotels, businesses, shopping malls, airports, and so on.
  • Through such environment information, the environment in which the user using the AR device is currently located can be determined, and further the environment of the users captured by the AR device, so that the user information matching the corresponding business scenario can be obtained accordingly.
  • For example, employee M uses AR glasses, and the AR glasses collect real-scene images in a coffee shop in real time. Assume that the AR glasses are connected to the corresponding server through a wireless network, and that the server determines, according to an RFID-enabled geographical identification chip in the coffee shop, the actual place where the AR glasses are located: a coffee shop.
  • When user X enters the coffee shop, the AR glasses collect the user image of user X and send it to the server, so that the server determines all the user tags of user X. Because the server knows that the actual place where the AR device is located is a coffee shop, the server filters out, from all the user tags of user X, the user tags of the business scenario related to the coffee shop. Assuming that one of the user tags of user X is "love to drink latte", the server obtains the user tag "love to drink latte", generates the corresponding AR graphics data based on the tag, and sends it to the AR glasses.
  • As shown in FIG. 4a, the user image of user X and the AR graphic of the user tag "love to drink latte" are displayed.
  • In this way, employee M can intuitively know user X's preference regarding coffee.
  • Similarly, in a bank scenario, the server can determine, through the RFID geographical identification, the actual place where the AR glasses are located: a bank. Then, assuming that user X enters the bank, the AR glasses used by employee N collect the user image of user X, and the server determines, from all the user tags corresponding to user X, the user tags of the finance-related business scenario associated with the bank. Assuming that one of the user tags of user X is "finance talent", the server obtains the user tag and generates the corresponding AR graphics data based on it; the display at this time is as shown in FIG. 4b.
  • In this way, employee N can intuitively know user X's affinity for financial services, and can subsequently recommend corresponding wealth management products to user X.
  • It can be seen from the above two examples that the server can determine different business scenarios according to the location or actual place of the AR device (i.e., according to different environment information), so that, among the user information corresponding to the user image collected by the AR device, the user information that matches the business scenario is obtained. Obviously, this effectively improves the intelligence of the AR device when displaying user-information AR graphics.
  • It should be noted that the foregoing two ways of obtaining the user information corresponding to the user identifier may be used in combination or separately.
  • That is, in practical applications, the user information that matches the environment information and whose total credibility is not lower than the preset credibility threshold may be obtained. Obviously, this yields more accurate user information. Of course, this does not constitute a limitation on the present application.
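  • A sketch of applying both filters together follows; attaching a scenario field to each piece of user information is an assumed representation, not one specified in this application.

```python
# Applying the scenario filter and the credibility threshold together.
# The scenario field on each entry is an assumed representation.
from dataclasses import dataclass

@dataclass
class TagEntry:
    text: str
    scenario: str            # e.g., "coffee_shop", "bank"
    total_credibility: float

def select_tags(entries: list[TagEntry], scenario: str,
                threshold: float = 8.0) -> list[str]:
    return [e.text for e in entries
            if e.scenario == scenario and e.total_credibility >= threshold]

entries = [TagEntry("love to drink latte", "coffee_shop", 9.1),
           TagEntry("finance talent", "bank", 8.7)]
print(select_tags(entries, "coffee_shop"))  # ['love to drink latte']
```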
  • It is also worth mentioning that, just as a user can edit corresponding user information for other users, the user can also accept the user information edited by other users for him or her. That is, a first user edits corresponding user information for a second user; in this scenario, the method further includes: determining the user identifier corresponding to the first user, recording the edit relationship between the user information edited by the first user and the user identifier of the first user, and saving it.
  • In other words, the server records which user information was edited by "whom". Based on this, if the image is collected and sent in real time by the terminal of the first user, obtaining the user information corresponding to the determined user identifier specifically includes: determining each piece of user information corresponding to the user identifier of the second user, and obtaining, among the user information corresponding to the user identifier of the second user, the user information that has an edit relationship with the user identifier of the first user.
  • For example, corresponding user tags are edited for different customers. Assume that the user tags edited for user Y are "love latte" and "coffee not heated", and that the edit relationship is saved in the server.
  • Subsequently, the AR glasses collect the user image of user Y and send the collected image to the server through the network connection with the server; the server determines that the user image corresponds to user Y and, according to the saved edit relationship, obtains the user tags "love latte" and "coffee not heated".
  • Further, the server may adjust the display effect of the AR graphic according to the total credibility of the user information. For example, if the credibility of a piece of user information is high, the text size of the user information may be set larger; similarly, the brightness, color, and the like of the user information can also be set.
  • Specifically, generating the AR graphics data corresponding to the user information includes: determining the display state parameters of the user information according to the total credibility of the user information corresponding to the user identifier, and generating, according to the display state parameters, the AR graphics data containing the user information and its display state parameters.
  • the display state parameter includes at least one of a color parameter, a brightness parameter, and a size parameter.
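  • The mapping from total credibility to display state parameters, and the packaging into AR graphics data, might be sketched as follows; the JSON layout, size formula, and color choices are assumptions for illustration.

```python
# Mapping total credibility to display state parameters (size, brightness,
# color) and packaging them into AR graphics data. The JSON layout and the
# concrete formulas are assumptions.
import json

def display_state(total_credibility: float) -> dict:
    """Higher credibility -> larger text and brighter rendering."""
    return {
        "size": round(12 + 2 * total_credibility),        # size parameter
        "brightness": min(1.0, 0.5 + total_credibility / 20),
        "color": "#FFD700" if total_credibility >= 8 else "#C0C0C0",
    }

def make_ar_graphics_data(user_id: str, tags: dict[str, float]) -> str:
    payload = {"user_id": user_id,
               "graphics": [{"text": text, **display_state(r)}
                            for text, r in tags.items()]}
    return json.dumps(payload)

print(make_ar_graphics_data("userY", {"movie master": 8.9}))
```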
  • The foregoing is the server-side information display method provided by the embodiment of the present application. An information display method is also provided for the terminal side, and the method specifically includes the following steps:
  • Step 1: Collect images in real time and send them to the server.
  • The image includes a user image, so that the server determines the user image included in the image, determines the user identifier corresponding to the user image, obtains, according to the pre-stored correspondence between each user identifier and the user information, the user information corresponding to the determined user identifier, generates the AR graphics data corresponding to the user information according to the obtained user information, and feeds it back to the terminal.
  • In this embodiment, the terminal may be a device with an image collection function, as described above, and details are not repeated here.
  • Step 2: Receive the AR graphics data corresponding to the user information fed back by the server.
  • Step 3: Display, according to the AR graphics data, a corresponding AR graphic in the image collected in real time, where the AR graphic follows the user image in real time.
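  • A terminal-side sketch of these three steps follows; the camera, server, and renderer objects are hypothetical stand-ins injected by the caller, since the application does not prescribe specific device APIs.

```python
# Terminal-side loop for the three steps above; camera, server, and renderer
# are injected stand-ins rather than real device APIs.
import time

def terminal_loop(camera, server, renderer, fps: float = 15.0) -> None:
    """Step 1: capture and upload each frame; Step 2: receive the AR
    graphics data fed back by the server; Step 3: render the AR graphic so
    that it follows the tracked user image between frames."""
    while True:
        frame = camera.capture()               # Step 1: real-time collection
        server.send_image(frame)
        ar_data = server.receive_ar_data()     # Step 2: server feedback
        if ar_data is not None:
            renderer.draw(frame, ar_data)      # Step 3: follow the user image
        time.sleep(1.0 / fps)
```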
  • Based on the same idea, the embodiment of the present application further provides an information display device, as shown in FIG. 6.
  • The information display device in FIG. 6 includes:
  • the receiving module 601 is configured to receive an image that is collected and sent by the terminal in real time;
  • a user image module 602 configured to determine a user image included in the image
  • a user identifier module 603, configured to determine a user identifier corresponding to the user image
  • the user information module 604 is configured to obtain user information corresponding to the determined user identifier according to the pre-stored correspondence between each user identifier and the user information.
  • the AR module 605 is configured to generate, according to the obtained user information, the augmented reality (AR) graphics data corresponding to the user information, and feed it back to the terminal, so that the terminal, according to the received AR graphics data, displays the corresponding AR graphic in the image collected in real time, where the AR graphic follows the user image in real time.
  • the terminal includes an AR device.
  • the AR device includes at least: AR glasses, an AR helmet, a mobile phone with an AR function, a computer with an AR function, and a robot with an AR function.
  • the user image module 602 is specifically configured to perform person recognition on the image, and extract the user image included in the image.
  • the user identification module 603 is specifically configured to: extract image features according to the determined user image, determine, among the user features pre-stored for each user identifier, the user feature that matches the extracted image features, and use the user identifier corresponding to the determined user feature as the user identifier corresponding to the user image.
  • Each pre-stored user feature includes: biometric features, where the biometric features include at least one of facial features, fingerprint features, palm print features, retina features, human body contour features, gait features, and voiceprint features.
  • the receiving module 601 is specifically configured to receive the multi-frame images collected and sent by the terminal in real time;
  • the user identification module 603 is specifically configured to separately extract facial features from the user image of each frame, construct a three-dimensional facial model according to the extracted facial features, and determine, among the three-dimensional facial models pre-stored for each user identifier, the three-dimensional facial model that matches the constructed three-dimensional facial model.
  • the receiving module 601 is specifically configured to receive the multi-frame images collected and sent by the terminal in real time;
  • the user identification module 603 is specifically configured to extract gait features according to the user images in the frames, and determine, among the gait features pre-stored for each user identifier, the gait features that match the extracted gait features.
  • the user information module 604 is specifically configured to receive the user information edited by the user for the user identifier of the user, and establish and save the correspondence between the user identifier and the user information according to the user information edited by the user for his or her own identifier; or
  • receive the user information edited by the first user for the second user, and establish and save the correspondence between the user identifier of the second user and the user information according to the user information edited by the first user.
  • the user information module 604 is specifically configured to obtain the historical data corresponding to the user identifier, determine, according to the historical data, the matching degree between the user information for which a correspondence is to be established and the historical data, take the matching degree as the first credibility of that user information, save the first credibility, and establish the correspondence between the user information and the user identifier.
  • the device further includes a mutual authentication module 606, configured to determine, for each saved piece of user information, the other users who performed a specified operation on that user information; determine, for each such user and according to that user's level, the score that the specified operation performed by that user contributes to the user information; and determine, according to the scores determined for each such user, the second credibility of the user information and save it;
  • the specified operation includes at least one of a follow operation, a comment operation, and a like operation;
  • the level includes at least one of an account level, a credit level, and an attention level.
  • the user information module 604 is specifically configured to determine the total credibility of each piece of user information according to its saved first credibility and second credibility, and to acquire, according to the total credibility, the user information whose total credibility is not lower than a preset credibility threshold.
  • the user information module 604 is further configured to acquire environment information, determine a service scenario according to the environment information, and acquire, according to the service scenario, user information that matches the service scenario;
  • the environment information includes at least one of network information, location information, and geographic identification information.
  • the device further includes an edit relationship record module 607, configured to determine the user identifier corresponding to the first user, and to record and save the editing relationship between the user information edited by the first user and the first user's user identifier.
  • when the image is collected and sent in real time by the first user's terminal, the user information module 604 is specifically configured to determine each piece of user information corresponding to the second user's user identifier and, among the user information corresponding to the second user's user identifier, acquire the user information having an editing relationship with the first user's user identifier.
  • the AR module 605 is specifically configured to determine display state parameters of the user information according to the total credibility of the user information corresponding to the user identifier, and to generate, according to the display state parameters, AR graphics data containing the user information and its display state parameters, where the display state parameters include at least one of a color parameter, a brightness parameter, and a size parameter.
  • the embodiment of the present application further provides a terminal-side information display device, as shown in FIG. 7 (an illustrative end-to-end sketch of this terminal-server flow follows this list).
  • the device includes:
  • the collecting module 701 is configured to collect images in real time and send them to a server, where the images contain a user image, so that the server determines the user image contained in the image, determines the user identifier corresponding to the user image, acquires the user information corresponding to the determined user identifier according to the pre-saved correspondences between user identifiers and user information, generates the AR graphics data corresponding to the user information from the acquired user information, and feeds it back to the terminal;
  • the receiving module 702 is configured to receive AR graphics data corresponding to the user information fed back by the server;
  • the display module 703 is configured to display, according to the AR graphics data, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time.
  • an information display system is also provided, as shown in FIG. 8.
  • the system includes: an information display device 80 and a terminal 81, wherein
  • the terminal 81 is configured to collect images in real time and send them to the information display device 80, and to display, according to the AR graphics data fed back by the information display device 80 and corresponding to the user image contained in the images, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time.
  • the information display device 80 includes:
  • the AR intelligent module 801 is configured to receive an image acquired by the terminal in real time, and generate AR graphic data corresponding to the user information included in the image.
  • the identification verification module 802 is configured to determine, according to the received image, a user image included in the image, and determine a user identifier corresponding to the user image.
  • the label management module 803 is configured to obtain the user information corresponding to the determined user identifier according to the correspondence between the user identifiers and the user information that are saved in advance.
  • the big data risk control module 804 is configured to obtain the historical data corresponding to the user identifier; determine, according to the historical data, the degree to which the user information for which a correspondence is to be established matches the historical data, as the first credibility of that user information; save the first credibility; and establish the correspondence between that user information and the corresponding user identifier.
  • the mutual authentication module 805 is configured to determine, for each saved piece of user information, the other users who performed a specified operation on that user information; determine, for each such user and according to that user's level, the score that the specified operation performed by that user contributes to the user information; and determine, according to the scores determined for each such user, the second credibility of the user information and save it.
  • In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-persistent storage in computer readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash RAM.
  • Memory is an example of a computer readable medium.
  • Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology.
  • The information may be computer readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device.
  • As defined herein, computer readable media do not include transitory media, such as modulated data signals and carrier waves.
  • Embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
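For orientation only, here is a minimal end-to-end sketch of the round trip the modules above describe: the terminal collects a frame, the server identifies the user and returns AR graphics data, and the terminal anchors the graphic to the user image. It is an editor's illustration under assumed names; the in-memory stores, `identify_user`, `handle_frame`, and the payload layout are hypothetical, not part of the patent.

```python
import json

# Hypothetical in-memory stores standing in for the server's saved
# correspondences (user feature -> user identifier -> user information).
USER_FEATURES = {"userY": [0.12, 0.87, 0.44]}          # pre-stored biometric vectors
USER_INFO = {"userY": [{"tag": "movie expert", "credibility": 0.95}]}

def identify_user(image_features, threshold=0.9):
    """Match extracted image features against each stored user feature
    and return the matching user identifier (module 603's job)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0
    best = max(USER_FEATURES, key=lambda uid: cosine(image_features, USER_FEATURES[uid]))
    return best if cosine(image_features, USER_FEATURES[best]) >= threshold else None

def handle_frame(image_features, bbox):
    """Server side: modules 601-605 in sequence, returning AR graphics
    data for the terminal to render next to the tracked user image."""
    uid = identify_user(image_features)
    if uid is None:
        return None
    return json.dumps({"user_id": uid, "anchor": bbox, "items": USER_INFO[uid]})

# Terminal side: module 703 would draw the returned items at the anchor
# box in every newly collected frame, so the graphic follows the user.
payload = handle_frame([0.11, 0.88, 0.45], bbox=[120, 40, 260, 380])
print(payload)
```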

Abstract

This application discloses an information display method, device, and system. The method includes: receiving an image collected and sent by a terminal in real time; determining a user image contained in the image; determining a user identifier corresponding to the user image; acquiring, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier; generating, from the acquired user information, augmented reality (AR) graphics data corresponding to the user information; and feeding it back to the terminal, so that the terminal displays, according to the received AR graphics data, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time. With this method, a user can view other users' user information without performing any operation, which effectively improves the convenience of viewing user information in user-interaction scenarios and establishes a substantive association between virtual user information and the actual individual.

Description

Information display method, device, and system

Technical Field

This application relates to the field of computer technology, and in particular to an information display method, device, and system.
Background

With the support of the Internet and smart technologies, online systems (e.g., websites) have shifted from the traditional mode of providing users with a single service to being comprehensive network platforms. Through an online system, a user can interact not only with the server behind the online system but also with other users of the same online system for business interaction, information sharing, and other operations.

In the prior art, in scenarios where users interact via an online system, any user of the online system (hereinafter: the first user) can view the user information of other users (hereinafter: the second user), such as the account name, other users' reviews of the second user, a self-introduction, and user tags, so that the first user can find the needed second user for interactions such as information sharing, service acquisition, and following.

However, the prior-art approach still has the following drawbacks:

First, if the first user wants to learn the second user's user information, the first user can only use a terminal to visit the corresponding page (e.g., the second user's personal homepage), which is clearly cumbersome.

Second, the second user's user information viewed this way is only the information the second user registered in the online system; in other words, the user information the first user views is virtual network information, through which the actual second user cannot be identified in the real environment. Given the current trend of offline business moving online, if a user could match the user information being viewed to the actual individual, interactivity between users would increase; but prior-art approaches can hardly achieve this correspondence between virtual information and actual individuals.
Summary

Embodiments of this application provide an information display method, device, apparatus, and system, to solve the above problems in scenarios of interaction between users.

An information display method provided by an embodiment of this application includes:
receiving an image collected and sent by a terminal in real time;
determining a user image contained in the image;
determining a user identifier corresponding to the user image;
acquiring, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier; and
generating, from the acquired user information, augmented reality (AR) graphics data corresponding to the user information, and feeding it back to the terminal, so that the terminal displays, according to the received AR graphics data, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time.

An embodiment of this application further provides an information display method, including:
collecting images in real time and sending them to a server, where the images contain a user image, so that the server determines the user image contained in the image, determines the user identifier corresponding to the user image, acquires the user information corresponding to the determined user identifier according to pre-saved correspondences between user identifiers and user information, generates AR graphics data corresponding to the user information from the acquired user information, and feeds it back to the terminal;
receiving the AR graphics data corresponding to the user information fed back by the server; and
displaying, according to the AR graphics data, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time.

An embodiment of this application further provides an information display device, including:
a receiving module, configured to receive an image collected and sent by a terminal in real time;
a user image module, configured to determine a user image contained in the image;
a user identification module, configured to determine a user identifier corresponding to the user image;
a user information module, configured to acquire, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier; and
an AR module, configured to generate, from the acquired user information, augmented reality (AR) graphics data corresponding to the user information and feed it back to the terminal, so that the terminal displays, according to the received AR graphics data, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time.

An embodiment of this application further provides an information display device, including:
a collecting module, configured to collect images in real time and send them to a server, where the images contain a user image, so that the server determines the user image contained in the image, determines the user identifier corresponding to the user image, acquires the user information corresponding to the determined user identifier according to pre-saved correspondences between user identifiers and user information, generates AR graphics data corresponding to the user information from the acquired user information, and feeds it back to the terminal;
a receiving module, configured to receive the AR graphics data corresponding to the user information fed back by the server; and
a display module, configured to display, according to the AR graphics data, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time.

An embodiment of this application further provides an information display system, including:
a terminal, configured to collect images in real time and send them to an information display device, and to display, according to the AR graphics data fed back by the information display device and corresponding to the user image contained in the images, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time; and
an information display device, including:
an AR intelligent module, configured to receive the images collected by the terminal in real time and generate the AR graphics data corresponding to the user information contained in the images;
an identification verification module, configured to determine, for a received image, the user image contained in the image and the user identifier corresponding to the user image;
a tag management module, configured to acquire, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier;
a big data risk control module, configured to obtain historical data corresponding to the user identifier, determine from the historical data the degree to which the user information for which a correspondence is to be established matches the historical data, as the first credibility of that user information, save the first credibility, and establish the correspondence between that user information and the corresponding user identifier; and
a mutual authentication module, configured to determine, for each saved piece of user information, the other users who performed a specified operation on that user information, determine, for each such user and according to that user's level, the score that user's specified operation contributes to the user information, and determine and save the second credibility of the user information from the scores determined for each such user.

Embodiments of this application provide an information display method, device, and system. In the method, the terminal sends images collected in real time to the server; the server recognizes the person image (i.e., the user image) contained in the image, further determines the user's identity and that user's user information, generates the AR graphics data corresponding to the user information, and returns it to the terminal, so that the terminal can display in the image the AR graphic corresponding to the user image, the AR graphic reflecting that user's user information. Unlike the prior art, the approach of the embodiments of this application does not require a user to visit a corresponding page to view other users' user information, and the use of AR graphics associates virtual user information with the actual individual. Throughout the process, a user can view other users' user information without performing any operation. Clearly, this effectively improves the convenience of viewing user information in user-interaction scenarios and establishes a substantive association between virtual user information and the actual individual.
Brief Description of the Drawings

The drawings described here are provided for a further understanding of this application and form a part of this application; the illustrative embodiments of this application and their descriptions are used to explain this application and do not constitute an undue limitation on it. In the drawings:

FIG. 1 is an information display process provided by an embodiment of this application;
FIG. 2a is a schematic diagram of an AR helmet screen interface without an AR graphic displayed, provided by an embodiment of this application;
FIGS. 2b to 2d are schematic diagrams of AR helmet screen interfaces with AR graphics displayed, provided by an embodiment of this application;
FIG. 3 is a schematic architecture diagram of a cloud user information system and terminals, provided by an embodiment of this application;
FIGS. 4a and 4b are schematic diagrams of AR glasses interfaces in different interaction scenarios, provided by an embodiment of this application;
FIG. 5 is a schematic diagram of an AR glasses interface in another scenario, provided by an embodiment of this application;
FIG. 6 is a schematic structural diagram of a server-side information display device provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of a terminal-side information display device provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of an information display system provided by an embodiment of this application.

Detailed Description

To make the objectives, technical solutions, and advantages of this application clearer, the technical solutions of this application will be described clearly and completely below with reference to specific embodiments of this application and the corresponding drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this application without creative effort fall within the protection scope of this application.
As stated above, in scenarios of interaction between users, if the first user can learn the second user's user information (e.g., the second user's user tags, credibility, and reviews), the first user can form a preliminary understanding of the second user through that information, facilitating subsequent interaction between the users. At present, however, the first user can only visit the corresponding page through a terminal to view the second user's user information, and if the first user wants to interact with the second user offline, the first user cannot identify the second user in person from the user information, since the user information is merely virtual information.

What is needed is an information display method that can display user information conveniently while reflecting the correspondence between the displayed user information and the actual individual. On this basis, an embodiment of this application provides an information display method, as shown in FIG. 1.

FIG. 1 shows the information display process provided by an embodiment of this application, which specifically includes the following steps:

S101: Receive an image collected and sent by a terminal in real time.

In this embodiment, the terminal includes an AR device (the AR devices in this application all have an image collection function), where AR devices include but are not limited to: AR glasses, AR helmets, mobile phones with an AR function, computers with an AR function, and robots with an AR function (in this application, physical robots with collection and display functions).

Of course, in some embodiments of this application, as far as the collection process is concerned, the terminal may be any device with an image collection function used to collect images in real time, such as a mobile phone, a camera, a video camera, a computer equipped with a camera, or a robot. The collection may specifically be shooting.

The images collected by the terminal in real time may specifically be real-scene images (images collected from the natural world).

S102: Determine a user image contained in the image.

In this embodiment, the user image may be an image of a person; that is, while the terminal collects real-scene images in real time, a user is present in the scene, so the images collected by the terminal contain a user image.

For example, if the image collected by the terminal is a real-scene image of an office, the images of employees contained in it are user images.

Determining the user image in an image means performing person recognition on the image. Of course, in the embodiments of this application, different person recognition methods may be used to determine the user image contained in the image; this does not limit this application.

S103: Determine a user identifier corresponding to the user image.

After the user image in the image is determined, the user corresponding to that user image can be further determined. In other words, feature recognition (biometric recognition such as facial recognition or gait recognition) is performed on the user image to determine the identity of the user corresponding to the user image.

On this basis, the user identifier may be an identity identifier that marks the user's identity (e.g., the user's ID). In one approach of this embodiment, user identifiers may be stored in the online system, and a user enters the user's own biometric features into the online system in advance, so that the online system can establish, for each user, a correspondence between the user's biometric features and the user's identifier. Then, once the user's biometric features are recognized, the user's identifier (i.e., the user's identity) can be determined according to the correspondence.
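To make this enrollment-then-lookup correspondence concrete, here is a minimal sketch, assuming a hypothetical `FeatureStore` and pre-computed numeric feature vectors; how the biometric vector is extracted from the image is left abstract, since the text does not fix a particular algorithm.

```python
class FeatureStore:
    """Keeps the per-user correspondence the text describes:
    biometric feature -> user identifier."""
    def __init__(self):
        self._features = {}            # user_id -> feature vector

    def enroll(self, user_id, feature):
        # The user records a biometric feature in advance (e.g. a face
        # embedding); the online system saves the correspondence.
        self._features[user_id] = feature

    def identify(self, probe, max_distance=0.5):
        # Recognize a freshly extracted feature by the nearest stored one.
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        if not self._features:
            return None
        user_id, feature = min(self._features.items(), key=lambda kv: dist(probe, kv[1]))
        return user_id if dist(probe, feature) <= max_distance else None

store = FeatureStore()
store.enroll("userY", [0.9, 0.1, 0.3])
print(store.identify([0.88, 0.12, 0.31]))   # -> "userY"
```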
S104: Acquire, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier.

The user information includes but is not limited to: the user's real name, account name, self-introduction, other users' reviews of the user, and user tags (a user tag may be a label reflecting certain attributes of the user, such as "movie expert" or "gold member").

In the online system, a correspondence between each user's identifier and that user's user information is established, so once the user identifier corresponding to a user image is determined, the user information corresponding to that identifier can be further determined.

S105: Generate, from the acquired user information, augmented reality (AR) graphics data corresponding to the user information, and feed it back to the terminal, so that the terminal displays, according to the received AR graphics data, the corresponding AR graphic in the images collected in real time.

The AR graphic follows the user image in real time.

The AR graphic corresponding to the user information may be text, an icon, or another graphic displayed in two or three dimensions, without specific limitation here. In other words, in this embodiment the user information undergoes corresponding graphics processing to form the AR graphic of the user information. Thus, besides displaying the collected real-scene image, the terminal also displays the AR graphic of the corresponding user information on the user image in the real-scene image.

For example, suppose the terminal is an AR helmet and FIG. 2a shows the real-scene image it collects, which contains the figure of user Y (i.e., a user image). After the above method is used, the corresponding online system can determine that the user corresponding to the user image is user Y (whose user tag is "movie expert"); the online system then generates AR graphics data for the user tag "movie expert" and sends it to the AR helmet. The AR helmet then displays the AR graphic of the user tag "movie expert" on user Y's image, as shown in FIG. 2b.

Considering that in practical scenarios people in the real environment move at any time, and an AR device also moves to some degree with the user using it (e.g., when that user turns their head or walks, the AR device's shooting angle or position changes accordingly), the position of the collected user image may also change. For this reason, the AR graphic in the embodiments of this application follows the user image in real time, so as to clearly mark its association with the user image.

Continuing the example: if, in FIG. 2b, user Y moves toward the left of the frame, the AR graphic of user Y's tag "movie expert" moves along with user Y, as shown in FIG. 2c.

The above example uses only a two-dimensional view. In practice, the real-scene images a user observes through an AR device are three-dimensional, and relative movement between users is more complex; for instance, the user holding the AR device may move to the side of the captured user, in which case the AR graphic is always displayed from the observer's viewing angle. For example, continuing the example, in FIG. 2d the user wearing the AR helmet moves to the front-left of user Y, and the AR graphic of the tag "movie expert" is displayed from the observer's perspective (i.e., the tag's AR graphic always faces the user wearing the AR helmet). Of course, in the embodiments of this application, the display position and orientation of the AR graphic may be determined by the AR device's own device parameters (e.g., translation parameters, deflection parameters, focal length), which does not limit this application.
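As an illustration of this "follows in real time" behavior, the sketch below re-anchors a label to a tracked bounding box on each new frame; the tracker itself and the box format are assumptions of this example, not part of the disclosure.

```python
def place_label(frame_bbox, label_size, margin=8):
    """Re-anchor the AR label above the tracked user image each frame,
    so the graphic follows the person as the bounding box moves.
    frame_bbox = (x, y, w, h) of the user image in the current frame."""
    x, y, w, h = frame_bbox
    lw, lh = label_size
    cx = x + w / 2 - lw / 2          # horizontally centered on the person
    cy = max(0, y - lh - margin)     # just above the head, clamped to frame
    return (cx, cy)

# As the detected box moves left frame by frame, the label follows it.
for bbox in [(300, 90, 80, 200), (260, 92, 80, 200), (220, 95, 80, 200)]:
    print(place_label(bbox, label_size=(120, 24)))
```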
Of course, as one approach in this embodiment, the AR graphics data may be data into which the online system converts a fully constructed AR graphic for convenient transmission; in this approach, the terminal can directly display the corresponding AR graphic after receiving the AR graphics data.

As another approach in this embodiment, the AR graphics data may be data generated by the online system that is needed to construct the AR graphic; that is, after receiving the AR graphics data, the terminal constructs the AR graphic locally from it and then displays it.

Neither of the two approaches limits this application.

In addition, when displaying the AR graphic, the terminal may display it in different ways depending on the device: when the AR device is a wearable device such as AR glasses or an AR helmet, the AR graphic is displayed directly on the lenses or on the helmet's inner screen; when the AR device is a computer or robot with an AR function, the graphic may be displayed on the corresponding screen or by projection (including planar projection and holographic projection). No specific limitation is made here.

It should be noted that, for the above content, the operations of image receiving, recognition, and AR graphics data generation may be implemented by a server (or server cluster) behind the online system; in application, the server maintains a network connection with the terminal.

Through the above steps, the terminal sends images collected in real time to the server; the server recognizes the person image (i.e., the user image) contained in the image, further determines the user's identity and that user's user information, generates the AR graphics data corresponding to the user information, and returns it to the terminal, so that the terminal can display in the image the AR graphic corresponding to the user image, the AR graphic reflecting that user's user information. Unlike the prior art, the approach of the embodiments of this application does not require a user to visit a corresponding page to view other users' user information, and the use of AR graphics associates virtual user information with the actual individual. Throughout the process, a user can view other users' user information without performing any operation, which effectively improves the convenience of viewing user information in user-interaction scenarios and establishes a substantive association between virtual user information and the actual individual.

It should be noted that the content described in the embodiments of this application (including the method shown in FIG. 1 above and the subsequent content) may be based on the architecture shown in FIG. 3. Specifically, FIG. 3 contains a cloud user information system and AR devices: the cloud user information system recognizes the user image and the user identity in images sent by the AR devices and generates the AR graphics data; the AR devices collect images and display the images and the AR graphics corresponding to the AR graphics data.

The determination of user identity affects the accuracy of the subsequent determination of user information; if the user's identity cannot be determined accurately, the AR graphic generated later cannot reflect that user's true user information. For this reason, the embodiments of this application provide different ways of recognizing user identity.
Specifically, in practical application scenarios the server stores biometric features of different users, on which basis the user identifier corresponding to a user image can be determined. Specifically, determining the user identifier corresponding to the user image includes: extracting image features from the determined user image; determining, among user features stored in advance for each user identifier, a user feature that matches the extracted image features; and taking the user identifier corresponding to the determined user feature as the user identifier corresponding to the user image.

The image features are extracted for the user image in the image. In this embodiment, image features may be, for example, facial features, iris features, or body features extracted from the user image.

The pre-stored user features include biometric features. In this embodiment, biometric features include but are not limited to: facial features, fingerprint features, palm print features, retina features, human body contour features, gait features, voiceprint features, and so on.

Considering that in practice the image collected by the terminal may be a single frame or multiple consecutive frames (forming a video), the extracted image features differ in these two cases, and correspondingly different recognition methods must be used.

Specifically:

1. When the image collected by the terminal is a single frame

A single frame can be regarded as a static picture. In this case, the image features extracted for the person in the picture may be static features such as facial features, human body contour features, fingerprint features, palm print features, and iris features.

In different application scenarios, different image features can be extracted from the user image. For example, if the face in the user image collected by the terminal is fairly clear, facial features can be extracted; as another example, if the terminal's camera is associated with a fingerprint or palm print collector, the user's fingerprint or palm print features can be extracted. As one approach in this embodiment, multiple kinds of image features may be extracted in practice to increase the accuracy of the subsequent recognition process. Of course, this does not limit this application.

In addition, the feature extraction process may use corresponding feature extraction algorithms, such as feature-based recognition algorithms and template-based recognition algorithms, without specific limitation here. After the person's image features are extracted, they can be recognized against the pre-saved biometric features.
2. When the images collected by the terminal are multiple consecutive frames

Multiple consecutive frames can be regarded as a coherent, dynamic video. In this case, besides the person's external features (e.g., human body contour features and facial features), dynamic features such as gait features can also be extracted for the person in the picture.

As one approach in this embodiment, provided the images collected by the terminal are clear enough, facial features at different angles can be determined from the multiple frames, so as to construct a facial model of the person.

Specifically, when the biometric features include facial features, receiving the image collected and sent by the terminal in real time specifically includes: receiving multiple frames of images collected and sent by the terminal in real time.

On this basis, determining, among the user features stored in advance for each user identifier, a user feature matching the extracted image features specifically includes: extracting facial features from the user image in each frame of image; constructing a three-dimensional facial model from the extracted facial features; and determining, among the three-dimensional facial models stored in advance for each user identifier, a three-dimensional facial model that matches the constructed three-dimensional facial model.

It should be noted that, for each user, the server constructs that user's three-dimensional facial model in advance; that is, the server may scan the user's face from multiple angles (front face, profile, etc.) and construct the three-dimensional facial model from the multi-angle scans of the face. Of course, this does not limit this application.
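A minimal sketch of the multi-frame matching idea follows, assuming the three-dimensional facial model is represented as a list of 3D landmark points and that per-frame landmarks are already extracted and aligned; a real system would use proper 3D reconstruction rather than the per-point mean used here.

```python
def build_face_model(per_frame_landmarks):
    """Aggregate facial landmarks extracted from each frame into a single
    model: here simply the per-point mean over all frames."""
    n = len(per_frame_landmarks)
    return [tuple(sum(f[i][k] for f in per_frame_landmarks) / n for k in range(3))
            for i in range(len(per_frame_landmarks[0]))]

def model_distance(a, b):
    """RMS point-to-point distance between two landmark models."""
    sq = sum((p[k] - q[k]) ** 2 for p, q in zip(a, b) for k in range(3))
    return (sq / len(a)) ** 0.5

def match_model(probe, stored_models, max_rms=0.05):
    """Pick the pre-stored 3D facial model closest to the probe model."""
    uid = min(stored_models, key=lambda u: model_distance(probe, stored_models[u]))
    return uid if model_distance(probe, stored_models[uid]) <= max_rms else None

frames = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [(0.02, 0.0, 0.0), (0.98, 0.0, 0.0)]]
probe = build_face_model(frames)
print(match_model(probe, {"userY": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]}))  # -> "userY"
```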
This approach can determine the user's identity, i.e., the user's user identifier, fairly accurately.

As another approach in this embodiment, considering that different users differ in walking posture and exertion habits, a user's identity can be recognized through the user's gait features; that is, the user's gait features can be determined from multiple consecutive frames. Specifically:

When the biometric features include gait features, receiving the image collected and sent by the terminal in real time specifically includes: receiving multiple frames of images collected and sent by the terminal in real time.

On this basis, determining, among the user features stored in advance for each user identifier, a user feature matching the extracted image features specifically includes: extracting gait features from the user image in each frame of image, and determining, among the gait features stored in advance for each user identifier, a gait feature that matches the extracted gait features.

As yet another approach in this embodiment, some terminals have an audio collection function; if the user speaks or converses, the user's voice is collected by the terminal, and on this basis the terminal also sends the collected voice data of the user to the corresponding server.

Therefore, the server can also perform acoustic processing such as filtering and noise reduction on the voice data sent by the terminal, extract the user's voiceprint features from it, and match them against pre-saved voiceprint features, so as to determine the user identifier corresponding to the voiceprint, i.e., recognize the user's identity.

Of course, in the actual process of recognizing user identity, the above approaches may be used in combination so as to determine the user's identity accurately.

The above describes the process of recognizing user identity, i.e., determining the user identifier corresponding to the user image. After the user identifier is determined, the corresponding user information can be acquired for subsequent generation of the AR graphics data.
The user information in the embodiments of this application may be edited by the user, or edited by other users for that user.

Therefore, pre-saving the correspondences between user identifiers and user information specifically includes: receiving user information that a user edits for the user's own user identifier and, according to that user information, establishing and saving the correspondence between that user identifier and the user information;

or receiving user information that a first user edits for a second user and, according to the user information edited by the first user, establishing and saving the correspondence between the second user's user identifier and that user information.

For example, the user information may be a user tag: user X edits the user tag "food expert" for the user's own account userX. As another example, user X edits the user tag "movie expert" for user Y's account userY.

Whether user information is edited by the user or by other users for that user, it may carry a degree of subjectivity (it may not match reality), and in scenarios of interaction between users this may mislead other users to some extent.

Continuing the example: suppose the historical data corresponding to user X's own account userX (e.g., network sharing data such as microblogs and blogs, browsing record data) contains no food-related information; that is, the credibility of the user tag "food expert" that user X edited for the account userX is low. Then, in scenarios of interaction with other users, the tag would make other users believe that user X knows about food, which may mislead them.

On this basis, in the embodiments of this application a credibility is determined for user information. Specifically, establishing and saving a correspondence between a user identifier and user information specifically includes:

obtaining the historical data corresponding to the user identifier; determining, according to the historical data, the degree to which the user information for which a correspondence is to be established matches the historical data, as the first credibility of that user information; saving the first credibility of that user information; and establishing the correspondence between that user information and the corresponding user identifier.

It can be seen that, whether the user information was edited by the user for the user's own identifier or by other users for that user's identifier, the server determines the degree to which the edited user information matches the historical data (i.e., the first credibility).

The historical data may be the aforementioned network sharing data and browsing record data, or historical consumption data, chat data, and so on, without specific limitation here.

In addition, the matching degree between user information and historical data can be computed by many different methods; only one relatively simple method is described here, which does not limit this application: count the proportion of the user information in the historical data and determine the matching degree with the historical data from that frequency. For example, for user Y above, the user tag (a kind of user information) edited by other users for user Y's account userY is "movie expert". Suppose the account userY is a microblog account, so its historical data may be the microblog posts shared historically, and suppose movie-related content accounts for 95% of the account's historical microblog data. The matching degree between the tag "movie expert" of account userY and the historical data can then be set to 0.95.
In a similar way, for any user identifier, the matching degree between each piece of user information and the historical data (i.e., the first credibility) can be determined. In the process of establishing the correspondence between the user identifier and the user information, the server saves the first credibility of the user information.

For example, Table 1 below shows the user tags corresponding to the account userY as saved by the server, together with each tag's first credibility.

User tag        First credibility
Movie expert    0.95
Gourmet         0.17

Table 1
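The proportion-based matching degree just described can be written down directly. In this sketch the `classify` helper and the tag-to-category mapping are hypothetical stand-ins for whatever content classifier the online system actually uses.

```python
def tag_category(tag):
    # Hypothetical mapping from tag text to a history category.
    return {"movie expert": "movie", "gourmet": "food"}.get(tag)

def first_credibility(tag, history_items, classify):
    """Matching degree of a tag with historical data, computed as the
    simple proportion described in the text: the share of history items
    whose category matches the tag's category."""
    if not history_items:
        return 0.0
    hits = sum(1 for item in history_items if classify(item) == tag_category(tag))
    return hits / len(history_items)

# 19 of 20 posts are movie-related -> credibility 0.95, as in Table 1.
posts = ["movie"] * 19 + ["food"]
print(first_credibility("movie expert", posts, classify=lambda p: p))
```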
After the server has saved the first credibility of the user information, the display manner and state of the AR graphic can be decided in the subsequent process of generating the AR graphics data.

The above is how the server determines the first credibility of user information from the historical data corresponding to each user identifier stored in it; the process can be regarded as a kind of big-data verification process.

In practice, however, some characteristics of a user are not fully reflected in historical data. In the earlier example, although the tag "food expert" that user X edited for the account userX does not match the historical data, that does not mean user X knows nothing about food. In other words, historical data is only data produced by the user's corresponding network operations in the network environment and does not absolutely represent the user's characteristics or attributes.

Based on this consideration, in the embodiments of this application, besides the first credibility determined from historical data above, another credibility of user information can be determined based on certain specified operations between users (e.g., likes, comments).

Specifically, the method further includes: determining, for each saved piece of user information, the other users who performed a specified operation on that user information; determining, for each such user and according to that user's level, the score that the specified operation performed by that user contributes to the user information; and determining, according to the scores determined for each such user, the second credibility of the user information and saving it.

The specified operation includes but is not limited to: a follow operation, a comment operation, and a like operation.

The level includes but is not limited to: an account level, a credit level, and an attention level (the attention level may be determined by the number of other users following the user; in other words, the more other users follow, the higher the attention level).

From the above, for a given piece of user information, if it receives operations such as likes or comments from users of a higher level, the user information is shown to be fairly credible, since higher-level users are themselves more credible. In practice, the specific value of the second credibility may be determined as the product of the level of the user issuing the specified operation and a corresponding credibility coefficient (in one approach, the credibility coefficient is positively correlated with the user's level); this does not limit this application.

Of course, as one approach in this embodiment, for a given piece of user information, whether it is credible may also be determined by the number of other users who performed specified operations on it. For example, in the previous example, the tag "movie expert" of account userY received like operations from 1,000 other users; those 1,000 other users can then be considered to endorse the tag. Clearly, this also increases the credibility of the user tag.
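A minimal sketch of this scoring scheme, assuming an illustrative coefficient table: the text only states that the coefficient is positively correlated with the user's level, so the concrete numbers here are placeholders.

```python
# Hypothetical credibility coefficient per user level (higher level,
# higher coefficient, as the text suggests).
LEVEL_COEFFICIENT = {1: 0.2, 2: 0.5, 3: 1.0}

def second_credibility(operations):
    """operations: list of (op, level) pairs, where op is one of
    'follow' / 'comment' / 'like' performed on this piece of user
    information. Each operation contributes level * coefficient, and
    the scores are summed into the second credibility."""
    score = 0.0
    for op, level in operations:
        if op in ("follow", "comment", "like"):
            score += level * LEVEL_COEFFICIENT.get(level, 0.0)
    return score

print(second_credibility([("like", 3), ("comment", 2), ("like", 1)]))
# -> 3*1.0 + 2*0.5 + 1*0.2 = 4.2
```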
From the above, the first credibility and the second credibility of a piece of user information can be determined, and in practice the corresponding user information can then be acquired, according to the two determined credibilities, from all the user information corresponding to the user identifier. Specifically, acquiring the user information corresponding to the determined user identifier specifically includes: determining the total credibility of the user information according to the saved first credibility and second credibility of the user information, and acquiring, according to the total credibility, the user information whose total credibility is not lower than a preset credibility threshold.

In the embodiments of this application, the total credibility of user information may be determined by weighting, that is:

R = w1*r1 + w2*r2

where R is the total credibility of the user information, r1 is the first credibility of the user information, w1 is the weight of the first credibility, r2 is the second credibility of the user information, and w2 is the weight of the second credibility.

Of course, w1 and w2 can be adjusted and set according to the needs of the actual application; this does not limit this application.

It should be noted that in practice a user identifier may have a large amount of user information (e.g., a user's account may correspond to many user tags, from a dozen to several dozen). If all of this user information were acquired, it would all be displayed in the corresponding AR device once the AR graphics data is generated; the display would inevitably be dense and cluttered and would interfere with the user's observation of the AR graphics. In other words, it is not appropriate to acquire all the user information corresponding to a user identifier.

Therefore, as one approach in this embodiment, once the total credibility above is determined, the pieces of user information can be filtered by total credibility. For example, suppose the preset credibility threshold is 8; then, among all the user tags of user Y's account userY, the tags whose total credibility is not lower than 8 are acquired, so that when the AR device later displays AR graphics, the tags displayed on user Y are all tags whose total credibility is not lower than 8.
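The weighted total and the threshold filter can be sketched as follows; the weights w1 and w2 and the threshold are illustrative values chosen for the example, consistent with the statement above that they are adjustable.

```python
def total_credibility(r1, r2, w1=0.6, w2=0.4):
    """R = w1*r1 + w2*r2, with weights adjustable per deployment."""
    return w1 * r1 + w2 * r2

def filter_by_threshold(tags, threshold):
    """Keep only the user information whose total credibility is not
    lower than the preset credibility threshold."""
    return [t for t in tags if total_credibility(t["r1"], t["r2"]) >= threshold]

tags = [{"tag": "movie expert", "r1": 0.95, "r2": 0.90},
        {"tag": "gourmet",      "r1": 0.17, "r2": 0.20}]
print(filter_by_threshold(tags, threshold=0.5))   # only 'movie expert' remains
```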
As another approach in this embodiment, the pieces of user information corresponding to a user identifier may be of different types. Taking user tags as an example: the two user tags "loves latte" and "financial expert" reflect two characteristics of the user in different scenarios. In practice, the user information corresponding to a user identifier is usually associated with a corresponding service scenario. As in the example above, the user tag "loves latte" is better suited for display in beverage-related scenarios (e.g., when the user is in a coffee shop), while the user tag "financial expert" is better suited for display in finance-related scenarios (e.g., when the user is in a bank). That is, in the process of acquiring user information, the multiple pieces of user information corresponding to the user identifier can also be filtered according to the corresponding service scenario.

Specifically, acquiring the user information corresponding to the determined user identifier further includes: acquiring environment information; determining a service scenario according to the environment information; and acquiring, according to the service scenario, the user information matching the service scenario.

The above environment information includes but is not limited to: network information, location information, geographic identification information, and the like.

The network information may specifically be information such as the IP address and network name of the network the AR device is connected to; the location information may be information about the position where the AR device is located; the geographic identification information may be information identifying an actual place, provided by, for example, signs or identification chips with Radio Frequency Identification (RFID) functionality. Actual places here may include hotels, businesses, shopping malls, airports, and so on.

Through the above environment information, the environment the user of the AR device is currently in can be determined, and further the environment of the user captured by the AR device, so that the corresponding user information can be acquired accordingly.

For a specific example: in a coffee shop, employee M uses AR glasses, which collect real-scene images of the shop in real time. Suppose the AR glasses are connected to the corresponding server via a wireless network, and the server determines, through an RFID-capable geographic identification chip in the shop, that the actual place where the AR glasses are located is: a coffee shop.

Now user X walks into the coffee shop. The AR glasses capture user X's user image and send it to the server, so that the server determines all of user X's user tags. Since the server knows the AR device's actual place is a coffee shop, the server filters, from all of user X's user tags, the tags for service scenarios related to coffee shops. Suppose one of user X's user tags is "loves latte". The server therefore acquires the user tag "loves latte", generates the corresponding AR graphics data based on it, and sends the data to the AR glasses. At this point, as shown in FIG. 4a, the AR glasses display user X and the AR graphic of the tag "loves latte", so employee M can intuitively learn user X's preference regarding coffee.

In addition, if other users in the coffee shop also use AR devices, AR graphics similar to FIG. 4a are displayed in the AR devices those users use as well.

Similarly, suppose that in a bank, employee N uses AR glasses and the bank has a similar RFID geographic identification, so the server can determine through it that the actual place where the AR glasses are located is: a bank. If user X walks into the bank, the AR glasses used by employee N capture user X's user image, and from all the user tags corresponding to user X the server determines the tags for bank-related financial service scenarios. Suppose one of user X's user tags is "financial expert"; the server then acquires that tag and generates the corresponding AR graphics data based on it. At this point, as shown in FIG. 4b, the AR glasses display user X and the AR graphic of the tag "financial expert", so employee N can intuitively learn user X's characteristics regarding financial services and later recommend suitable financial products to user X.

From the above two examples, the server can determine different service scenarios according to the AR device's location or actual place (i.e., different environment information), so as to acquire, from the user information corresponding to the user image captured by the AR device, the user information fitting that service scenario. Clearly, this effectively improves the intelligence of the AR device when displaying user-information AR graphics.

In practice, the above two ways of acquiring the user information corresponding to a user identifier (one through the total credibility of the user information, the other by determining the corresponding service scenario from environment information) can be used separately or in combination. When combined, user information that matches the environment information and whose total credibility is not lower than the preset credibility threshold can be acquired in practice, which clearly yields more accurate user information. Of course, this does not limit this application.
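A minimal sketch of the environment-to-scenario filtering, assuming hypothetical lookup tables from RFID place identifiers to service scenarios and from tags to scenarios; the real mappings would live in the server's configuration.

```python
# Hypothetical lookups: RFID place identifiers to service scenarios,
# and tags annotated with the scenario they belong to.
PLACE_TO_SCENARIO = {"rfid:cafe-001": "beverage", "rfid:bank-007": "finance"}
TAG_SCENARIO = {"loves latte": "beverage", "financial expert": "finance"}

def scenario_tags(environment_id, all_tags):
    """Determine the service scenario from environment information and
    keep only the user tags that match that scenario."""
    scenario = PLACE_TO_SCENARIO.get(environment_id)
    return [t for t in all_tags if TAG_SCENARIO.get(t) == scenario]

print(scenario_tags("rfid:cafe-001", ["loves latte", "financial expert"]))
# -> ['loves latte'], matching the coffee-shop example above
```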
In addition, any user can edit corresponding user information for other users and can also accept user information that other users edit for that user. That is, when a first user edits corresponding user information for a second user, in that scenario the method further includes: determining the user identifier corresponding to the first user, and recording and saving the editing relationship between the user information edited by the first user and the first user's user identifier.

In other words, where users edit user information for each other, the server records "who" edited each piece of user information. On this basis, if the image is collected and sent in real time by the first user's terminal, acquiring the user information corresponding to the determined user identifier specifically includes: determining each piece of user information corresponding to the second user's user identifier, and acquiring, among the user information corresponding to the second user's user identifier, the user information having an editing relationship with the first user's user identifier.

For a specific example: a coffee shop edits corresponding user tags for different customers according to their consumption habits. Suppose the user tags edited for user Y are "loves latte" and "coffee not heated"; this editing relationship is saved in the server.

Suppose that at some moment user Y walks into the coffee shop while coffee shop employee M is using AR glasses. The AR glasses capture user Y's user image and send the collected image to the server over the network connection; the server determines that the user image is user Y and, according to the editing relationship already saved, acquires the user tags "loves latte" and "coffee not heated".

In other words, even if user Y has many user tags, some of which may have been edited by employees of other coffee shops, the server only acquires the above two user tags edited by the coffee shop where employee M works.

So, as shown in FIG. 5, the AR glasses used by employee M display user Y's two user tags "loves latte" and "coffee not heated", and employee M can then provide user Y with the corresponding service according to these two tags.
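The editing-relationship filter reduces to a simple lookup; the edit records below are hypothetical and mirror the coffee-shop example.

```python
# Hypothetical record of who edited which tag for user Y, mirroring the
# editing relationships the server is said to save.
EDITS = [("cafe-M", "userY", "loves latte"),
         ("cafe-M", "userY", "coffee not heated"),
         ("cafe-Z", "userY", "night owl")]

def tags_edited_by(viewer_id, subject_id, edits):
    """Return only the second user's tags that have an editing
    relationship with the first user's identifier."""
    return [tag for editor, subject, tag in edits
            if editor == viewer_id and subject == subject_id]

print(tags_edited_by("cafe-M", "userY", EDITS))
# -> ['loves latte', 'coffee not heated'], as in FIG. 5
```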
As an additional approach in this embodiment, the server can adjust the display effect of the AR graphic according to the total credibility of the user information: for example, if a piece of user information has high credibility, its text size can be set larger; similarly, its brightness or color can also be set. On this basis, generating the AR graphics data corresponding to the user information specifically includes: determining display state parameters of the user information according to the total credibility of the user information corresponding to the user identifier, and generating, according to the display state parameters, AR graphics data containing the user information and its display state parameters.

The display state parameters include at least one of: a color parameter, a brightness parameter, and a size parameter.
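A small sketch of mapping total credibility to the display state parameters named above; the breakpoints and concrete values are illustrative only, since the text merely says that higher credibility may yield larger text or a different brightness or color.

```python
def display_state(total_credibility):
    """Map total credibility to display state parameters (size,
    brightness, color); the thresholds here are placeholders."""
    if total_credibility >= 0.8:
        return {"size_pt": 18, "brightness": 1.0, "color": "#ffffff"}
    if total_credibility >= 0.5:
        return {"size_pt": 14, "brightness": 0.8, "color": "#dddddd"}
    return {"size_pt": 10, "brightness": 0.6, "color": "#aaaaaa"}

print(display_state(0.93))   # the most credible tags render largest
```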
The above is the server-side information display method provided by the embodiments of this application. For the terminal side, an embodiment of this application also provides an information display method, which specifically includes the following steps:

Step 1: Collect images in real time and send them to a server.

The images contain a user image, so that the server determines the user image contained in the image, determines the user identifier corresponding to the user image, acquires the user information corresponding to the determined user identifier according to the pre-saved correspondences between user identifiers and user information, generates AR graphics data corresponding to the user information from the acquired user information, and feeds it back to the terminal.

In this embodiment, the terminal may be a device with an image collection function, as described above and not repeated here.

Step 2: Receive the AR graphics data corresponding to the user information fed back by the server.

Step 3: Display, according to the AR graphics data, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time.
The above is the information display method provided by the embodiments of this application. Based on the same idea, an embodiment of this application also provides an information display device, as shown in FIG. 6.

The information display device in FIG. 6 includes:

a receiving module 601, configured to receive an image collected and sent by a terminal in real time;

a user image module 602, configured to determine a user image contained in the image;

a user identification module 603, configured to determine a user identifier corresponding to the user image;

a user information module 604, configured to acquire, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier; and

an AR module 605, configured to generate, from the acquired user information, augmented reality (AR) graphics data corresponding to the user information and feed it back to the terminal, so that the terminal displays, according to the received AR graphics data, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time.
Specifically, the terminal includes an AR device. In one approach of the embodiments of this application, the AR device includes at least: AR glasses, an AR helmet, a mobile phone with an AR function, a computer with an AR function, and a robot with an AR function.

Specifically, the user image module 602 is specifically configured to perform person recognition on the image and extract the user image contained in the image.

The user identification module 603 is specifically configured to extract image features from the determined user image, determine, among user features stored in advance for each user identifier, a user feature matching the extracted image features, and take the user identifier corresponding to the determined user feature as the user identifier corresponding to the user image.

The pre-stored user features include biometric features, which include at least one of: facial features, fingerprint features, palm print features, retina features, human body contour features, gait features, and voiceprint features.

When the biometric features include facial features, the receiving module 601 is specifically configured to receive multiple frames of images collected and sent by the terminal in real time;

the user identification module 603 is specifically configured to extract facial features from the user image in each frame of image, construct a three-dimensional facial model from the extracted facial features, and determine, among three-dimensional facial models stored in advance for each user identifier, a three-dimensional facial model matching the constructed three-dimensional facial model.

When the biometric features include gait features, the receiving module 601 is specifically configured to receive multiple frames of images collected and sent by the terminal in real time;

the user identification module 603 is specifically configured to extract gait features from the user image in each frame of image and determine, among gait features stored in advance for each user identifier, a gait feature matching the extracted gait features.

The user information module 604 is specifically configured to receive user information edited by a user for the user's own user identifier and, based on it, establish and save the correspondence between that user identifier and the user information; or

to receive user information edited by a first user for a second user and, based on it, establish and save the correspondence between the second user's user identifier and that user information.

The user information module 604 is specifically configured to obtain the historical data corresponding to the user identifier, determine from the historical data the degree to which the user information for which a correspondence is to be established matches the historical data, as the first credibility of that user information, save the first credibility, and establish the correspondence between that user information and the corresponding user identifier.

The device further includes a mutual authentication module 606, configured to determine, for each saved piece of user information, the other users who performed a specified operation on that user information; determine, for each such user and according to that user's level, the score that the specified operation performed by that user contributes to the user information; and determine and save the second credibility of the user information from the scores determined for each such user;

where the specified operation includes at least one of: a follow operation, a comment operation, and a like operation;

and the level includes at least one of: an account level, a credit level, and an attention level.

The user information module 604 is specifically configured to determine the total credibility of the user information according to its saved first credibility and second credibility, and acquire, according to the total credibility, the user information whose total credibility is not lower than the preset credibility threshold.

The user information module 604 is further configured to acquire environment information, determine a service scenario according to the environment information, and acquire, according to the service scenario, the user information matching the service scenario;

where the environment information includes at least one of: network information, location information, and geographic identification information.

The device further includes an edit relationship record module 607, configured to determine the user identifier corresponding to the first user, and record and save the editing relationship between the user information edited by the first user and the first user's user identifier.

When the image is collected and sent in real time by the first user's terminal, the user information module 604 is specifically configured to determine each piece of user information corresponding to the second user's user identifier and acquire, among the user information corresponding to the second user's user identifier, the user information having an editing relationship with the first user's user identifier.

The AR module 605 is specifically configured to determine display state parameters of the user information according to the total credibility of the user information corresponding to the user identifier, and generate, according to the display state parameters, AR graphics data containing the user information and its display state parameters, where the display state parameters include at least one of: a color parameter, a brightness parameter, and a size parameter.
Based on the same idea, an embodiment of this application also provides an information display device, as shown in FIG. 7. The device includes:

a collecting module 701, configured to collect images in real time and send them to a server, where the images contain a user image, so that the server determines the user image contained in the image, determines the user identifier corresponding to the user image, acquires the user information corresponding to the determined user identifier according to the pre-saved correspondences between user identifiers and user information, generates AR graphics data corresponding to the user information from the acquired user information, and feeds it back to the terminal;

a receiving module 702, configured to receive the AR graphics data corresponding to the user information fed back by the server; and

a display module 703, configured to display, according to the AR graphics data, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time.
In connection with the above, an embodiment of this application further provides an information display system, as shown in FIG. 8. The system includes an information display device 80 and a terminal 81, where

the terminal 81 is configured to collect images in real time and send them to the information display device 80, and to display, according to the received AR graphics data fed back by the information display device 80 and corresponding to the user image contained in the images, the corresponding AR graphic in the images collected in real time, where the AR graphic follows the user image in real time.

The information display device 80 includes:

an AR intelligent module 801, configured to receive the images collected by the terminal in real time and generate the AR graphics data corresponding to the user information contained in the images;

an identification verification module 802, configured to determine, for a received image, the user image contained in the image and the user identifier corresponding to the user image;

a tag management module 803, configured to acquire, according to the pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier;

a big data risk control module 804, configured to obtain the historical data corresponding to the user identifier, determine from the historical data the degree to which the user information for which a correspondence is to be established matches the historical data, as the first credibility of that user information, save the first credibility, and establish the correspondence between that user information and the corresponding user identifier; and

a mutual authentication module 805, configured to determine, for each saved piece of user information, the other users who performed a specified operation on that user information, determine, for each such user and according to that user's level, the score that the specified operation performed by that user contributes to the user information, and determine and save the second credibility of the user information from the scores determined for each such user.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.

The memory may include non-persistent storage in computer readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash RAM. Memory is an example of a computer readable medium.

Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory media, such as modulated data signals and carrier waves.

It should also be noted that the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a(n) ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.

Those skilled in the art should understand that embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.

The above descriptions are merely embodiments of this application and are not intended to limit this application. For those skilled in the art, this application may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the scope of the claims of this application.

Claims (33)

  1. An information display method, comprising:
    receiving an image collected and sent by a terminal in real time;
    determining a user image contained in the image;
    determining a user identifier corresponding to the user image;
    acquiring, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier; and
    generating, from the acquired user information, augmented reality (AR) graphics data corresponding to the user information, and feeding it back to the terminal, so that the terminal displays, according to the received AR graphics data, a corresponding AR graphic in images collected in real time, wherein the AR graphic follows the user image in real time.
  2. The method according to claim 1, wherein the terminal comprises an AR device;
    wherein the AR device comprises at least: AR glasses, an AR helmet, a mobile phone with an AR function, a computer with an AR function, and a robot with an AR function.
  3. The method according to claim 1 or 2, wherein determining the user image contained in the image specifically comprises:
    performing person recognition on the image and extracting the user image contained in the image.
  4. The method according to claim 1 or 2, wherein determining the user identifier corresponding to the user image specifically comprises:
    extracting image features from the determined user image;
    determining, among user features stored in advance for each user identifier, a user feature that matches the extracted image features; and
    taking the user identifier corresponding to the determined user feature as the user identifier corresponding to the user image.
  5. The method according to claim 4, wherein the user features stored in advance comprise biometric features;
    the biometric features comprise at least one of: facial features, fingerprint features, palm print features, retina features, human body contour features, gait features, and voiceprint features.
  6. The method according to claim 5, wherein, when the biometric features comprise facial features, receiving the image collected and sent by the terminal in real time specifically comprises:
    receiving multiple frames of images collected and sent by the terminal in real time; and
    determining, among the user features stored in advance for each user identifier, a user feature that matches the extracted image features specifically comprises:
    extracting facial features from the user image in each frame of image;
    constructing a three-dimensional facial model from the extracted facial features; and
    determining, among three-dimensional facial models stored in advance for each user identifier, a three-dimensional facial model that matches the constructed three-dimensional facial model.
  7. The method according to claim 5, wherein, when the biometric features comprise gait features, receiving the image collected and sent by the terminal in real time specifically comprises:
    receiving multiple frames of images collected and sent by the terminal in real time; and
    determining, among the user features stored in advance for each user identifier, a user feature that matches the extracted image features specifically comprises:
    extracting gait features from the user image in each frame of image; and
    determining, among gait features stored in advance for each user identifier, a gait feature that matches the extracted gait features.
  8. The method according to claim 1 or 2, wherein pre-saving the correspondences between user identifiers and user information specifically comprises:
    receiving user information edited by a user for the user's own user identifier, and establishing and saving a correspondence between that user identifier and the user information according to the user information edited by the user for the user's own user identifier; or
    receiving user information edited by a first user for a second user, and establishing and saving a correspondence between the second user's user identifier and that user information according to the user information edited by the first user for the second user.
  9. The method according to claim 8, wherein establishing and saving a correspondence between a user identifier and user information specifically comprises:
    obtaining historical data corresponding to the user identifier;
    determining, according to the historical data, a degree of matching between the user information for which a correspondence is to be established and the historical data, as a first credibility of the user information for which the correspondence is to be established; and
    saving the first credibility of the user information for which the correspondence is to be established, and establishing the correspondence between that user information and the user identifier for which the correspondence is to be established.
  10. The method according to claim 9, further comprising:
    determining, for each saved piece of user information, other users who performed a specified operation on that user information;
    determining, for each of the other users and according to that other user's level, a score that the specified operation performed by that other user contributes to the user information; and
    determining, according to the scores determined for each of the other users, a second credibility of the user information, and saving it;
    wherein the specified operation comprises at least one of: a follow operation, a comment operation, and a like operation; and
    the level comprises at least one of: an account level, a credit level, and an attention level.
  11. The method according to claim 10, wherein acquiring the user information corresponding to the determined user identifier specifically comprises:
    determining a total credibility of the user information according to the saved first credibility and second credibility of the user information; and
    acquiring, according to the total credibility, the user information whose total credibility is not lower than a preset credibility threshold.
  12. The method according to claim 11, wherein acquiring the user information corresponding to the determined user identifier further comprises:
    acquiring environment information;
    determining a service scenario according to the environment information; and
    acquiring, according to the service scenario, user information that matches the service scenario;
    wherein the environment information comprises at least one of: network information, location information, and geographic identification information.
  13. The method according to claim 8, further comprising:
    determining the user identifier corresponding to the first user; and
    recording and saving an editing relationship between the user information edited by the first user and the first user's user identifier.
  14. The method according to claim 13, wherein the image is collected and sent in real time by the first user's terminal; and
    acquiring the user information corresponding to the determined user identifier specifically comprises:
    determining each piece of user information corresponding to the second user's user identifier; and
    acquiring, among the user information corresponding to the second user's user identifier, the user information having an editing relationship with the first user's user identifier.
  15. The method according to claim 11, wherein generating the AR graphics data corresponding to the user information specifically comprises:
    determining display state parameters of the user information according to the total credibility of the user information corresponding to the user identifier; and
    generating, according to the display state parameters, AR graphics data containing the user information and its display state parameters;
    wherein the display state parameters comprise at least one of: a color parameter, a brightness parameter, and a size parameter.
  16. An information display method, comprising:
    collecting images in real time and sending them to a server, wherein the images contain a user image, so that the server determines the user image contained in the image, determines the user identifier corresponding to the user image, acquires, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier, generates, from the acquired user information, AR graphics data corresponding to the user information, and feeds it back to the terminal;
    receiving the AR graphics data corresponding to the user information fed back by the server; and
    displaying, according to the AR graphics data, a corresponding AR graphic in the images collected in real time, wherein the AR graphic follows the user image in real time.
  17. An information display device, comprising:
    a receiving module, configured to receive an image collected and sent by a terminal in real time;
    a user image module, configured to determine a user image contained in the image;
    a user identification module, configured to determine a user identifier corresponding to the user image;
    a user information module, configured to acquire, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier; and
    an AR module, configured to generate, from the acquired user information, augmented reality (AR) graphics data corresponding to the user information and feed it back to the terminal, so that the terminal displays, according to the received AR graphics data, a corresponding AR graphic in images collected in real time, wherein the AR graphic follows the user image in real time.
  18. The device according to claim 17, wherein the terminal comprises an AR device;
    wherein the AR device comprises at least: AR glasses, an AR helmet, a mobile phone with an AR function, a computer with an AR function, and a robot with an AR function.
  19. The device according to claim 17 or 18, wherein the user image module is specifically configured to perform person recognition on the image and extract the user image contained in the image.
  20. The device according to claim 17 or 18, wherein the user identification module is specifically configured to: extract image features from the determined user image; determine, among user features stored in advance for each user identifier, a user feature that matches the extracted image features; and take the user identifier corresponding to the determined user feature as the user identifier corresponding to the user image.
  21. The device according to claim 20, wherein the user features stored in advance comprise biometric features;
    the biometric features comprise at least one of: facial features, fingerprint features, palm print features, retina features, human body contour features, gait features, and voiceprint features.
  22. The device according to claim 21, wherein, when the biometric features comprise facial features, the receiving module is specifically configured to receive multiple frames of images collected and sent by the terminal in real time;
    the user identification module is specifically configured to extract facial features from the user image in each frame of image, construct a three-dimensional facial model from the extracted facial features, and determine, among three-dimensional facial models stored in advance for each user identifier, a three-dimensional facial model that matches the constructed three-dimensional facial model.
  23. The device according to claim 21, wherein, when the biometric features comprise gait features, the receiving module is specifically configured to receive multiple frames of images collected and sent by the terminal in real time;
    the user identification module is specifically configured to extract gait features from the user image in each frame of image and determine, among gait features stored in advance for each user identifier, a gait feature that matches the extracted gait features.
  24. The device according to claim 17 or 18, wherein the user information module is specifically configured to: receive user information edited by a user for the user's own user identifier, and establish and save a correspondence between that user identifier and the user information according to the user information edited by the user for the user's own user identifier; or
    receive user information edited by a first user for a second user, and establish and save a correspondence between the second user's user identifier and that user information according to the user information edited by the first user for the second user.
  25. The device according to claim 24, wherein the user information module is specifically configured to obtain historical data corresponding to the user identifier; determine, according to the historical data, a degree of matching between the user information for which a correspondence is to be established and the historical data, as the first credibility of that user information; save the first credibility; and establish the correspondence between that user information and the user identifier for which the correspondence is to be established.
  26. The device according to claim 25, further comprising: a mutual authentication module, configured to determine, for each saved piece of user information, other users who performed a specified operation on that user information; determine, for each of the other users and according to that other user's level, a score that the specified operation performed by that other user contributes to the user information; and determine, according to the scores determined for each of the other users, a second credibility of the user information and save it;
    wherein the specified operation comprises at least one of: a follow operation, a comment operation, and a like operation; and
    the level comprises at least one of: an account level, a credit level, and an attention level.
  27. The device according to claim 26, wherein the user information module is specifically configured to determine a total credibility of the user information according to the saved first credibility and second credibility of the user information, and acquire, according to the total credibility, the user information whose total credibility is not lower than a preset credibility threshold.
  28. The device according to claim 27, wherein the user information module is further configured to acquire environment information, determine a service scenario according to the environment information, and acquire, according to the service scenario, user information that matches the service scenario;
    wherein the environment information comprises at least one of: network information, location information, and geographic identification information.
  29. The device according to claim 24, further comprising: an edit relationship record module, configured to determine the user identifier corresponding to the first user, and record and save an editing relationship between the user information edited by the first user and the first user's user identifier.
  30. The device according to claim 29, wherein, when the image is collected and sent in real time by the first user's terminal, the user information module is specifically configured to determine each piece of user information corresponding to the second user's user identifier, and acquire, among the user information corresponding to the second user's user identifier, the user information having an editing relationship with the first user's user identifier.
  31. The device according to claim 27, wherein the AR module is specifically configured to determine display state parameters of the user information according to the total credibility of the user information corresponding to the user identifier, and generate, according to the display state parameters, AR graphics data containing the user information and its display state parameters, wherein the display state parameters comprise at least one of: a color parameter, a brightness parameter, and a size parameter.
  32. An information display device, comprising:
    a collecting module, configured to collect images in real time and send them to a server, wherein the images contain a user image, so that the server determines the user image contained in the image, determines the user identifier corresponding to the user image, acquires, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier, generates, from the acquired user information, AR graphics data corresponding to the user information, and feeds it back to the terminal;
    a receiving module, configured to receive the AR graphics data corresponding to the user information fed back by the server; and
    a display module, configured to display, according to the AR graphics data, a corresponding AR graphic in the images collected in real time, wherein the AR graphic follows the user image in real time.
  33. An information display system, comprising:
    a terminal, configured to collect images in real time and send them to an information display device, and to display, according to received AR graphics data fed back by the information display device and corresponding to the user image contained in the images, a corresponding AR graphic in the images collected in real time, wherein the AR graphic follows the user image in real time; and
    the information display device, comprising:
    an AR intelligent module, configured to receive the images collected by the terminal in real time and generate the AR graphics data corresponding to the user information contained in the images;
    an identification verification module, configured to determine, for a received image, the user image contained in the image and determine the user identifier corresponding to the user image;
    a tag management module, configured to acquire, according to pre-saved correspondences between user identifiers and user information, the user information corresponding to the determined user identifier;
    a big data risk control module, configured to obtain historical data corresponding to the user identifier; determine, according to the historical data, a degree of matching between the user information for which a correspondence is to be established and the historical data, as the first credibility of that user information; save the first credibility; and establish the correspondence between that user information and the user identifier for which the correspondence is to be established; and
    a mutual authentication module, configured to determine, for each saved piece of user information, other users who performed a specified operation on that user information; determine, for each of the other users and according to that other user's level, a score that the specified operation performed by that other user contributes to the user information; and determine, according to the scores determined for each of the other users, a second credibility of the user information and save it.
PCT/CN2017/077400 2016-03-29 2017-03-20 Information display method, device and system WO2017167060A1 (zh)

Priority Applications (13)

Application Number Priority Date Filing Date Title
AU2017243515A AU2017243515C1 (en) 2016-03-29 2017-03-20 Information display method, device and system
CA3019224A CA3019224C (en) 2016-03-29 2017-03-20 Information display method, device, and system
MX2018011850A MX2018011850A (es) 2016-03-29 2017-03-20 Metodo, dispositivo y sistema para despliegue de informacion.
KR1020187031087A KR102293008B1 (ko) 2016-03-29 2017-03-20 정보 디스플레이 방법, 디바이스, 및 시스템
SG11201808351QA SG11201808351QA (en) 2016-03-29 2017-03-20 Information display method, device and system
MYPI2018703488A MY189680A (en) 2016-03-29 2017-03-20 Information display method, device, and system
RU2018137829A RU2735617C2 (ru) 2016-03-29 2017-03-20 Способ, устройство и система отображения информации
EP17773090.0A EP3438849A4 (en) 2016-03-29 2017-03-20 INFORMATION DISPLAY PROCESS, DEVICE AND SYSTEM
JP2018551862A JP6935421B2 (ja) 2016-03-29 2017-03-20 情報の表示方法、デバイス、及びシステム
BR112018069970A BR112018069970A2 (pt) 2016-03-29 2017-03-20 métodos de exibição de informações, dispositivos de exibição de informações e sistema de exibição de informações
US16/142,851 US10691946B2 (en) 2016-03-29 2018-09-26 Information display method, device, and system
PH12018502093A PH12018502093A1 (en) 2016-03-29 2018-09-28 Information display method, device and system
US16/882,847 US11036991B2 (en) 2016-03-29 2020-05-26 Information display method, device, and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610186690.3 2016-03-29
CN201610186690.3A CN107239725B (zh) 2016-03-29 2016-03-29 Information display method, device and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/142,851 Continuation US10691946B2 (en) 2016-03-29 2018-09-26 Information display method, device, and system

Publications (1)

Publication Number Publication Date
WO2017167060A1 true WO2017167060A1 (zh) 2017-10-05

Family

ID=59962604

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/077400 WO2017167060A1 (zh) 2016-03-29 2017-03-20 一种信息展示方法、装置及系统

Country Status (15)

Country Link
US (2) US10691946B2 (zh)
EP (1) EP3438849A4 (zh)
JP (1) JP6935421B2 (zh)
KR (1) KR102293008B1 (zh)
CN (1) CN107239725B (zh)
AU (1) AU2017243515C1 (zh)
BR (1) BR112018069970A2 (zh)
CA (1) CA3019224C (zh)
MX (1) MX2018011850A (zh)
MY (1) MY189680A (zh)
PH (1) PH12018502093A1 (zh)
RU (1) RU2735617C2 (zh)
SG (1) SG11201808351QA (zh)
TW (1) TWI700612B (zh)
WO (1) WO2017167060A1 (zh)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239725B (zh) 2016-03-29 2020-10-16 阿里巴巴集团控股有限公司 一种信息展示方法、装置及系统
US10242477B1 (en) * 2017-01-16 2019-03-26 Snap Inc. Coded vision system
CN107895325A (zh) * 2017-11-27 2018-04-10 启云科技股份有限公司 社群信息链接系统
CN109842790B (zh) 2017-11-29 2021-02-26 财团法人工业技术研究院 影像信息显示方法与显示器
TWI702531B (zh) * 2017-11-29 2020-08-21 財團法人工業技術研究院 影像資訊顯示方法、影像資訊顯示系統與顯示器
CN107958492A (zh) * 2017-11-30 2018-04-24 中国铁道科学研究院电子计算技术研究所 一种基于人脸识别的身份验证方法及装置
CN107894842A (zh) * 2017-12-19 2018-04-10 北京盈拓文化传媒有限公司 增强现实场景复原方法、终端及计算机可读存储介质
CN110555171B (zh) * 2018-03-29 2024-04-30 腾讯科技(深圳)有限公司 一种信息处理方法、装置、存储介质及系统
CN108510437B (zh) * 2018-04-04 2022-05-17 科大讯飞股份有限公司 一种虚拟形象生成方法、装置、设备以及可读存储介质
JP6542445B1 (ja) * 2018-07-31 2019-07-10 株式会社 情報システムエンジニアリング 情報提供システム及び情報提供方法
CN109102874A (zh) * 2018-08-06 2018-12-28 百度在线网络技术(北京)有限公司 基于ar技术的医疗处理方法、装置、设备和存储介质
CN109191180A (zh) * 2018-08-06 2019-01-11 百度在线网络技术(北京)有限公司 评价的获取方法及装置
TWI691891B (zh) * 2018-09-07 2020-04-21 財團法人工業技術研究院 多重目標物資訊顯示方法及裝置
CN111046704B (zh) * 2018-10-12 2023-05-09 杭州海康威视数字技术股份有限公司 存储身份识别信息的方法和装置
TR201909402A2 * 2019-06-25 2021-01-21 Havelsan Hava Elektronik Sanayi Ve Ticaret Anonim Sirketi Servis yönelimli bir artırılmış gerçeklik altyapısı sistemi
US20200410764A1 (en) * 2019-06-28 2020-12-31 Snap Inc. Real-time augmented-reality costuming
CN110673767A (zh) * 2019-08-19 2020-01-10 杨少波 一种信息显示方法及装置
KR20210057525A (ko) * 2019-11-12 2021-05-21 삼성전자주식회사 디스플레이의 속성을 변경하기 위한 전자 장치 및 전자 장치에서의 동작 방법
CN111104927B (zh) * 2019-12-31 2024-03-22 维沃移动通信有限公司 一种目标人物的信息获取方法及电子设备
CN111178305A (zh) * 2019-12-31 2020-05-19 维沃移动通信有限公司 信息显示方法及头戴式电子设备
US11410359B2 (en) * 2020-03-05 2022-08-09 Wormhole Labs, Inc. Content and context morphing avatars
CN112069480A (zh) * 2020-08-06 2020-12-11 Oppo广东移动通信有限公司 显示方法、装置、存储介质及可穿戴设备
CN112114667A (zh) * 2020-08-26 2020-12-22 济南浪潮高新科技投资发展有限公司 一种基于双目摄像头和vr设备的ar显示方法及系统
US11720896B1 (en) 2020-12-10 2023-08-08 Wells Fargo Bank, N.A. Apparatuses, computer-implemented methods, and computer program products for proximate financial transactions
KR20240021555A (ko) * 2022-08-10 2024-02-19 삼성전자주식회사 Ar 단말의 장치 타입에 따른 컨텐츠 제공 방법 및 장치
US20240078759A1 (en) * 2022-09-01 2024-03-07 Daekun Kim Character and costume assignment for co-located users
CN117291852A (zh) * 2023-09-07 2023-12-26 上海铱奇科技有限公司 一种基于ar的信息合成方法及系统


Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2801362B2 (ja) * 1990-05-24 1998-09-21 日本電信電話株式会社 個人識別装置
JP2004286447A (ja) * 2003-03-19 2004-10-14 Nippon Telegr & Teleph Corp <Ntt> 道路情報表示システムおよびその方法
US8365147B2 (en) * 2008-02-27 2013-01-29 Accenture Global Services Limited Test script transformation architecture
US9143573B2 (en) * 2008-03-20 2015-09-22 Facebook, Inc. Tag suggestions for images on online social networks
US20100077431A1 (en) * 2008-09-25 2010-03-25 Microsoft Corporation User Interface having Zoom Functionality
US8988432B2 (en) * 2009-11-05 2015-03-24 Microsoft Technology Licensing, Llc Systems and methods for processing an image for target tracking
US20120249797A1 (en) * 2010-02-28 2012-10-04 Osterhout Group, Inc. Head-worn adaptive display
US9894116B2 (en) * 2012-04-12 2018-02-13 Intel Corporation Techniques for augmented social networking
KR101554776B1 (ko) * 2012-05-11 2015-09-22 안강석 피사체의 촬영을 통한 피사체 소스정보의 제공방법 및 이를 위한 서버와 휴대용 단말기
JP2013238991A (ja) * 2012-05-14 2013-11-28 Sony Corp 情報処理装置、情報処理方法及びプログラム
JP2014035642A (ja) * 2012-08-08 2014-02-24 Canon Inc 表示装置及びその制御方法、表示システム、プログラム
US10209946B2 (en) * 2012-08-23 2019-02-19 Red Hat, Inc. Augmented reality personal identification
US9338622B2 (en) * 2012-10-04 2016-05-10 Bernt Erik Bjontegard Contextually intelligent communication systems and processes
CN102981761A (zh) * 2012-11-13 2013-03-20 广义天下文化传播(北京)有限公司 用于移动终端应用程序的触发式交互方法
CN103970804B (zh) * 2013-02-06 2018-10-30 腾讯科技(深圳)有限公司 一种信息查询方法及装置
CN103294779A (zh) * 2013-05-13 2013-09-11 北京百度网讯科技有限公司 对象信息获取方法及设备
KR102098058B1 (ko) * 2013-06-07 2020-04-07 삼성전자 주식회사 뷰 모드에서 정보 제공 방법 및 장치
CN103577516A (zh) * 2013-07-01 2014-02-12 北京百纳威尔科技有限公司 内容显示方法和装置
JP2015146550A (ja) * 2014-02-04 2015-08-13 ソニー株式会社 情報処理装置、情報処理方法、及びプログラム
JP2015162012A (ja) * 2014-02-26 2015-09-07 沖電気工業株式会社 顔照合装置及び顔照合方法並びにプログラム
US20150278214A1 (en) * 2014-04-01 2015-10-01 Tableau Software, Inc. Systems and Methods for Ranking Data Visualizations Using Different Data Fields
JP6203116B2 (ja) * 2014-05-20 2017-09-27 ヤフー株式会社 公証提供装置、公証提供方法及びプログラム
KR101629553B1 (ko) * 2014-05-29 2016-06-14 주식회사 듀얼어퍼처인터네셔널 이동 단말기에서 디스플레이 화면 제어 장치 및 그 방법
US9576058B2 (en) * 2014-09-16 2017-02-21 Facebook, Inc. Determining accuracies with which types of user identifying information identify online system users
US20160196584A1 (en) * 2015-01-06 2016-07-07 Facebook, Inc. Techniques for context sensitive overlays
US20160260064A1 (en) * 2015-03-03 2016-09-08 Gradberry Inc. Systems and methods for a career and courses portal
US9760790B2 (en) * 2015-05-12 2017-09-12 Microsoft Technology Licensing, Llc Context-aware display of objects in mixed environments
CN105354334B (zh) * 2015-11-27 2019-04-26 广州视源电子科技股份有限公司 一种基于智能镜子的信息发布方法和智能镜子
CN105338117B (zh) * 2015-11-27 2018-05-29 亮风台(上海)信息科技有限公司 用于生成ar应用和呈现ar实例的方法、设备与系统
CN107239725B (zh) * 2016-03-29 2020-10-16 阿里巴巴集团控股有限公司 一种信息展示方法、装置及系统
US10212157B2 (en) * 2016-11-16 2019-02-19 Bank Of America Corporation Facilitating digital data transfers using augmented reality display devices
KR20190093624A (ko) * 2016-12-06 2019-08-09 돈 엠. 구룰 연대기-기반 검색 엔진을 위한 시스템 및 방법
US10109096B2 (en) * 2016-12-08 2018-10-23 Bank Of America Corporation Facilitating dynamic across-network location determination using augmented reality display devices
US10650597B2 (en) * 2018-02-06 2020-05-12 Servicenow, Inc. Augmented reality assistant
US10521685B2 (en) * 2018-05-29 2019-12-31 International Business Machines Corporation Augmented reality marker de-duplication and instantiation using marker creation information
US11170035B2 (en) * 2019-03-29 2021-11-09 Snap Inc. Context based media curation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567617A (zh) * 2010-10-19 2012-07-11 株式会社泛泰 用于提供增强现实信息的装置和方法
CN103412953A (zh) * 2013-08-30 2013-11-27 苏州跨界软件科技有限公司 基于增强现实的社交方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3438849A4 *

Also Published As

Publication number Publication date
KR102293008B1 (ko) 2021-08-27
RU2735617C2 (ru) 2020-11-05
PH12018502093A1 (en) 2019-07-24
CN107239725A (zh) 2017-10-10
US11036991B2 (en) 2021-06-15
TW201734712A (zh) 2017-10-01
BR112018069970A2 (pt) 2019-01-29
MY189680A (en) 2022-02-25
SG11201808351QA (en) 2018-10-30
RU2018137829A (ru) 2020-04-29
EP3438849A4 (en) 2019-10-09
CA3019224A1 (en) 2017-10-05
JP2019516175A (ja) 2019-06-13
CA3019224C (en) 2021-06-01
CN107239725B (zh) 2020-10-16
US20190026559A1 (en) 2019-01-24
TWI700612B (zh) 2020-08-01
JP6935421B2 (ja) 2021-09-15
RU2018137829A3 (zh) 2020-06-17
AU2017243515C1 (en) 2021-07-22
AU2017243515B2 (en) 2021-01-28
US20200285853A1 (en) 2020-09-10
EP3438849A1 (en) 2019-02-06
MX2018011850A (es) 2019-08-05
KR20180124126A (ko) 2018-11-20
AU2017243515A1 (en) 2018-10-18
US10691946B2 (en) 2020-06-23

Similar Documents

Publication Publication Date Title
WO2017167060A1 (zh) 一种信息展示方法、装置及系统
JP7091504B2 (ja) 顔認識アプリケーションにおけるフォールスポジティブの最小化のための方法および装置
US10900772B2 (en) Apparatus and methods for facial recognition and video analytics to identify individuals in contextual video streams
US9965494B2 (en) Sharing photos
CN103970804A (zh) 一种信息查询方法及装置
Lai et al. Design and implementation of an online social network with face recognition
CN115240238A (zh) 资源处理方法、装置、计算机设备和存储介质

Legal Events

Code  Title  Details

WWE   Wipo information: entry into national phase; Ref document number: 11201808351Q; Country of ref document: SG
ENP   Entry into the national phase; Ref document number: 3019224; Country of ref document: CA
WWE   Wipo information: entry into national phase; Ref document number: MX/A/2018/011850; Country of ref document: MX
ENP   Entry into the national phase; Ref document number: 2018551862; Country of ref document: JP; Kind code of ref document: A
NENP  Non-entry into the national phase; Ref country code: DE
REG   Reference to national code; Ref country code: BR; Ref legal event code: B01A; Ref document number: 112018069970
ENP   Entry into the national phase; Ref document number: 2017243515; Country of ref document: AU; Date of ref document: 20170320; Kind code of ref document: A
ENP   Entry into the national phase; Ref document number: 20187031087; Country of ref document: KR; Kind code of ref document: A
WWE   Wipo information: entry into national phase; Ref document number: 2017773090; Country of ref document: EP
ENP   Entry into the national phase; Ref document number: 2017773090; Country of ref document: EP; Effective date: 20181029
121   Ep: the epo has been informed by wipo that ep was designated in this application; Ref document number: 17773090; Country of ref document: EP; Kind code of ref document: A1
ENP   Entry into the national phase; Ref document number: 112018069970; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20180928