CN110555171B - Information processing method, device, storage medium and system - Google Patents

Information processing method, device, storage medium and system

Info

Publication number
CN110555171B
CN110555171B (application CN201810272957.XA; published as CN110555171A)
Authority
CN
China
Prior art keywords: information, image, user identifier, display, control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810272957.XA
Other languages
Chinese (zh)
Other versions
CN110555171A (en)
Inventor
廖戈语
钟庆华
卢锟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201810272957.XA
Publication of CN110555171A
Application granted
Publication of CN110555171B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Marketing (AREA)
  • Artificial Intelligence (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the invention disclose an information processing method, apparatus, storage medium and system. In these embodiments, an image to be recognized is acquired and target face information is extracted from it; the target face information is sent to a server, and a first user identifier obtained by the server by matching against the target face information is received together with tag information associated with that identifier; the tag information is parsed; and the parsed tag information is displayed on the image to be recognized. This greatly improves the convenience of user operation and the flexibility and diversity of information processing.

Description

Information processing method, device, storage medium and system
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an information processing method, an information processing device, a storage medium, and a system.
Background
With the continuing spread of terminals, users rely on them increasingly, and a wide variety of applications can be installed on a terminal. Among these, instant messaging applications are widely used: through them a user can communicate and interact with friends, for example by viewing a friend's "friend impressions".
In the prior art, a terminal generally displays a friend's impression tags in that friend's impression display interface. An impression tag is a keyword evaluation of the friend made by other users. By viewing a friend's impressions, a user can quickly form an initial understanding of that friend, or complete an entertaining interaction by adding impression tags to the friend.
During research and practice on the prior art, the inventors found that viewing and adding friend impressions depends on an in-network relationship chain: impressions of a person can be viewed or added only after becoming online friends with that person. The operation is therefore cumbersome, interaction is heavily restricted, and the mode of information processing is limited.
Disclosure of Invention
The embodiments of the present invention provide an information processing method, apparatus, storage medium and system, which aim to improve the convenience of operation and the flexibility and diversity of information processing.
To solve the above technical problems, the embodiments of the present invention provide the following technical solutions:
An information processing method, comprising:
acquiring an image to be recognized, and extracting target face information from the image to be recognized;
sending the target face information to a server, and receiving a first user identifier obtained by the server by matching against the target face information, together with tag information associated with the first user identifier;
parsing the tag information; and
displaying the parsed tag information on the image to be recognized.
An information processing apparatus comprising:
an extraction unit, configured to acquire an image to be recognized and extract target face information from it;
a transceiver unit, configured to send the target face information to a server and receive a first user identifier obtained by the server by matching against the target face information, together with tag information associated with the first user identifier;
a parsing unit, configured to parse the tag information; and
a display unit, configured to display the parsed tag information on the image to be recognized.
In some embodiments, the extraction unit comprises:
a cropping and determining subunit, configured to acquire an image to be recognized, recognize a target face image in it, crop the target face image, and determine the cropped face image as the target face information; or
an extraction and determining subunit, configured to acquire an image to be recognized, recognize a target face image in it, extract face feature point information from the target face image, and determine the face feature point information as the target face information.
In some embodiments, the information processing apparatus further includes a face information acquisition unit, an acquisition unit, and a binding and sending unit;
the face information acquisition unit is configured to acquire preset face information;
the acquisition unit is configured to acquire a second user identifier associated with the local client; and
the binding and sending unit is configured to bind the preset face information with the second user identifier and send the bound preset face information and second user identifier to a server.
In some embodiments, the information processing apparatus further includes an identifier acquisition unit, a first judging unit, an editing unit, and a first control display unit;
the identifier acquisition unit is configured to acquire the received first user identifier and acquire a second user identifier associated with the local client;
the first judging unit is configured to judge whether the first user identifier is consistent with the second user identifier;
the editing unit is configured to set the displayed tag information to an editable state when the first user identifier is consistent with the second user identifier; and
the first control display unit is configured to display a first control when the first user identifier is inconsistent with the second user identifier, the first control being used to add tag information to the client associated with the first user identifier.
In some embodiments, the information processing apparatus further includes a second judging unit, a second control generation unit, and a third control generation unit;
the second judging unit is configured to judge whether the first user identifier exists on the local client;
the second control generation unit is configured to generate a second control when the first user identifier exists on the local client, the second control being used to send a message to the client associated with the first user identifier; and
the third control generation unit is configured to generate a third control when the first user identifier does not exist on the local client, the third control being used to send a friend-adding request to the client associated with the first user identifier.
A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the information processing method described above.
An information processing system, the system comprising: a terminal and a server;
the terminal includes the information processing apparatus described above;
the server is configured to receive the target face information sent by the terminal, obtain by matching against it a first user identifier corresponding to the target face information and tag information associated with the first user identifier, and send the first user identifier and the associated tag information to the terminal.
In the embodiments of the present invention, an image to be recognized is acquired and target face information is extracted from it; the target face information is sent to a server, and a first user identifier obtained by the server by matching against the target face information, together with tag information associated with that identifier, is received; the tag information is parsed; and the parsed tag information is displayed on the image to be recognized. The scheme can quickly recognize the target face information in an image, automatically acquire the tag information of the user associated with that face information, and display it on the image to be recognized. Compared with existing schemes, which allow information interaction only through an established online relationship, this greatly improves the convenience of user operation and the flexibility and diversity of information processing.
Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a scenario of an information processing system provided by an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an information processing method according to an embodiment of the present invention;
FIG. 3 is another flow chart of an information processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image processing interface provided by an embodiment of the present invention;
FIG. 5 is another schematic diagram of an image processing interface provided by an embodiment of the present invention;
FIG. 6 is another schematic diagram of an image processing interface provided by an embodiment of the present invention;
FIG. 7 is another schematic diagram of an image processing interface provided by an embodiment of the present invention;
FIG. 8 is a timing diagram of an information processing method according to an embodiment of the present invention;
FIG. 9 is another flow chart of an information processing method according to an embodiment of the present invention;
FIG. 10a is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
FIG. 10b is another schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
FIG. 10c is another schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
FIG. 10d is another schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of an information processing system according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the protection scope of the present invention.
The embodiment of the invention provides an information processing method, an information processing device, a storage medium and an information processing system.
Referring to FIG. 1, FIG. 1 is a schematic diagram of a scenario of an information processing system according to an embodiment of the present invention. The system includes a terminal 10 and a server 20, which may be connected through a communication network comprising wireless and wired networks; the wireless network may include one or more of a wireless wide area network, a wireless local area network, a wireless metropolitan area network, and a wireless personal area network. The network includes network entities such as routers and gateways, which are not shown in the figure. The terminal 10 may exchange information with the server 20 via the communication network, for example to download an application (such as an instant messaging application) from the server 20.
The information processing system may include an information processing apparatus. The apparatus may be integrated in a terminal that has a storage unit, is equipped with a microprocessor, and has computing capability, such as a tablet computer, mobile phone, notebook computer, or desktop computer. In FIG. 1 this terminal is the terminal 10, on which applications required by various users may be installed, such as an instant messaging application with an information-interaction function. The terminal 10 may be configured to acquire an image to be recognized, extract target face information from it, send the target face information to the server 20, receive a first user identifier obtained by the server 20 by matching against the target face information together with tag information associated with the first user identifier, parse the tag information, and display the parsed tag information on the image to be recognized.
The information processing system may further include the server 20, which is mainly configured to receive the target face information sent by the terminal 10, obtain by matching against it a first user identifier corresponding to the target face information and tag information associated with the first user identifier, and send both to the terminal 10. The information processing system may further include a memory for storing an information base that contains the associations between user identifiers and face information, between user identifiers and tag information, and so on, so that the server can retrieve the first user identifier and its associated tag information from the memory and send them to the terminal 10.
It should be noted that the scenario of the information processing system shown in FIG. 1 is only an example. The information processing system and scenario described in the embodiments of the present invention are intended to explain the technical solutions more clearly and do not limit them; a person skilled in the art will appreciate that, as information processing systems evolve and new service scenarios emerge, the technical solutions provided herein are equally applicable to similar technical problems.
It will be appreciated that the specific embodiments of the present application involve user-related data such as user information. When the above embodiments are applied in specific products or technologies, user permission or consent must be obtained, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Detailed descriptions are given below.
Embodiment I
This embodiment is described from the viewpoint of an information processing apparatus, which may be integrated in a terminal that has a storage unit, is equipped with a microprocessor, and has computing capability, such as a tablet computer or a mobile phone.
An information processing method includes: acquiring an image to be recognized, and extracting target face information from it; sending the target face information to a server, and receiving a first user identifier obtained by the server by matching against the target face information, together with tag information associated with the first user identifier; parsing the tag information; and displaying the parsed tag information on the image to be recognized.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of an information processing method according to an embodiment of the present invention. The information processing method includes the following steps:
In step 101, an image to be recognized is acquired, and target face information is extracted from the image to be recognized.
The image to be recognized may be a frame from a video stream captured in real time by a camera, or a picture cached or stored on the terminal. Its format may be bitmap (BMP), Joint Photographic Experts Group (JPEG), Graphics Interchange Format (GIF), or the like.
In some embodiments, the image to be recognized may be obtained as follows. A client on the terminal, such as an instant messaging (IM) client, is opened; after the user enters a user identifier and password, the client enters the main display interface corresponding to that identifier, i.e. the first interface displayed after login. The main interface includes a shortcut operation control, which is a shortcut entry for triggering acquisition of the image to be recognized: when a user click on it is detected, the camera component is invoked to capture the image to be recognized, and the captured image is shown on the display screen. Optionally, the capture interface may further include a camera-switching control and an album control. The camera-switching control is a shortcut entry for switching between the front and rear cameras; the user can click it to switch cameras before capturing the image to be recognized. The album control is a shortcut entry for opening the album on the terminal; the user can click it to open the album and select a picture from it as the image to be recognized.
In some embodiments, the step of acquiring an image to be recognized and extracting target face information from it may include:
acquiring the image to be recognized, recognizing a target face image in it, cropping the target face image, and determining the cropped face image as the target face information; or
acquiring the image to be recognized, recognizing a target face image in it, extracting face feature point information from the target face image, and determining the face feature point information as the target face information.
The pattern features contained in a face image are very rich, for example histogram features, color features, template features, structural features, and Haar features (Haar features reflect gray-level changes in an image and are computed as differences between pixel sub-blocks). The image to be recognized can therefore be feature-scanned to locate the target face image in it. Optionally, the target face image may be highlighted on the image to be recognized with a rectangular frame or a circular frame.
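As a concrete illustration of the two extraction options above, the following minimal sketch uses the open-source face_recognition library as a stand-in for the patent's unspecified recognition component; the library choice and the 128-dimensional encoding are assumptions of the sketch, not part of the patent.

```python
# A minimal sketch, assuming the face_recognition library as a stand-in for
# the patent's unspecified face-recognition component.
import face_recognition

def extract_target_face_info(image_path):
    image = face_recognition.load_image_file(image_path)

    # Locate all faces; each location is (top, right, bottom, left).
    locations = face_recognition.face_locations(image)
    if not locations:
        return None
    top, right, bottom, left = locations[0]

    # Option 1: crop the target face image and use it as the target face info.
    cropped_face = image[top:bottom, left:right]

    # Option 2: extract face feature point information instead of raw pixels.
    landmarks = face_recognition.face_landmarks(image, locations)    # eyes, nose, mouth, chin, ...
    encoding = face_recognition.face_encodings(image, locations)[0]  # 128-d feature vector

    return cropped_face, landmarks[0], encoding
```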
In some embodiments, the step of acquiring the image to be recognized and recognizing the target face image in it may include:
(1) analyzing the image to be recognized, and determining the face images in it;
(2) judging whether there are multiple face images;
(3) if there are, generating prompt information;
(4) otherwise, determining the single face image as the target face image.
Because the image to be recognized may contain multiple face images, the face feature information in it is scanned first to determine all face images it contains.
Optionally, it is judged whether there are multiple face images. If there are, step (3) is performed: prompt information is generated to ask the user to select the target face image, and the target face image selected by the user in response to the prompt is received. Specifically, a pop-up prompt may remind the user to pick a target face image, and the user can designate one by tapping a face image in the image to be recognized. If there is only one face image, step (4) is performed and that unique face image is determined as the target face image.
Specifically, after the target face image is determined by the above method, in one embodiment the terminal may crop the target face image and use it directly as the target face information. In another embodiment, the terminal may apply image preprocessing such as gray-scale correction and noise filtering to the target face image and then extract face feature points from the processed image. The face feature points may include geometric descriptions of local constituent points such as the eyes, nose, mouth, and chin.
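The preprocessing step can be sketched as follows, assuming OpenCV as the image library; the patent names the operations (gray-scale correction, noise filtering) but not an implementation, so these particular calls are illustrative.

```python
# A preprocessing sketch, assuming OpenCV; the specific operations chosen
# here (histogram equalization, non-local-means denoising) are examples of
# gray-scale correction and noise filtering, not the patent's method.
import cv2

def preprocess_face(face_bgr):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)         # gray-scale conversion
    corrected = cv2.equalizeHist(gray)                        # simple gray-level correction
    denoised = cv2.fastNlMeansDenoising(corrected, None, 10)  # noise filtering
    return denoised
```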
In step 102, the target face information is sent to a server, and a first user identifier obtained by the server by matching against the target face information, together with tag information associated with the first user identifier, is received.
The first user identifier may include information such as a user name, a client account, an instant messaging account, an International Mobile Equipment Identity (IMEI), and/or a mailbox account.
It should be noted that the server may host the installation package file of a client; for clarity, the installation package of an instant messaging client is taken as an example. The terminal can download this installation package from the server, decompress and install it, and thereby generate the local client installed on the terminal. The information of the local client is provided by the server. When the user opens the local client for the first time, a login interface is shown, where the user can enter a second user identifier (such as an account) and password to log in; a user without a second user identifier can register, and the second user identifier and password are stored on the server. Each user identifier is associated with the interaction information of that account, which may include the user's personal information, tag information, friend information, roaming interaction records, and so on. The tag information corresponds to the "friend impression" feature: through it, users can attach keyword evaluations to friends, and also to themselves. Eventually all the evaluations a user receives are gathered together and shown to other friends as that user's tag information. The user identifier (account) and its associated interaction information are stored on the server.
Based on this, in some embodiments, before the step of acquiring an image to be recognized and extracting target face information from it, the method may further include:
(1) Acquiring preset face information;
(2) Acquiring a second user identifier associated with a local client;
The second user identifier may include information such as a user name, a client account, an instant messaging account, an International Mobile Equipment Identity (IMEI), and/or a mailbox account.
(3) Binding the preset face information with the second user identifier, and sending the bound preset face information and the second user identifier to the server.
Specifically, the user can perform the pre-binding operation by opening the local client on the terminal, selecting a preset image, and extracting face information from it to obtain the preset face information. It should be noted that the preset face information is normally the user's own face information.
The terminal then acquires the second user identifier associated with the local client, such as the account currently logged in on the local client (i.e. the user's own account), binds it with the preset face information, and sends the bound preset face information and second user identifier to the server in one package, so that the server stores the binding relationship between them.
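A minimal client-side sketch of this binding upload is shown below; the endpoint path and JSON field names are invented for illustration, since the patent does not specify a wire protocol.

```python
# A hedged sketch of the binding upload; the /face/bind endpoint and the
# payload field names are assumptions, not the patent's protocol.
import requests

def bind_and_upload(server_url, second_user_id, face_features):
    payload = {
        "user_id": second_user_id,             # second user identifier, e.g. "123456"
        "face_features": list(face_features),  # preset face feature point information
    }
    resp = requests.post(f"{server_url}/face/bind", json=payload, timeout=5)
    return resp.ok  # on success the server stores the binding relationship
```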
After the terminal extracts the target face information, it sends it to the server. Because the binding relationships between face information and user identifiers are stored on the server in advance, upon receiving the target face information the server can obtain, by matching, the first user identifier (such as an account) bound to it and the tag information associated with that identifier, and return both to the terminal.
In one embodiment, besides returning the first user identifier and its associated tag information, the server may also return the profile information associated with the first user identifier to the terminal.
In step 103, the tag information is parsed.
A software development kit (SDK) with augmented reality capability can be integrated in the terminal; the augmented reality control is loaded through this SDK, and the tag information is parsed by the augmented reality control.
Augmented reality (AR) is a technology that seamlessly fuses real-world information with virtual-world information: virtual information is simulated by computer and other technologies, superimposed on the real world, and perceived by the human senses, producing a sensory experience beyond reality. The real environment and virtual objects are superimposed in the same picture or space in real time and coexist there.
In some embodiments, the step of parsing the tag information may include:
(1) analyzing the tag information, and determining the number of tags and the style configuration information corresponding to each tag;
(2) parsing the number of tags and the per-tag style configuration information through the augmented reality control, and determining the display position information corresponding to each tag.
The tag information is analyzed to determine the number of tags it contains and the style configuration information of each tag; the style configuration information consists of the parameters used when a tag is displayed, such as its display size, font, and display style.
In some embodiments, the step of analyzing the tag information and determining the number of tags and their style configuration information may include: analyzing the tag information to obtain the number of tags it contains, together with the display style parameter, display color parameter, display size parameter, display font parameter, and so on of each tag. The number of tags is the total number of keyword evaluations that other users have given the client corresponding to the first user identifier; for example, the first user identifier may have three tags, "gentle", "lovely", and "beautiful". The display style parameter is the display style of a tag, i.e. the pattern style of the tag frame; the display color parameter is the display color of the tag frame; the display size parameter is the display size of the tag frame; and the display font parameter is the typeface used for the text inside the tag frame.
Because the tags are eventually displayed on the image to be recognized in augmented-reality form, their display positions must be arranged. Accordingly, the augmented reality control parses the number of tags and the display style, display color, display size, and display font parameters of each tag, and determines the display position information of each tag from the result. The more tags there are, the denser their display positions; the fewer tags, the looser the layout.
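One way to realize this parse-and-layout step is sketched below, assuming the tag information arrives as a plain dictionary and spreading the tags evenly on a circle around the face box, so that more tags automatically yields a denser arrangement; neither the field names nor the spacing rule is mandated by the patent.

```python
# A layout sketch: tags are spaced evenly on a circle around the face box,
# so the angular gap shrinks as the number of tags grows. Field names and
# the spacing rule are assumptions.
import math

def layout_tags(tag_info, face_box):
    """face_box = (left, top, right, bottom) of the target face image."""
    left, top, right, bottom = face_box
    cx, cy = (left + right) / 2, (top + bottom) / 2
    radius = right - left  # keep tags just outside the face

    positions = []
    n = max(len(tag_info["tags"]), 1)
    for i, tag in enumerate(tag_info["tags"]):
        angle = 2 * math.pi * i / n  # more tags -> smaller angular gap
        positions.append((tag, (cx + radius * math.cos(angle),
                                cy + radius * math.sin(angle))))
    return positions
```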
In step 104, the parsed tag information is displayed on the image to be recognized.
Through the augmented reality control, the terminal displays the parsed tag information on the image to be recognized in augmented-reality form: first the layering of the image to be recognized is analyzed and the display layer corresponding to the face image is determined; then the parsed tag information is floated, in augmented-reality form, on the display layer corresponding to the face image. In one embodiment, the terminal may also present the parsed tag information on the image to be recognized through virtual reality (VR) or other display forms.
In some embodiments, the step of displaying the parsed tag information on the image to be recognized may include:
initializing and loading each tag according to its display style parameter (such as the pattern style of the tag frame), display color parameter (such as the display color of the tag frame), display size parameter (such as the display size of the tag frame), and display font parameter (such as the typeface used inside the tag frame) to obtain the target tags; and displaying the target tags on the image to be recognized according to the display position information.
That is, each tag is initialized and loaded according to the pattern style of its frame, the display color of the frame, the display size of the frame, and the typeface used inside the frame, yielding the target tags corresponding to the first user identifier.
The target tags are then floated on the image to be recognized, in augmented-reality form, according to the display position information.
In one embodiment, besides displaying the target tags on the image to be recognized in augmented-reality form according to the display position information, the terminal may display the personal information together with them, also in tag form, on the image to be recognized.
Optionally, the target tags and the personal information are preferentially displayed around the target face image.
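The augmented reality control itself is not disclosed in detail; as a stand-in, the following 2-D compositing sketch with Pillow shows the same initialize-then-draw flow, with the style parameters hard-coded for brevity (all of them illustrative assumptions).

```python
# A 2-D compositing stand-in for the AR overlay: each tag is initialized
# from illustrative style parameters and drawn at its computed position.
# Requires Pillow >= 8.2 for rounded_rectangle.
from PIL import Image, ImageDraw

def render_tags(image, positioned_tags):
    canvas = image.convert("RGBA")
    draw = ImageDraw.Draw(canvas)
    for tag, (x, y) in positioned_tags:
        w, h = 100, 36  # stand-in for the display size parameter
        draw.rounded_rectangle([x, y, x + w, y + h], radius=8,
                               fill=(30, 120, 200, 200))      # display color parameter
        draw.text((x + 8, y + 8), tag["text"], fill="white")  # display font omitted
    return canvas
```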
In some embodiments, after the step of displaying the parsed tag information on the image to be recognized, the method may further include:
(1) acquiring the received first user identifier, and acquiring the second user identifier associated with the local client;
(2) judging whether the first user identifier is consistent with the second user identifier;
(3) if so, setting the displayed tag information to an editable state;
(4) otherwise, displaying a first control, the first control being used to add tag information to the client associated with the first user identifier.
Once the parsed tag information is displayed on the image to be recognized in augmented-reality form, a user can quickly form an initial understanding of the first user through the displayed tags, which improves the efficiency of user interaction.
The terminal then acquires the received first user identifier and the second user identifier associated with (logged in on) the local client, and judges whether they are consistent. If they are consistent, the current user is viewing the tag information of his or her own account, and step (3) is performed: the displayed tag information is set to an editable state, i.e. the user may delete tags or modify the display style, display color, display size, and display font parameters of the tags.
Conversely, if the first user identifier is inconsistent with the second user identifier, the current user is viewing the tag information of another user's account. Step (4) is then performed and a first control is displayed at the bottom of the image to be recognized; the first control may show a caption such as "Add tag". When the user taps the first control, tag selection options pop up, and when the user selects a tag, the selected tag is added to the client associated with the first user identifier.
In some embodiments, after the first control is displayed, the method may further include:
(1.1) judging whether the first user identifier exists on the local client;
(1.2) if it does, generating a second control, the second control being used to send a message to the client associated with the first user identifier;
(1.3) if it does not, generating a third control, the third control being used to send a friend-adding request to the client associated with the first user identifier.
If the first user identifier exists on the local client, the first user identifier and the second user identifier associated with the local client are in a friend relationship; step (1.2) is performed, a second control is generated, and the user can tap it to send a message to the client corresponding to the first user identifier. If the first user identifier does not exist on the local client, the two identifiers are not in a friend relationship; step (1.3) is performed, a third control is generated, and the user can tap it to send a friend-adding request to the client corresponding to the first user identifier.
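The branching among the editable state and the first, second, and third controls can be summarized in a small decision function; this is a sketch, and the returned names are placeholders for whatever UI widgets the client actually shows.

```python
# A sketch of the control-selection logic described above; the returned
# strings are placeholder names, not real widget identifiers.
def choose_controls(first_user_id, second_user_id, friend_ids):
    if first_user_id == second_user_id:
        return ["editable_tags"]             # viewing one's own tag information
    controls = ["first_control_add_tag"]     # first control: add a tag
    if first_user_id in friend_ids:
        controls.append("second_control_send_message")  # already friends
    else:
        controls.append("third_control_add_friend")     # not yet friends
    return controls
```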
As can be seen from the above, in the embodiment of the present invention an image to be recognized is acquired and target face information is extracted from it; the target face information is sent to a server, and a first user identifier obtained by the server by matching against the target face information, together with tag information associated with that identifier, is received; the tag information is parsed; and the parsed tag information is displayed on the image to be recognized. The scheme can quickly recognize the target face information in an image, automatically acquire the tag information of the user associated with that face information, and display it on the image to be recognized. Compared with existing schemes, which allow information interaction only through an established online relationship, this greatly improves the convenience of user operation and the flexibility and diversity of information processing.
Embodiment II
The method described in Embodiment I is described in further detail below by way of example.
In this embodiment, the client is taken to be an instant messaging client, and the information processing apparatus is integrated in a terminal.
Referring to FIG. 3, FIG. 3 is another schematic flowchart of the information processing method according to an embodiment of the present invention. The method flow may include:
In step 201, the terminal acquires preset face information, acquires a second user identifier associated with the local client, binds the preset face information with the second user identifier, and sends the bound preset face information and second user identifier to the server.
It should be noted that after the terminal downloads the installation package of the instant messaging client and decompresses and installs it, an instant messaging client is generated on the terminal; the instant messaging client installed on the terminal is also called the local client. An account can be logged in on the local client, and the server stores the interaction information corresponding to that account, which may include the user's personal information, tag information, friend information, roaming interaction records, and so on. The tag information corresponds to the "friend impression" feature: through it, users can attach keyword evaluations to friends, and also to themselves. Eventually all the evaluations a user receives are gathered together and shown to other friends as that user's tag information.
The user can operate the terminal to open the local client and invoke the camera component to collect the preset face information, which may be the user's own face information. After the preset face information is collected, the terminal acquires the account information logged in on the local client, extracts the second user identifier (such as an account) from it, binds the second user identifier with the preset face information, and sends them to the server.
For example, as shown in FIG. 4, a user opens the local client through the terminal 10 and invokes the camera component to capture the current image; the user may switch between the front and rear cameras by tapping the camera-switching control 11. The terminal 10 feature-scans the current image, determines the preset face image in it, and highlights it with the rectangular frame 12. It then applies image preprocessing such as gray-scale correction and noise filtering to the preset face image, extracts face feature point information from the preprocessed image, and determines that information as the preset face information. The terminal 10 acquires the account logged in on the local client, such as the second user identifier "123456", binds it with the preset face information, and sends the bound pair to the server. This completes the association of the user's account with the preset face information.
It can be appreciated that, after receiving the second user identifier "123456" and the preset face information, the server stores the binding relationship between them.
In step 202, the terminal acquires an image to be recognized, recognizes a target face image in it, extracts face feature point information from the target face image, and determines the face feature point information as the target face information.
The user can operate the terminal to open the local client and invoke the camera component to capture the image to be recognized. The terminal feature-scans the image to be recognized, recognizes the target face image in it, applies image preprocessing such as gray-scale correction and noise filtering to the target face image, extracts face feature point information from the preprocessed image, and determines that information as the target face information.
For example, as shown in FIG. 4, a user opens the local client through the terminal 10 and invokes the camera component to capture the current image to be recognized, optionally switching between the front and rear cameras by tapping the camera-switching control 11. In this way the user can capture an image of an acquaintance or a stranger through the camera component of the terminal 10. The terminal 10 feature-scans the current image to be recognized, determines the target face image in it, and highlights it with the rectangular frame 12. It then applies image preprocessing such as gray-scale correction and noise filtering to the target face image, extracts face feature point information from the preprocessed image, and determines that information as the target face information.
In step 203, the terminal transmits the target face information to the server.
The terminal may send the server a matching request instruction that carries the target face information, so that the server obtains, by matching against the target face information, a first user identifier (such as an account) and the tag information associated with it, and returns both to the terminal.
In step 204, after receiving the target face information, the server obtains by matching the first user identifier and the tag information associated with it, and sends both to the terminal.
After receiving the target face information, the server performs feature-point similarity matching between the face feature point information in the target face information and the face feature point information it has stored, and finds the stored face feature point information whose similarity to the target exceeds a threshold. From the binding relationship it then obtains the first user identifier (account) "1234567" bound to that face feature point information and the tag information associated with it, where the tag information may include the number of tags and the display style, display color, display size, and display font parameters of each tag, and sends the first user identifier and its associated tag information to the terminal. Optionally, the server may also send the profile information corresponding to the first user identifier to the terminal.
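A server-side sketch of this matching step follows, using Euclidean distance over stored 128-dimensional feature vectors, so that a distance below a threshold plays the role of a similarity above one; the threshold value and the storage layout are assumptions.

```python
# A matching sketch over stored feature vectors. Distance below THRESHOLD
# stands in for "similarity exceeding a threshold"; both the value 0.6 and
# the in-memory FACE_DB layout are illustrative assumptions.
import numpy as np

FACE_DB = {}     # user_id -> (feature_vector, tag_info), filled by the bind step
THRESHOLD = 0.6  # illustrative value

def match_face(target_features):
    target = np.asarray(target_features)
    best_id, best_dist = None, float("inf")
    for user_id, (features, _tags) in FACE_DB.items():
        dist = float(np.linalg.norm(target - np.asarray(features)))
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    if best_id is not None and best_dist < THRESHOLD:
        return best_id, FACE_DB[best_id][1]  # first user identifier + its tag info
    return None  # no registered face matched
```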
In step 205, after receiving the first user identifier and its associated tag information from the server, the terminal analyzes the tag information to obtain the number of tags it contains, together with the display style, display color, display size, and display font parameters of each tag.
That is, after receiving the first user identifier and its associated tag information, the terminal analyzes the received tag information to obtain the number of tags and the display style parameter (the display style of a tag, such as the pattern style of the tag frame), the display color parameter (such as the display color of the tag frame), the display size parameter (such as the display size of the tag frame), and the display font parameter (such as the typeface used inside the tag frame) of each tag. In one embodiment, the tag information may be stored in table form, as shown in Table 1.
Table 1:
Number of tags: 3
Tag contents: "goddess", "loving sister", "how to get up"
Display style parameter: "cool sea wind" series
Display color parameter: blue
Display size parameter: (10, 5)
Display font parameter: regular script
As Table 1 shows, the number of tags is 3, the tag contents are "goddess", "loving sister", and "how to get up", the display style parameter is the "cool sea wind" series, the display color parameter is "blue", the display size parameter is "(10, 5)" (where 10 is the length of the tag and 5 its width), and the display font parameter is "regular script". This example does not limit the invention; the tag information may take other formats.
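For illustration, the Table 1 contents could travel as a structured payload like the one below; the JSON-like shape is an assumption for the sketch, not the patent's actual format.

```python
# The Table 1 contents expressed as a structured payload (assumed shape).
tag_info = {
    "tag_count": 3,
    "tags": [
        {"text": "goddess"},
        {"text": "loving sister"},
        {"text": "how to get up"},
    ],
    "display_style": "cool sea wind",  # display style parameter
    "display_color": "blue",           # display color parameter
    "display_size": [10, 5],           # 10 = tag length, 5 = tag width
    "display_font": "regular script",  # display font parameter
}
```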
In step 206, the terminal parses, through the augmented reality control, the number of tags and the display style, display color, display size, and display font parameters of each tag, and determines the display position information of each tag from the result.
The terminal loads the augmented reality control through a software development kit with augmented-reality capability. Through this control it parses the number of tags and the display style, display color, display size, and display font parameters of each tag, pre-computes the arrangement of the tags from the result, and determines the display position information of each tag, which indicates where the tag is displayed on the image to be recognized.
In step 207, the terminal initializes and loads each tag according to its display style, display color, display size, and display font parameters to obtain the target tags.
In step 208, the terminal displays the target tags on the image to be recognized according to the display position information.
That is, the terminal initializes and loads the tags according to the display style parameter "cool sea wind", the display color parameter "blue", the display size parameter "(10, 5)", and the display font parameter "regular script" to obtain the target tags.
The target tags are then displayed on the image to be recognized in one-to-one correspondence with the display position information.
For example, as shown in FIG. 5, the terminal 10 initializes and loads the tags according to the display style parameter "cool sea wind", the display color parameter "blue", the display size parameter "(10, 5)", and the display font parameter "regular script" to obtain the target tags 13, and then displays the target tags 13 on the image to be recognized in one-to-one correspondence with the display position information. Optionally, the personal information 15 corresponding to the first user identifier "1234567" (e.g. "praise, & Gemini", "insomnia and vexation" …) may also be displayed at the upper left corner of the face image.
In step 209, the terminal obtains the received first user identifier and obtains the second user identifier associated with the local client.
In step 210, the terminal determines whether the first user identifier is consistent with the second user identifier.
If the first user identifier is consistent with the second user identifier associated with the local client, the target face image in the image to be recognized is the face of the terminal's own user, and step 211 is performed. If they are inconsistent, the target face image is not the terminal user's own face but possibly that of a friend or a stranger, and step 212 is performed.
For example, the terminal obtains the received first user identifier "1234567" and the second user identifier "123456" associated with the local client, determines that they are inconsistent, and performs step 212.
In step 211, the displayed tag information is set to an editable state.
If the first user identifier is consistent with the second user identifier associated with the local client, the user is viewing the tag information and personal information of his or her own account. The displayed tag information and personal information can be set to an editable state, allowing the user to delete tags or modify their display style, display color, display size, and display font parameters.
In step 212, a first control is displayed.
If the first user identifier is inconsistent with the second user identifier associated with the local client, a first control can be displayed; the first control is used to add tag information to the client associated with the first user identifier.
For example, as shown in FIG. 5 and FIG. 6, when the terminal 10 in FIG. 5 determines that the first user identifier "1234567" is inconsistent with the second user identifier "123456", it may generate the first control 14 "Tag". When the user taps the first control 14, as shown in FIG. 6, the terminal 10 generates the tag selection control 16 and may randomly present hot tags such as "meimeimei", "2333", and "nühanzi" (女汉子, "tough girl") for the user to choose from; the user may also tap a "custom" tag and enter a custom tag keyword. When the user taps the hot tag "nühanzi", the terminal 10 obtains that hot tag and the target user identifier (the first user identifier "1234567"), and sends them, together with the second user identifier "123456" associated with the local client, to the server.
The server then locates the storage position of the first user identifier "1234567" in its stored data, adds the tag "nühanzi" to the tag information associated with "1234567", and sends first prompt information to the client corresponding to the first user identifier to notify that user that tag information has been added. It also sends second prompt information to the local client to indicate that the tag was added successfully, and the newly added tag "nühanzi" is displayed on the image to be recognized.
Optionally, after the tag is added successfully, the terminal 10 may determine whether the first user identifier "1234567" exists on the local client. When it does, this indicates that the first user identifier "1234567" is a client friend of the second user identifier "123456", and a second control may be generated, where the second control is used to send a message to the client associated with the first user identifier "1234567". When the user clicks the second control, the terminal 10 automatically jumps to the dialog box with the first user identifier "1234567".
As shown in fig. 7, when the terminal 10 determines that the first user identifier "1234567" does not exist on the local client, which means that the first user identifier "1234567" is not a client friend of the second user identifier "123456", a third control 17 may be generated, where the third control is used to send a request for adding friends to the client associated with the first user identifier "1234567".
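A minimal sketch of the branching in steps 209 through 212 and figs. 5 to 7 follows; it collapses the first/second/third-control choices into one decision function, and all names are illustrative rather than taken from the patent.

```python
def resolve_interaction(first_user_id, second_user_id, local_friend_ids):
    """Decide which UI element to present once the tags are displayed."""
    if first_user_id == second_user_id:
        return "editable_tags"           # step 211: viewing one's own tags
    if first_user_id in local_friend_ids:
        return "second_control_message"  # friend: offer to send a message
    return "third_control_add_friend"    # stranger: offer a friend request

# Viewer "123456" looking at "1234567", who is not yet a friend:
print(resolve_interaction("1234567", "123456", {"7654321"}))
```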
According to this method, when the user of the local client encounters the first user in a real-world scene, the user can open the local client, use the face information scanning function, capture an image of the first user through the camera component, and extract the first user's face information. That face information is used to obtain the first user identifier and the first user's tag information from the server, and the obtained tag information is displayed on top of the image to be recognized, enabling interesting interaction. Compared with the existing scheme in which information interaction can only proceed along an established network relationship, this greatly improves the convenience of user operation and the flexibility and diversity of information processing.
Embodiment Three,
The method described in embodiment two is described in further detail below by way of example.
Referring to fig. 8, fig. 8 is a timing diagram of an information processing method according to an embodiment of the invention. The method flow may include:
In step S1, the terminal acquires preset face information, acquires a second user identifier associated with the local client, and binds the preset face information with the second user identifier.
For example, the user opens the local client through the terminal and invokes the camera component to collect the current image; the terminal performs feature scanning on the current image to determine the preset face image in it. Further, image preprocessing such as gray correction and noise filtering is performed on the preset face image, face characteristic point information is extracted from the preprocessed image, and that characteristic point information is determined to be the preset face information. The terminal then obtains the account information logged in on the local client, such as the second user identifier (account) "123456", and binds it with the preset face information.
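As a sketch of this preprocessing-and-extraction step (which step S4 below repeats for the image to be identified), the following uses OpenCV as a stand-in; the patent names no library, so the Haar-cascade detector and the returned face crop are illustrative assumptions.

```python
import cv2  # assumption: OpenCV as a stand-in; the patent names no library

def extract_face_info(image_path):
    """Detect a face, apply gray correction and noise filtering, and
    return a face crop standing in for the 'face characteristic point
    information' of steps S1/S4."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)          # gray correction
    gray = cv2.fastNlMeansDenoising(gray)  # noise filtering
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray[y:y + h, x:x + w]
```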
In step S2, the terminal sends the bound preset face information and the second user identifier to the server.
For example, the terminal sends the bound second user identifier (account) "123456" together with the preset face information to the server. This completes the association between the user's account and the preset face information.
In step S3, the server stores the bound preset face information and the second user identifier.
For example, after receiving the second user identifier (account) "123456" and the preset face information, the server stores the binding relationship in its storage space.
In step S4, the terminal acquires an image to be identified, identifies a target face image in the image to be identified, extracts face feature point information in the target face image, and determines the face feature point information as target face information.
The user opens the local client through the terminal and invokes the camera component to capture the current image to be identified; in this way, the user can capture an image of an acquaintance or a stranger through the camera component on the terminal. The terminal performs feature scanning on the image to determine the target face image in it. Further, image preprocessing such as gray correction and noise filtering is performed on the target face image, face characteristic point information is extracted from the preprocessed image, and that characteristic point information is determined to be the target face information.
In step S5, the terminal transmits the target face information to the server.
In step S6, the server obtains a first user identifier and tag information associated with the first user identifier according to the matching of the target face information.
For example, after receiving the target face information, the server performs feature point similarity matching between the face characteristic point information in the target face information and the face characteristic point information stored on the server, and determines the stored face characteristic point information whose similarity value with the target exceeds a threshold. According to the binding relationship, it then obtains the first user identifier (such as an account) "1234567" bound to that face characteristic point information, together with the tag information associated with the first user identifier "1234567", where the tag information may include the number of tags and the display style parameter, display color parameter, display size parameter, and display font parameter corresponding to each tag.
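For illustration, a toy version of this threshold-based matching is sketched below; the cosine similarity measure, the threshold value, and the in-memory binding store are all assumptions, since the patent specifies only that a similarity value must exceed a threshold.

```python
import math

bindings = {"1234567": [0.11, 0.52, 0.83]}  # first user id -> stored features
SIMILARITY_THRESHOLD = 0.9                  # assumed value

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def match_user(target_features):
    """Return the first user identifier whose stored feature points are
    most similar to the target, provided the similarity exceeds the
    threshold; otherwise return None."""
    best_id, best_sim = None, 0.0
    for user_id, stored in bindings.items():
        sim = cosine_similarity(target_features, stored)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id if best_sim >= SIMILARITY_THRESHOLD else None

print(match_user([0.10, 0.50, 0.85]))  # -> "1234567"
```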
In step S7, the server transmits the first user identification and tag information associated with the first user identification to the terminal.
For example, the server transmits the first user identifier "1234567" and the tag information associated with it to the terminal.
In step S8, the terminal parses the tag information, and displays the parsed tag information on the image to be recognized.
For example, the terminal first analyzes the layer structure of the image to be recognized and determines the display layer corresponding to the face image. The parsed tag information is then floated on that display layer in augmented reality form.
Embodiment Four,
The method described in Embodiment Three will be described in further detail below by way of example.
Referring to fig. 9, fig. 9 is another flow chart of the information processing method according to the embodiment of the invention.
It should be noted that the server in this method flow may be a server cluster formed by a plurality of servers, and the cluster may include a face recognition server and a tag server. The face recognition server and the tag server may be communicatively connected to each other, and each may be communicatively connected to the terminal; these connections may be established over a wired or wireless network.
The face recognition server is mainly used for recognizing target face information. For example, a face database is stored in the face recognition server for storing the correspondence between first user identifiers and face information.
The tag server is mainly used for acquiring tag information of the first user identification. For example, a tag database is stored in the tag server, and the tag database is used for storing a correspondence between the first user identifier and the tag information.
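The split between the two servers can be pictured with the toy wiring below; the classes, the dict-based "databases", and the elided networking are illustrative assumptions only.

```python
class FaceRecognitionServer:
    def __init__(self, face_db):
        self.face_db = face_db  # first user id -> face feature vector

    def identify(self, target_features):
        # Placeholder for the threshold-based similarity match above.
        for user_id, stored in self.face_db.items():
            if stored == target_features:
                return user_id
        return None

class TagServer:
    def __init__(self, tag_db):
        self.tag_db = tag_db  # first user id -> tag information

    def tags_for(self, user_id):
        return self.tag_db.get(user_id, [])

face_server = FaceRecognitionServer({"1234567": [0.1, 0.5, 0.9]})
tag_server = TagServer({"1234567": ["nühanzi"]})
uid = face_server.identify([0.1, 0.5, 0.9])
print(uid, tag_server.tags_for(uid))  # 1234567 ['nühanzi']
```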
Specifically, the method flow may include:
In step S101, the client acquires the image to be identified through the camera and extracts target face information from the image to be identified.
The client invokes the camera component to collect the image to be recognized, identifies the target face image in the image to be recognized, extracts the face characteristic point information in the target face image, and determines that characteristic point information to be the target face information.
In step S102, the client detects whether the target face information meets a preset recognition condition.
The client detects whether the face characteristic point information in the target face information contains a preset number of face feature points. When it does, the client determines that the detected target face information meets the preset recognition condition, and step S103 is executed. When it does not, the client determines that the target face information does not meet the preset recognition condition, and the flow returns to step S101.
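A minimal sketch of this check follows, assuming the "preset number" corresponds to a 68-point landmark model; the patent does not fix the number.

```python
MIN_FEATURE_POINTS = 68  # assumed preset number of face feature points

def meets_recognition_condition(feature_points):
    """Step S102: proceed to S103 only when enough feature points were
    extracted; otherwise the flow returns to capture (S101)."""
    return len(feature_points) >= MIN_FEATURE_POINTS

print(meets_recognition_condition(list(range(68))))  # True
```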
In step S103, the client detects whether there is cache information of the target face information.
When the client detects that the target face information meets the preset recognition condition, it detects whether cache information corresponding to the target face information exists locally on the terminal, where the cache information may include the association between the target face information and the corresponding first user identifier. When such cache information exists, step S104 is executed. When it does not, step S105 is executed.
In step S104, the client sends the first user identification to the face recognition server.
When the client detects cache information for the target face information, it can directly send the first user identifier recorded in the cache to the face recognition server. The face recognition server thus obtains the first user identifier directly, the matching process is skipped, and server resources are saved.
In step S105, the client transmits the target face information to the face recognition server.
When the client detects that the cache information of the target face information does not exist, the client sends the target face information to the face recognition server, so that the face recognition server can match the corresponding first user identification according to the target face information.
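The cache branch of steps S103 to S105 can be sketched as below; the fingerprint key and the request fields are illustrative assumptions.

```python
face_cache = {}  # fingerprint of target face info -> first user identifier

def build_recognition_request(face_fingerprint, face_info):
    """Send the cached first user identifier when one exists (S104),
    sparing the server a matching pass; otherwise send the raw target
    face information for matching (S105)."""
    if face_fingerprint in face_cache:
        return {"type": "user_id", "user_id": face_cache[face_fingerprint]}
    return {"type": "face_info", "face_info": face_info}
```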
In step S106, the face recognition server matches the face database according to the target face information.
When the face recognition server receives the target face information sent by the client, it matches against the face database according to the face characteristic point information in the target face information, searching for the stored face information that matches the feature points of the target face information.
In step S107, the face recognition server acquires a corresponding first user identification.
The face recognition server obtains the first user identifier corresponding to the matched face information according to the correspondence, stored in the face database, between first user identifiers and face information; alternatively, it directly uses the first user identifier sent by the client.
In step S108, the face recognition server transmits the first user identification to the tag server.
After the face recognition server acquires the first user identification, the face recognition server sends the first user identification to the tag server.
In step S109, the tag server matches the tag database according to the first user identifier, and obtains corresponding tag information.
After receiving the first user identifier, the tag server matches a tag database according to the first user identifier, and the tag server can acquire tag information corresponding to the first user identifier because the tag database stores the corresponding relation between the first user identifier and the tag information.
In step S110, the tag server transmits tag information to the client.
In step S111, the client parses the tag information, and displays the parsed tag information on the image to be recognized.
After receiving the tag information, the client can load the augmented reality control, parse the tag information through the augmented reality control, and display the parsed tag information on the image to be identified in augmented reality form through the augmented reality control.
In step S112, the client detects whether tag information is added.
When the client displays the tag information on the image to be identified, the user can view the tag information and can add tags to it. When the client detects that tag information is being added, step S113 is executed. When no addition is detected, no operation is performed.
In step S113, the client acquires a tag added by the user.
When the client detects that the tag information is added, the client acquires the tag added by the user.
In step S114, the client transmits the tag added by the user to the tag server.
The client sends the label added by the user to the label server so that the label server performs updating operation.
In step S115, the tag server updates the tag into the tag database and notifies the client corresponding to the first user identifier.
After receiving the tag added by the user, the tag server updates it into the tag information corresponding to the first user identifier and simultaneously sends a notification message to the client corresponding to the first user identifier, informing that user that another user has added a tag to his or her client.
In step S116, the tag server obtains updated tag information corresponding to the first user identifier.
After the tag server updates the tag to the tag information corresponding to the first user identifier, the tag server correspondingly acquires the updated tag information corresponding to the first user identifier.
In step S117, the tag server transmits the updated tag information to the client.
The label server sends the updated label information to the client so that the latest label information is synchronously displayed on the client.
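Steps S115 to S117 can be pictured with the sketch below; the dict-based tag database and the notification "outbox" are stand-ins for the tag server's storage and messaging (assumptions, not the patent's implementation).

```python
def add_tag(tag_db, outbox, first_user_id, new_tag):
    """Update the tag database (S115), queue a notification to the
    tagged user's client, and return the updated tag information so the
    tagging client can re-display it (S116-S117)."""
    tags = tag_db.setdefault(first_user_id, [])
    tags.append(new_tag)
    outbox.append((first_user_id, f'the tag "{new_tag}" was added to you'))
    return list(tags)

tag_db, outbox = {"1234567": ["nühanzi"]}, []
print(add_tag(tag_db, outbox, "1234567", "2333"))  # ['nühanzi', '2333']
```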
In step S118, the client parses the updated tag information, and displays the parsed tag information on the image to be recognized.
After receiving the updated tag information, the client can load the augmented reality control and re-parse the updated tag information through it, then display the parsed tag information on the image to be identified in augmented reality form through the augmented reality control, so as to achieve synchronous display.
Embodiment Five,
In order to facilitate better implementation of the information processing method provided by the embodiment of the invention, the embodiment of the invention also provides an apparatus based on the information processing method. Where a term has the same meaning as in the information processing method described above, reference may be made to the description in the method embodiments for specific implementation details.
Referring to fig. 10a, fig. 10a is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention, wherein the information processing apparatus may include an extracting unit 301, a transceiver unit 302, an analyzing unit 303, and a display unit 304.
The extracting unit 301 is configured to obtain an image to be identified, and extract target face information from the image to be identified.
The extracting unit 301 may obtain the image to be identified as follows: a client on the terminal, such as an instant messaging client, is opened, a user identifier and a password are entered, and the client enters the display main interface corresponding to the user identifier, i.e., the first interface displayed after login through the user identifier and password. The main interface includes a shortcut operation control, which is a shortcut entry for triggering acquisition of the image to be identified; when a user click on the shortcut operation control is detected, the camera component is invoked to collect the image to be identified, and the collected image is displayed on the display screen. Optionally, the image to be identified may further include a camera switching control and an album control. The camera switching control is a shortcut entry for switching between the front camera and the rear camera; specifically, the user may click the camera switching control to switch cameras when acquiring the image to be identified. The album control is a shortcut entry for invoking the album on the terminal; specifically, the user may click the album control to open the album and select a picture from it as the image to be identified.
In some embodiments, as shown in fig. 10b, the extraction unit 301 may include:
The interception determining subunit 3011 is configured to obtain an image to be identified, identify a target face image in the image to be identified, intercept the target face image, and determine the target face image as target face information; or
The extraction determining unit 3012 is configured to obtain an image to be identified, identify a target face image in the image to be identified, extract face feature point information in the target face image, and determine the face feature point information as target face information.
The pattern features contained in a face image are very rich, such as histogram features, color features, template features, structural features, Haar features, and the like. The interception determining subunit 3011 or the extraction determining unit 3012 may perform feature scanning on the image to be recognized to determine the target face image in it. Optionally, the target face image may be highlighted on the image to be recognized with a rectangular frame or with a circular frame.
In some embodiments, when acquiring the image to be identified and identifying the target face image in it, the interception determining subunit 3011 or the extraction determining unit 3012 may specifically be used for the following (a short sketch of this selection logic follows the list):
analyzing an image to be identified, and determining a face image on the image to be identified;
judging whether the number of the face images is a plurality of;
when the number of the face images is judged to be a plurality of, generating prompt information, wherein the prompt information is used for prompting a user to select a target face image and receiving the target face image selected by the user according to the prompt information;
when the number of face images is judged not to be plural, the face image is determined as a target face image.
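A minimal sketch of the selection logic just listed, with a stand-in for the prompt step (a real client would wait for the user's choice):

```python
def choose_from_prompt(face_images):
    # Stand-in for the prompt information path; here we simply take the
    # first detected face instead of waiting for user input.
    return face_images[0]

def select_target_face(face_images):
    """Prompt only when several face images are found; otherwise the
    single detected face is the target face image."""
    if not face_images:
        return None
    if len(face_images) > 1:
        return choose_from_prompt(face_images)
    return face_images[0]
```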
In an embodiment, the interception determining subunit 3011 may intercept the target face image and use it as the target face information. In another embodiment, the extraction determining unit 3012 may perform image preprocessing such as gray correction and noise filtering on the target face image and extract face characteristic point information from the processed image. The face feature points may include geometric descriptions of local constituent points such as the eyes, nose, mouth, and chin; this face characteristic point information is determined to be the target face information.
The transceiver unit 302 is configured to send the target face information to the server, and receive the first user identifier obtained by the server according to the matching of the target face information, and tag information associated with the first user identifier.
After the extracting unit 301 extracts the target face information, the transceiver unit 302 sends it to the server. When the server receives the target face information, it performs matching according to that information to obtain the first user identifier (such as an account) bound to it and the tag information associated with the first user identifier, and returns both to the transceiver unit 302.
And a parsing unit 303, configured to parse the tag information.
In some embodiments, as shown in fig. 10c, the parsing unit 303 may include:
An analysis subunit 3031, configured to analyze the tag information and determine the number of tags and style configuration information corresponding to the tags;
The parsing subunit 3032 is configured to parse the number of labels and the style configuration information corresponding to the labels through the augmented reality control, and determine display position information corresponding to the labels.
The analysis subunit 3031 analyzes the tag information and determines the number of tags corresponding to the tag information and the style configuration information corresponding to each tag, where the style configuration information is the parameter information used when a tag is displayed, such as the tag's display size, font, display style, and the like.
In some embodiments, the analysis subunit 3031 may be specifically configured to: the label information is analyzed to obtain the number of labels corresponding to the label information, and obtain display style parameters, display color parameters, display size parameters, display font parameters and the like corresponding to the labels.
In some embodiments, the parsing subunit 3032 may be specifically configured to: analyzing the number of the labels through the augmented reality control, analyzing the display style parameters, the display color parameters, the display size parameters and the display font parameters corresponding to the labels, and determining the display position information corresponding to the labels according to the analysis result.
Since the tags are ultimately to be displayed on the image to be recognized, the parsing subunit 3032 needs to plan the tags' display positions. Specifically, the parsing subunit 3032 parses, through the augmented reality control, the number of tags and each tag's display style parameter, display color parameter, display size parameter, and display font parameter, and determines the display position information corresponding to each tag from the parsing result. The more tags there are, the denser their display positions; the fewer tags, the sparser the layout.
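The density rule can be illustrated with the grid sketch below; the patent does not prescribe a concrete layout algorithm, so the grid and its dimensions are assumptions.

```python
import math

def layout_positions(n_tags, width=320, height=200):
    """Place n tags on a grid whose spacing shrinks as the tag count
    grows; all numbers here are illustrative."""
    cols = max(1, math.ceil(math.sqrt(n_tags)))
    rows = max(1, math.ceil(n_tags / cols))
    return [((i % cols + 0.5) * width / cols,
             (i // cols + 0.5) * height / rows)
            for i in range(n_tags)]

print(layout_positions(5))  # denser than layout_positions(2)
```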
And a display unit 304, configured to display the parsed tag information on the image to be identified.
The display unit 304 displays the parsed tag information on the image to be recognized in augmented reality form through the augmented reality control. That is, the display unit 304 first analyzes the layer structure of the image to be recognized to determine the display layer corresponding to the face image, and then floats the parsed tag information on that display layer in augmented reality form.
In some embodiments, the display unit 304 may be specifically configured to: initializing and loading the label according to the display style parameter, the display color parameter, the display size parameter and the display font parameter corresponding to the label to obtain a target label, and displaying the target label on the image to be identified according to the display position information.
The display unit 304 initializes and loads each tag according to the pattern style of its tag frame, the display color of the tag frame, the display size of the tag frame, and the font type used inside the tag frame, to obtain the target tag corresponding to the first user identifier. The target tag is then floated on the image to be recognized in augmented reality form according to the display position information.
In some embodiments, as shown in fig. 10d, the information processing apparatus may further include a face information acquisition unit 305, an acquisition unit 306, a binding transmission unit 307, an identification acquisition unit 308, a first judgment unit 309, an editing unit 310, a first control display unit 311, a second judgment unit 312, a second control generation unit 313, and a third control generation unit 314.
The face information acquisition unit 305 is configured to acquire preset face information.
An obtaining unit 306, configured to obtain a second user identifier associated with the local client.
The binding sending unit 307 is configured to bind the preset face information with the second user identifier, and send the bound preset face information with the second user identifier to the server.
Specifically, the user may perform the pre-binding operation by opening the local client on the terminal: a preset image is selected through the face information acquisition unit 305, and face information is extracted from it to obtain the preset face information. It should be noted that the preset face information may be the face information of the user himself or herself.
Then, the acquiring unit 306 acquires the second user identifier associated with the local client, that is, the account information currently logged in on the local client (the user's own account information), and binds it with the preset face information; the binding sending unit 307 then packages the bound preset face information and second user identifier and sends them to the server, so that the server stores the binding relationship between the preset face information and the second user identifier.
The identifier obtaining unit 308 is configured to obtain the received first user identifier and obtain a second user identifier associated with the local client.
A first determining unit 309 is configured to determine whether the first user identifier is consistent with the second user identifier.
And an editing unit 310 for setting the displayed tag information to an editable state when it is determined that the first user identification is identical to the second user identification.
The first control display unit 311 is configured to display a first control when it is determined that the first user identifier is inconsistent with the second user identifier, where the first control is used to add tag information to a client associated with the first user identifier.
After the display unit 304 displays the parsed tag information on the image to be identified in augmented reality form, the user can quickly gain a first impression of the first user through the displayed tags, which improves the efficiency of user interaction.
Then, the identifier obtaining unit 308 may obtain the received first user identifier and the second user identifier associated with (logged in on) the local client, and the first judging unit 309 judges whether the two are consistent. When they are judged consistent, the current user is viewing the tag information corresponding to his or her own account information, and the editing unit 310 sets the displayed tag information to an editable state; that is, the user may delete tags or modify the display style parameter, display color parameter, display size parameter, and display font parameter corresponding to each tag.
Further, when the first control display unit 311 determines that the first user identifier is inconsistent with the second user identifier, the current user is viewing tag information corresponding to another user's account information, and a first control is displayed at the bottom of the image to be identified; the word "tag" may be shown on the first control. When the user clicks the first control, tag selection options pop up, and when the user selects a tag, the selected tag can be added to the client associated with the first user identifier.
A second determining unit 312 is configured to determine whether the first user identifier exists on the local client.
And the second control generating unit 313 is configured to generate a second control when it is determined that the first user identifier exists on the local client, where the second control is used to send a message to the client associated with the first user identifier.
And the third control generating unit 314 is configured to generate a third control when it is determined that the first user identifier does not exist on the local client, where the third control is used to send a friend adding request to the client associated with the first user identifier.
When the second control generating unit 313 determines that the first user identifier exists on the local client, the first user identifier and the second user identifier associated with the local client are in a friend relationship; a second control is generated, and the user may send a message to the client corresponding to the first user identifier by clicking it. When the third control generating unit 314 determines that the first user identifier does not exist on the local client, the first user identifier and the second user identifier associated with the local client are not in a friend relationship; a third control is generated, and the user may send a friend-adding request to the client corresponding to the first user identifier by clicking it.
The specific implementation of each unit can be referred to the previous embodiments, and will not be repeated here.
As can be seen from the above, in the embodiment of the present invention, the extraction unit 301 obtains the image to be identified and extracts the target face information from it; the transceiver unit 302 sends the target face information to the server and receives the first user identifier obtained by the server through matching against the target face information, together with the tag information associated with the first user identifier; the parsing unit 303 parses the tag information; and the display unit 304 displays the parsed tag information on the image to be recognized. The apparatus can rapidly identify the target face information on the image, automatically acquire the tag information of the user associated with that face information, and display it on the image to be identified; compared with the existing scheme that can only rely on an established network relationship for information interaction, this greatly improves the convenience of user operation and the flexibility and diversity of information processing.
Embodiment Six,
Accordingly, referring to fig. 11, an embodiment of the present invention further provides an information processing system, which includes an information processing apparatus and a server. The information processing apparatus may be integrated in a terminal and may be any of the information processing apparatuses provided by the embodiments of the present invention; for details, refer to Embodiment Five. Taking the case where the information processing apparatus is integrated in a terminal as an example:
The terminal is used for acquiring an image to be identified, extracting target face information from the image to be identified, sending the target face information to the server, receiving a first user identifier obtained by the server according to the matching of the target face information and tag information associated with the first user identifier, analyzing the tag information, and displaying the analyzed tag information on the image to be identified.
The server is used for receiving target face information sent by the terminal, matching the target face information to obtain a first user identifier corresponding to the target face information and tag information associated with the first user identifier, and sending the first user identifier and the tag information associated with the first user identifier to the terminal.
For example, after receiving the target face information, the server performs feature point similarity matching between the face characteristic point information in the target face information and the stored face characteristic point information, and determines the stored information whose similarity value with the target exceeds a threshold. According to the binding relationship, it obtains the first user identifier (such as an account) bound to that face information and the tag information associated with the first user identifier, where the tag information may include the number of tags and the display style parameter, display color parameter, display size parameter, and display font parameter corresponding to each tag, and it sends the first user identifier and its associated tag information to the terminal. Optionally, the server may also send the profile information corresponding to the first user identifier to the terminal.
In some embodiments, the server may also be configured to receive the bound preset face information and the second user identifier sent by the terminal.
After receiving the bound second user identifier and preset face information, the server stores the binding relationship in a specific storage space of the server.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Since the information processing system may include any information processing apparatus provided by the embodiment of the present invention, the beneficial effects that any information processing apparatus provided by the embodiment of the present invention can achieve are described in detail in the previous embodiments, and are not described herein.
Embodiment Seven,
Embodiments of the present invention also provide a terminal, as shown in fig. 12, which may include a Radio Frequency (RF) circuit 601, a memory 602 including one or more computer readable storage media, an input unit 603, a display unit 604, a sensor 605, an audio circuit 606, a wireless fidelity (WiFi, wireless Fidelity) module 607, a processor 608 including one or more processing cores, and a power supply 609. It will be appreciated by those skilled in the art that the terminal structure shown in fig. 12 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. Wherein:
The RF circuit 601 may be used for receiving and transmitting signals during a message or a call; in particular, after downlink information of a base station is received, it is handed over to one or more processors 608 for processing, and uplink data is transmitted to the base station. Typically, the RF circuit 601 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM, Subscriber Identity Module) card, a transceiver, a coupler, a low noise amplifier (LNA, Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 601 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including, but not limited to, global system for mobile communications (GSM, Global System of Mobile Communication), general packet radio service (GPRS, General Packet Radio Service), code division multiple access (CDMA, Code Division Multiple Access), wideband code division multiple access (WCDMA, Wideband Code Division Multiple Access), long term evolution (LTE, Long Term Evolution), email, short message service (SMS, Short Messaging Service), and the like.
The memory 602 may be used to store software programs and modules, and the processor 608 may execute various functional applications and information processing by executing the software programs and modules stored in the memory 602. The memory 602 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the terminal, etc. In addition, the memory 602 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 602 may also include a memory controller to provide access to the memory 602 by the processor 608 and the input unit 603.
The input unit 603 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, in one specific embodiment, the input unit 603 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch-sensitive surface using any suitable object or accessory such as a finger or a stylus) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 608, and it can also receive commands from the processor 608 and execute them. In addition, the touch-sensitive surface may be implemented in a variety of types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 603 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 604 may be used to display information input by a user or information provided to the user and various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video and any combination thereof. The display unit 604 may include a display panel, which may optionally be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay a display panel, and upon detection of a touch operation thereon or thereabout, the touch-sensitive surface is passed to the processor 608 to determine the type of touch event, and the processor 608 then provides a corresponding visual output on the display panel based on the type of touch event. Although in fig. 12 the touch sensitive surface and the display panel are implemented as two separate components for input and output functions, in some embodiments the touch sensitive surface may be integrated with the display panel to implement the input and output functions.
The terminal may also include at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or backlight when the terminal moves to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and the direction when the mobile phone is stationary, and can be used for applications of recognizing the gesture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured in the terminal are not described in detail herein.
Audio circuitry 606, speakers, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 606 may transmit the received electrical signal after audio data conversion to a speaker, where the electrical signal is converted to a sound signal for output; on the other hand, the microphone converts the collected sound signals into electrical signals, which are received by the audio circuit 606 and converted into audio data, which are processed by the audio data output processor 608 for transmission to, for example, another terminal via the RF circuit 601, or which are output to the memory 602 for further processing. The audio circuit 606 may also include an ear bud jack to provide communication of the peripheral ear bud with the terminal.
The WiFi belongs to a short-distance wireless transmission technology, and the terminal can help the user to send and receive e-mail, browse web pages, access streaming media and the like through the WiFi module 607, so that wireless broadband internet access is provided for the user. Although fig. 12 shows a WiFi module 607, it is understood that it does not belong to the essential constitution of the terminal, and can be omitted entirely as required within the scope of not changing the essence of the invention.
The processor 608 is a control center of the terminal, and connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the terminal and processes data by running or executing software programs and/or modules stored in the memory 602, and calling data stored in the memory 602, thereby overall controlling the mobile phone. Optionally, the processor 608 may include one or more processing cores; preferably, the processor 608 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 608.
The terminal also includes a power supply 609 (e.g., a battery) for powering the various components, which may be logically connected to the processor 608 via a power management system so as to provide for managing charging, discharging, and power consumption by the power management system. The power supply 609 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the terminal may further include a camera, a bluetooth module, etc., which will not be described herein. Specifically, in this embodiment, the processor 608 in the terminal loads executable files corresponding to the processes of one or more application programs into the memory 602 according to the following instructions, and the processor 608 executes the application programs stored in the memory 602, so as to implement various functions:
Acquiring an image to be identified, and extracting target face information from the image to be identified; the method comprises the steps of sending target face information to a server, and receiving a first user identifier obtained by the server according to matching of the target face information and tag information associated with the first user identifier; analyzing the label information; and displaying the parsed label information on the image to be recognized.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and the portions of an embodiment that are not described in detail in the foregoing embodiments may be referred to in the foregoing detailed description of the information processing method, which is not repeated herein.
From the above, the terminal according to the embodiment of the invention can extract the target face information from the image to be identified by acquiring the image to be identified; the method comprises the steps of sending target face information to a server, and receiving a first user identifier obtained by the server according to matching of the target face information and tag information associated with the first user identifier; analyzing the label information; and displaying the parsed label information on the image to be recognized. The scheme can rapidly identify the target face information on the image, automatically acquire the label information of the user associated with the target face information according to the target face information, and display the label information on the image to be identified, so that compared with the existing scheme which can only rely on the network relationship for information interaction, the scheme can greatly improve the convenience of user operation and the flexibility and diversity of information processing.
Embodiment Eight,
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform steps in any one of the information processing methods provided by the embodiment of the present invention. For example, the instructions may perform the steps of:
Acquiring an image to be identified, and extracting target face information from the image to be identified; the method comprises the steps of sending target face information to a server, and receiving a first user identifier obtained by the server according to matching of the target face information and tag information associated with the first user identifier; analyzing the label information; and displaying the parsed label information on the image to be recognized.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: read-only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), magnetic disk, optical disc, and the like.
The instructions stored in the storage medium may perform steps in any information processing method provided by the embodiments of the present invention, so that the beneficial effects that any information processing method provided by the embodiments of the present invention can be achieved, which are detailed in the previous embodiments and are not described herein.
The foregoing describes in detail an information processing method, apparatus, storage medium and terminal provided in the embodiments of the present invention, and specific examples are applied to illustrate the principles and embodiments of the present invention, where the foregoing examples are only used to help understand the method and core idea of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in light of the ideas of the present invention, the present description should not be construed as limiting the present invention.

Claims (12)

1. An information processing method, which is applied to an instant messaging client, the method comprising:
In response to a click operation on a shortcut operation control in a display main interface, invoking a camera acquisition component to collect an image to be identified, and displaying the collected image to be identified on a display screen; the display main interface is the first interface displayed after logging in to the instant messaging client through a second user identifier and a password; the image to be identified further comprises a camera switching control and an album control, wherein the camera switching control is used for switching the camera invoked to collect the image to be identified, and the album control is used for invoking the album on the terminal where the instant messaging client is located so as to select the image to be identified from the album;
analyzing the image to be identified, and determining a face image on the image to be identified;
Judging whether the number of the face images in the image to be identified is multiple;
when the number of the face images in the image to be recognized is not multiple, determining the face images as target face images;
when the number of the face images in the images to be recognized is multiple, generating popup prompt information, wherein the popup prompt information is used for prompting a user to select a target face image and receiving the target face image selected by the user according to the popup prompt information;
determining target face information based on the target face image;
The target face information is sent to a server, and a first user identifier obtained by the server according to the target face information in a matching mode and tag information associated with the first user identifier are received;
Analyzing the label information;
displaying the parsed label information on the image to be identified;
Acquiring a received first user identification and acquiring the second user identification associated with a local client;
When the first user identifier is consistent with the second user identifier, displaying the displayed tag information in an editable state, so as to indicate that a deletion operation can be performed on the tag information or a modification operation can be performed on the style configuration of the tag information;
When the first user identifier is inconsistent with the second user identifier, displaying a first control, wherein the first control is used for adding tag information to the client associated with the first user identifier; when a tag adding operation based on the first control is detected, the instant messaging client sends the acquired tag information to the server, and the updated tag information associated with the first user identifier sent by the server is parsed through an augmented reality control so as to update the tag information displayed on the image to be identified;
When the first user identifier exists on the local client, displaying a second control, wherein the second control is used for sending a message to the client associated with the first user identifier; the first user identifier exists on the local client side to represent that a friend relationship exists between the first user identifier and the second user identifier; the absence of the first user identifier on the local client indicates that the first user identifier and the second user identifier are not in a friend relationship;
And when the first user identifier does not exist on the local client, displaying a third control, wherein the third control is used for sending a friend adding request to the client associated with the first user identifier, and the first control and the third control are displayed at the bottom end of the image to be identified.
2. The processing method according to claim 1, wherein the step of parsing the tag information includes:
Analyzing the label information to determine the number of labels and the style configuration information corresponding to the labels;
and analyzing the number of the labels and the style configuration information corresponding to the labels through the augmented reality control, and determining the display position information corresponding to the labels.
3. The processing method according to claim 2, wherein the step of analyzing the tag information to determine the number of tags and style configuration information corresponding to the tags includes:
analyzing the label information to obtain the number of labels corresponding to the label information, and obtaining display style parameters, display color parameters, display size parameters and display font parameters corresponding to the labels;
The step of analyzing the number of the labels and the style configuration information corresponding to the labels through the augmented reality control to determine the display position information corresponding to the labels comprises the following steps: analyzing the number of the labels, and analyzing display style parameters, display color parameters, display size parameters and display font parameters corresponding to the labels through the augmented reality control, and determining display position information corresponding to the labels according to analysis results.
4. The processing method according to claim 3, wherein the step of displaying the parsed tag information on the image to be recognized comprises:
initializing and loading the label according to the display style parameter, the display color parameter, the display size parameter and the display font parameter corresponding to the label to obtain a target label;
And displaying the target label on the image to be identified according to the display position information.
5. The processing method according to any one of claims 1 to 4, wherein the determining target face information based on the target face image includes:
Intercepting the target face image and determining the target face image as target face information; or
And extracting face characteristic point information in the target face image, and determining the face characteristic point information as target face information.
6. The processing method according to any one of claims 1 to 4, further comprising:
Acquiring preset face information;
Acquiring a second user identifier associated with a local client;
binding the preset face information with the second user identifier, and sending the bound preset face information and the second user identifier to a server.
7. An information processing apparatus, characterized by comprising:
The extraction unit is used for, in response to a click operation on a shortcut operation control in the display main interface, calling the camera acquisition component to collect an image to be identified, and displaying the collected image to be identified on the display screen; the display main interface is the first interface displayed after logging in to the instant messaging client through the second user identifier and the password; the image to be identified further comprises a camera switching control and an album control, wherein the camera switching control is used for switching the camera invoked to collect the image to be identified, and the album control is used for invoking the album on the terminal where the instant messaging client is located so as to select the image to be identified from the album; analyzing the image to be identified, and determining the face images on the image to be identified; judging whether the number of the face images in the image to be identified is multiple; when the number of the face images in the image to be recognized is not multiple, determining the face image as the target face image; when the number of the face images in the image to be recognized is multiple, generating popup prompt information, wherein the popup prompt information is used for prompting the user to select a target face image, and receiving the target face image selected by the user according to the popup prompt information; determining target face information based on the target face image;
a transceiver unit, configured to send the target face information to a server, and to receive a first user identifier obtained by the server through matching according to the target face information, together with tag information associated with the first user identifier;
a parsing unit, configured to parse the tag information;
a display unit, configured to display the parsed tag information on the image to be identified;
an identifier acquisition unit, configured to acquire the received first user identifier and to acquire a second user identifier associated with the local client;
an editing unit, configured to display the displayed tag information in an editable state when the first user identifier is consistent with the second user identifier, so as to allow a deletion operation on the tag information or a modification operation on the style configuration of the tag information;
a first control display unit, configured to display a first control when the first user identifier is inconsistent with the second user identifier, the first control being used to add tag information for the client associated with the first user identifier; when a tag adding operation based on the first control is detected, the instant messaging client sends the acquired tag information to the server, and parses, through the augmented reality control, the updated tag information associated with the first user identifier sent back by the server, so as to update the tag information displayed on the image to be identified;
a control generation unit, configured to display a second control when the first user identifier exists on the local client, the second control being used to send a message to the client associated with the first user identifier; and to display a third control when the first user identifier does not exist on the local client, the third control being used to send a friend adding request to the client associated with the first user identifier, the first control and the third control being displayed at the bottom of the image to be identified; wherein the presence of the first user identifier on the local client indicates that the first user identifier and the second user identifier are in a friend relationship, and its absence indicates that they are not.
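The branching among the editing unit, the first control display unit and the control generation unit can be summarized as follows; the control names are hypothetical placeholders for whatever widgets the client renders.

```python
def choose_controls(first_user_id, second_user_id, local_contacts):
    # Reproduce the control-selection branches of claim 7.
    controls = []
    if first_user_id == second_user_id:
        controls.append("editable_tags")             # editing unit: own tags
    else:
        controls.append("add_tag_control")           # first control
        if first_user_id in local_contacts:          # friend relationship
            controls.append("send_message_control")  # second control
        else:
            controls.append("add_friend_control")    # third control
    return controls
```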
8. The information processing apparatus according to claim 7, wherein the parsing unit comprises:
a parsing subunit, configured to parse the tag information and determine the number of tags and the style configuration information corresponding to the tags;
and an analysis subunit, configured to analyze, through the augmented reality control, the number of tags and the style configuration information corresponding to the tags, and to determine the display position information corresponding to the tags.
9. The information processing apparatus according to claim 8, wherein the parsing subunit is specifically configured to:
parse the tag information to obtain the number of tags corresponding to the tag information, and obtain the display style parameter, display color parameter, display size parameter and display font parameter corresponding to each tag;
and the analysis subunit is specifically configured to: analyze, through the augmented reality control, the number of tags and the display style parameter, display color parameter, display size parameter and display font parameter corresponding to each tag, and determine the display position information corresponding to the tags according to the analysis result.
10. The information processing apparatus according to claim 9, wherein the display unit is specifically configured to:
initialize and load each tag according to its display style parameter, display color parameter, display size parameter and display font parameter to obtain a target tag;
and display the target tag on the image to be identified according to the display position information.
11. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the information processing method of any one of claims 1 to 6.
12. An information processing system, the system comprising: a terminal and a server;
the terminal comprising the information processing apparatus according to any one of claims 7 to 10;
and the server being configured to: receive target face information sent by the terminal, obtain, through matching according to the target face information, a first user identifier corresponding to the target face information and tag information associated with the first user identifier, and send the first user identifier and the associated tag information to the terminal.
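As a sketch of the server side of this system, assume the target face information arrives as an embedding vector and that matching is done by cosine similarity against stored (user identifier, embedding, tags) bindings; both assumptions are illustrative, since the claims leave the matching method open.

```python
import numpy as np

def match_face(query_embedding, bindings, threshold=0.8):
    # Return (first_user_id, tags) for the best-scoring stored binding,
    # or (None, None) if no similarity clears the threshold.
    best_id, best_tags, best_score = None, None, threshold
    q = query_embedding / np.linalg.norm(query_embedding)
    for user_id, emb, tags in bindings:
        score = float(np.dot(q, emb / np.linalg.norm(emb)))
        if score > best_score:
            best_id, best_tags, best_score = user_id, tags, score
    return best_id, best_tags
```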
CN201810272957.XA 2018-03-29 2018-03-29 Information processing method, device, storage medium and system Active CN110555171B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810272957.XA CN110555171B (en) 2018-03-29 2018-03-29 Information processing method, device, storage medium and system

Publications (2)

Publication Number Publication Date
CN110555171A (en) 2019-12-10
CN110555171B (en) 2024-04-30

Family

Family ID: 68733667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810272957.XA Active CN110555171B (en) 2018-03-29 2018-03-29 Information processing method, device, storage medium and system

Country Status (1)

Country Link
CN (1) CN110555171B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177499B (en) * 2019-12-27 2024-02-09 腾讯科技(深圳)有限公司 Label adding method and device and computer readable storage medium
CN111243023B (en) * 2020-01-14 2024-03-29 上海联影医疗科技股份有限公司 Quality control method and device based on virtual intelligent medical platform
CN112199553A (en) * 2020-09-24 2021-01-08 北京达佳互联信息技术有限公司 Information resource processing method, device, equipment and storage medium
CN112365281B (en) * 2020-10-28 2024-05-14 国网冀北电力有限公司计量中心 Power customer service demand analysis method and device
CN113038266B (en) * 2021-03-05 2023-02-24 青岛智动精工电子有限公司 Image processing method and device and electronic equipment
TWI810104B (en) * 2022-11-01 2023-07-21 南開科技大學 Interactive digital photo frame system with communication function and method thereof

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102355534A (en) * 2011-11-01 2012-02-15 宇龙计算机通信科技(深圳)有限公司 Mobile terminal and contact information recommendation method
CN103076879A (en) * 2012-12-28 2013-05-01 中兴通讯股份有限公司 Multimedia interaction method and device based on face information, and terminal
CN103513890A (en) * 2012-06-28 2014-01-15 腾讯科技(深圳)有限公司 Method and device for interaction based on image and server
CN106484737A (en) * 2015-09-01 2017-03-08 腾讯科技(深圳)有限公司 A kind of network social intercourse method and network social intercourse device
CN106559317A (en) * 2015-09-30 2017-04-05 北京奇虎科技有限公司 A kind of method and apparatus that account information is sent based on instant messaging
CN107239725A (en) * 2016-03-29 2017-10-10 阿里巴巴集团控股有限公司 A kind of information displaying method, apparatus and system
CN107704626A (en) * 2017-10-30 2018-02-16 北京萌哥玛丽科技有限公司 A kind of control method and control device that user is searched based on recognition of face

Also Published As

Publication number Publication date
CN110555171A (en) 2019-12-10

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40018732)
SE01 Entry into force of request for substantive examination
GR01 Patent grant