CN110555171A - Information processing method, device, storage medium and system - Google Patents


Info

Publication number
CN110555171A
Authority
CN
China
Prior art keywords
information
image
label
display
target face
Prior art date
Legal status
Pending
Application number
CN201810272957.XA
Other languages
Chinese (zh)
Inventor
廖戈语
钟庆华
卢锟
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201810272957.XA
Publication of CN110555171A
Status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Abstract

The embodiments of the present invention disclose an information processing method, apparatus, storage medium, and system. In the embodiments, a terminal acquires an image to be recognized and extracts target face information from it; sends the target face information to a server; receives a first user identifier obtained by the server by matching against the target face information, together with tag information associated with the first user identifier; parses the tag information; and displays the parsed tag information on the image to be recognized. This greatly improves the convenience of user operation and increases the flexibility and diversity of information processing.

Description

Information processing method, device, storage medium and system
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an information processing method, apparatus, storage medium, and system.
Background
With the continued popularization and development of terminals, users rely on them ever more heavily. A wide variety of applications can be installed on a terminal; among them, instant messaging applications are widely used, allowing users to communicate and interact with friends, for example by viewing a friend's 'friend impression'.
In the prior art, a terminal typically displays several impression tags of a friend in the display interface of that friend's friend impression. An impression tag is a keyword evaluation of the friend given by other users. By viewing a friend's friend impression, a user can quickly form a preliminary understanding of that friend, or complete an entertaining interaction by adding impression tags to the friend.
During research and practice on the prior art, the inventors of the present invention found that viewing and adding friend impressions are always based on a relationship chain in the network: a user can view or add a friend's friend impression only after becoming that person's network friend. The operation is therefore cumbersome, the interaction is heavily restricted, and the information processing mode is monolithic.
Disclosure of Invention
Embodiments of the present invention provide an information processing method, apparatus, storage medium, and system, which aim to improve the convenience of operation and the flexibility and diversity of information processing.
In order to solve the above technical problems, embodiments of the present invention provide the following technical solutions:
An information processing method, comprising:
acquiring an image to be recognized, and extracting target face information from the image to be recognized;
sending the target face information to a server, and receiving a first user identifier obtained by the server by matching against the target face information, together with tag information associated with the first user identifier;
parsing the tag information; and
displaying the parsed tag information on the image to be recognized.
An information processing apparatus, comprising:
an extraction unit, configured to acquire an image to be recognized and extract target face information from the image to be recognized;
a transceiver unit, configured to send the target face information to a server, and to receive a first user identifier obtained by the server by matching against the target face information, together with tag information associated with the first user identifier;
a parsing unit, configured to parse the tag information; and
a display unit, configured to display the parsed tag information on the image to be recognized.
In some embodiments, the extraction unit includes:
a cropping and determining subunit, configured to acquire an image to be recognized, recognize a target face image in the image to be recognized, crop the target face image, and determine the cropped face image as the target face information; or
an extraction and determination subunit, configured to acquire an image to be recognized, recognize a target face image in the image to be recognized, extract face feature point information from the target face image, and determine the face feature point information as the target face information.
In some embodiments, the information processing apparatus further includes a face information collection unit, an acquisition unit, and a binding sending unit;
the face information collection unit is configured to collect preset face information;
the acquisition unit is configured to acquire a second user identifier associated with the local client; and
the binding sending unit is configured to bind the preset face information with the second user identifier and send the bound preset face information and second user identifier to a server.
In some embodiments, the information processing apparatus further includes an identifier acquisition unit, a first judging unit, an editing unit, and a first control display unit;
the identifier acquisition unit is configured to acquire the received first user identifier and to acquire a second user identifier associated with the local client;
the first judging unit is configured to judge whether the first user identifier is consistent with the second user identifier;
the editing unit is configured to set the displayed tag information to an editable state when the first user identifier is judged to be consistent with the second user identifier; and
the first control display unit is configured to display a first control when the first user identifier is judged to be inconsistent with the second user identifier, the first control being used to add tag information for the client associated with the first user identifier.
In some embodiments, the information processing apparatus further includes a second judging unit, a second control generating unit, and a third control generating unit;
the second judging unit is configured to judge whether the first user identifier exists on the local client;
the second control generating unit is configured to generate a second control when it is judged that the first user identifier exists on the local client, the second control being used to send a message to the client associated with the first user identifier; and
the third control generating unit is configured to generate a third control when it is judged that the first user identifier does not exist on the local client, the third control being used to send a friend adding request to the client associated with the first user identifier.
A storage medium storing a plurality of instructions suitable for being loaded by a processor to perform the steps of the above information processing method.
An information processing system, comprising a terminal and a server;
the terminal includes the above information processing apparatus; and
the server is configured to receive target face information sent by the terminal, obtain by matching against the target face information a first user identifier corresponding to the target face information and tag information associated with the first user identifier, and send the first user identifier and the associated tag information to the terminal.
In the embodiments of the present invention, a terminal acquires an image to be recognized and extracts target face information from it; sends the target face information to a server; receives a first user identifier obtained by the server by matching against the target face information, together with tag information associated with the first user identifier; parses the tag information; and displays the parsed tag information on the image to be recognized. Because this solution can rapidly recognize the target face information in an image, automatically acquire the tag information of the user associated with that face information, and display it on the image to be recognized, it greatly improves the convenience of user operation and the flexibility and diversity of information processing, compared with existing solutions that rely solely on a network relationship chain for information interaction.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a scenario of an information handling system provided by an embodiment of the present invention;
FIG. 2 is a flow chart of an information processing method according to an embodiment of the present invention;
FIG. 3 is another schematic flow chart of an information processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image processing interface provided by an embodiment of the present invention;
FIG. 5 is another schematic diagram of an image processing interface provided by an embodiment of the invention;
FIG. 6 is another schematic diagram of an image processing interface provided by an embodiment of the invention;
FIG. 7 is another schematic diagram of an image processing interface provided by an embodiment of the invention;
FIG. 8 is a timing diagram illustrating an information processing method according to an embodiment of the present invention;
FIG. 9 is a schematic flow chart of an information processing method according to an embodiment of the present invention;
FIG. 10a is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
FIG. 10b is a schematic diagram of another structure of an information processing apparatus according to an embodiment of the present invention;
FIG. 10c is a schematic diagram of another structure of an information processing apparatus according to an embodiment of the present invention;
FIG. 10d is a schematic diagram of another structure of an information processing apparatus according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of an information processing system according to an embodiment of the present invention.
Fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the embodiments given here without creative effort shall fall within the protection scope of the present invention.
Embodiments of the present invention provide an information processing method, apparatus, storage medium, and system.
Referring to fig. 1, fig. 1 is a schematic view of a scenario of an information processing system according to an embodiment of the present invention. The system includes a terminal 10 and a server 20, which may be connected through a communication network. The communication network may include wireless and wired networks, the wireless network including one or more of a wireless wide area network, a wireless local area network, a wireless metropolitan area network, and a wireless personal area network. The network includes entities such as routers and gateways, which are not shown in the figure. The terminal 10 may interact with the server 20 via the communication network, for example to download an application (such as an instant messaging application) from the server 20.
The information processing system may include an information processing apparatus, which may be integrated in a terminal with computing capability that has a storage unit and a microprocessor, such as a tablet computer, mobile phone, notebook computer, or desktop computer. In fig. 1 this terminal is the terminal 10, on which the applications a user needs, such as an instant messaging application with an information interaction function, may be installed. The terminal 10 may be configured to acquire an image to be recognized, extract target face information from it, send the target face information to the server 20, receive a first user identifier obtained by the server 20 by matching against the target face information together with tag information associated with the first user identifier, parse the tag information, and display the parsed tag information on the image to be recognized.
The information processing system may further include the server 20, which is mainly configured to receive the target face information sent by the terminal 10, obtain by matching a first user identifier corresponding to the target face information and tag information associated with it, and send both to the terminal 10. The system may further include a memory configured to store an information base recording the associations between user identifiers and face information and between user identifiers and tag information, so that the server can read the first user identifier and its associated tag information from the memory and transmit them to the terminal 10.
It should be noted that the scenario diagram of the information processing system shown in fig. 1 is only an example; the system and scenario described here serve to explain the technical solution of the embodiments more clearly and do not limit it.
Detailed descriptions are given below.
Embodiment One
This embodiment is described from the perspective of an information processing apparatus, which may be integrated in a terminal with computing capability that has a storage unit and a microprocessor, such as a tablet computer or a mobile phone.
An information processing method, comprising: acquiring an image to be recognized, and extracting target face information from the image to be recognized; sending the target face information to a server, and receiving a first user identifier obtained by the server by matching against the target face information, together with tag information associated with the first user identifier; parsing the tag information; and displaying the parsed tag information on the image to be recognized.
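As a minimal sketch of how these four steps could compose on the client, consider the following outline. The four helper callables are placeholders for the units sketched later in this description; their names are illustrative assumptions, not terminology from the patent.

```python
# Minimal sketch of the client flow (steps 101-104 below).
# The helpers are injected so the outline stays self-contained;
# concrete sketches of each appear later in this description.

def process_image(image_path, extract, match, parse, display):
    face_info = extract(image_path)        # step 101: extract target face info
    user_id, tag_info = match(face_info)   # step 102: server-side matching
    tags, positions = parse(tag_info)      # step 103: parse the tag info
    display(image_path, tags, positions)   # step 104: overlay tags in AR form
    return user_id
```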
Referring to fig. 2, fig. 2 is a flowchart of an information processing method according to an embodiment of the present invention. The information processing method includes:
In step 101, an image to be recognized is acquired, and target face information is extracted from the image to be recognized.
The image to be recognized may be a frame from a video stream captured in real time by a camera, or a picture cached or stored on the terminal. Its format may be Bitmap (BMP), Joint Photographic Experts Group (JPEG), Graphics Interchange Format (GIF), or the like.
In some embodiments, the image to be recognized may be obtained as follows. A client on the terminal, such as an Instant Messaging (IM) client, is opened; the user enters a user identifier and a password, and the client enters the display main interface corresponding to that user identifier, i.e. the first interface displayed after logging in. The main interface includes a shortcut operation control, a shortcut entrance for triggering the acquisition of an image to be recognized: when a click on it is detected, the camera assembly is invoked to capture the image to be recognized, which is then shown on the display screen. Optionally, the capture interface may further include a camera switching control and an album control. The camera switching control is a shortcut entry for switching between the front and rear cameras, so the user can click it to switch cameras before capturing the image to be recognized. The album control is a shortcut entry for opening the album on the terminal, so the user can click it to open the album and select a picture as the image to be recognized.
In some embodiments, the step of acquiring an image to be recognized and extracting target face information from it may include:
acquiring an image to be recognized, recognizing a target face image in the image to be recognized, cropping the target face image, and determining the cropped face image as the target face information; or
acquiring an image to be recognized, recognizing a target face image in the image to be recognized, extracting face feature point information from the target face image, and determining the face feature point information as the target face information.
A face image contains rich pattern features, such as histogram features, color features, template features, structural features, and Haar features (a Haar feature reflects gray-level changes in an image by computing differences between the pixel sums of adjacent rectangular regions). The image to be recognized can therefore be feature-scanned to locate the target face image within it. Optionally, the target face image may be highlighted on the image to be recognized with a rectangular frame or a circular frame.
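As a concrete illustration of such feature scanning, the following sketch uses OpenCV's stock Haar-cascade face detector. The cascade file ships with OpenCV; treating this as the detector used by the patent is an assumption.

```python
import cv2

def detect_faces(image_path):
    """Scan an image and return bounding boxes (x, y, w, h) of all faces found."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Haar features compare pixel sums of adjacent rectangles over a sliding window
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```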
In some embodiments, the step of acquiring the image to be recognized and recognizing the target face image in it may include:
(1) parsing the image to be recognized and determining the face images it contains;
(2) judging whether there are multiple face images;
(3) if so, generating prompt information; and
(4) otherwise, determining the single face image as the target face image.
Because the image to be recognized may contain several face images, the face feature information in the image is scanned first to determine all the face images it contains.
Optionally, it is then judged whether there are multiple face images. If there are, step (3) is executed: prompt information is generated to ask the user to select the target face image, and the target face image the user selects according to the prompt is received. Specifically, a pop-up prompt may be shown, and the user may designate a face image as the target by clicking it in the image to be recognized. If there is only one face image, step (4) is executed and that unique face image is determined as the target face image.
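The selection logic of steps (1) to (4) reduces to the following minimal sketch; the prompt callback is a placeholder for the pop-up window described above.

```python
def select_target_face(faces, prompt_user):
    # faces: bounding boxes from the detector
    # prompt_user: placeholder for the pop-up that lets the user tap a face
    if not faces:
        raise ValueError("no face found in the image to be recognized")
    if len(faces) > 1:
        return prompt_user(faces)  # step (3): ask the user to choose
    return faces[0]                # step (4): the single face is the target
```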
Specifically, after the target face image has been determined as above, in one embodiment the terminal may crop the target face image and use it directly as the target face information. In another embodiment, the terminal may apply image preprocessing such as gray-level correction and noise filtering to the target face image, and then extract the face feature points from the processed image. The face feature points may include geometric descriptions of local constituent points such as the eyes, nose, mouth, and chin.
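The preprocessing and feature-point extraction could look like the following sketch, here using dlib's 68-point landmark model. The choice of dlib and the model file are assumptions for illustration, not part of the patent.

```python
import cv2
import dlib

# Assumed model file; dlib's standard 68-landmark predictor
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_feature_points(img, box):
    """Preprocess the face region and return its landmark coordinates."""
    x, y, w, h = box
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)             # gray-level correction
    gray = cv2.GaussianBlur(gray, (3, 3), 0)  # noise filtering
    shape = predictor(gray, dlib.rectangle(x, y, x + w, y + h))
    # geometric description of eyes, nose, mouth and chin
    return [(p.x, p.y) for p in shape.parts()]
```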
In step 102, the target face information is sent to a server, and a first user identifier obtained by the server by matching against the target face information is received, together with tag information associated with the first user identifier.
The first user identifier may include information such as a user name, a client account, an instant messaging account, an International Mobile Equipment Identity (IMEI), and/or a mailbox account.
It should be noted that the server may host the installation package file of a client; for concreteness, the installation package file of an instant messaging client is used as the example. The terminal can download this installation package file from the server, decompress and install it, and thereby generate a local client installed on the terminal. The information of the local client is provided by the server. When the local client is opened for the first time, a login interface appears, where the user can enter a second user identifier (such as an account) and a password to log in; a user without a second user identifier can register one, and the second user identifier and password are then stored on the server. Each user identifier is associated with the social data of the corresponding account, which may include the user's personal profile, tag information, friend information, roaming chat records, and the like. The tag information can be understood colloquially as the 'friend impression': it lets users give keyword evaluations of their friends, and of themselves. All the evaluations one user receives are gathered together and shown to other friends through the tag information. The user identifier (account) and the social data associated with it are stored on the server.
Based on this, in some embodiments, before the step of acquiring the image to be recognized and extracting the target face information from the image to be recognized, the method may further include:
(1) Collecting preset face information;
(2) Acquiring a second user identifier associated with the local client;
The second user identifier may include information such as a user name, a client account, an account of an instant messaging tool, an International Mobile Equipment Identity (IMEI), and/or a mailbox account.
(3) Binding the preset face information with the second user identifier, and sending them, bound together, to a server.
Specifically, the user may perform the binding in advance by opening the local client on the terminal, selecting a preset image, and extracting face information from it to obtain the preset face information. It should be noted that the preset face information may be the face information of the user himself or herself.
The terminal then obtains a second user identifier associated with the local client, such as the account currently logged in on the local client (i.e. the user's own account), binds it with the preset face information, and sends the bound preset face information and second user identifier to the server in one package, so that the server stores the binding relationship between them.
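A sketch of this binding upload, assuming a hypothetical HTTP endpoint; the patent does not specify the transport or the endpoint name.

```python
import requests

def bind_face(server_url, second_user_id, preset_face_points):
    """Bind the preset face information to the logged-in account and upload it."""
    payload = {"user_id": second_user_id, "face_info": preset_face_points}
    resp = requests.post(f"{server_url}/bind_face", json=payload, timeout=5)
    resp.raise_for_status()  # the server then stores the binding relationship
```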
After extracting the target face information, the terminal sends it to the server. Since the binding relationships between face information and user identifiers are pre-stored on the server, the server can, upon receiving the target face information, match it to obtain the first user identifier (such as an account) bound to it and the tag information associated with that identifier, and return both to the terminal.
In one embodiment, besides the first user identifier and its associated tag information, the server may also return the profile information associated with the first user identifier to the terminal.
In step 103, the tag information is parsed.
A Software Development Kit (SDK) with an augmented reality function can be integrated on the terminal; the augmented reality control is loaded through this SDK, and the tag information is parsed by the augmented reality control.
It should be noted that Augmented Reality (AR) is a technology that seamlessly integrates real-world and virtual-world information. Entity information (visual information, sound, taste, touch, and so on) that would otherwise be hard to experience within a certain time and space of the real world is simulated through computer and other technologies, and the resulting virtual information is superimposed on the real world to be perceived by the human senses, achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed in real time on the same picture or into the same space, existing simultaneously.
In some embodiments, the step of parsing the tag information may include:
(1) parsing the tag information, and determining the number of tags and the style configuration information corresponding to the tags; and
(2) analyzing, through the augmented reality control, the number of tags and their style configuration information, and determining the display position information corresponding to the tags.
The tag information is parsed to determine the number of tags it carries and the style configuration information of each tag. The style configuration information comprises the parameters used when a tag is displayed, such as its display size, font, and display style.
In some embodiments, step (1) may include: parsing the tag information to obtain the number of tags and, for each tag, its display style parameter, display color parameter, display size parameter, display font parameter, and so on. The number of tags is the total number of keyword evaluations that other users have given the client corresponding to the first user identifier. For example, the first user identifier may have three tags: 'gentle', 'lovely', and 'beautiful'. The display style parameter is the display style of the tag, i.e. the pattern style of the tag frame; the display color parameter is the display color of the tag frame; the display size parameter is the display size of the tag frame; and the display font parameter is the font type used for the text inside the tag frame.
Since the tags are ultimately displayed on the image to be recognized in augmented reality form, display positions must be arranged for them. Accordingly, the augmented reality control analyzes the number of tags together with their display style, color, size, and font parameters, and determines the display position information of each tag from the analysis result: the more tags there are, the denser their display positions; the fewer tags, the looser the arrangement. A sketch of this parse-and-layout step is given below.
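In the sketch, the tag-information schema and the spacing rule (denser with more tags) are illustrative assumptions.

```python
def layout_tags(tag_info, face_box):
    """Parse tag info and assign each tag a display position near the face box."""
    x, y, w, h = face_box
    tags = tag_info["tags"]  # assumed schema, cf. Table 1 in Embodiment Two
    # more tags -> smaller vertical spacing, i.e. a denser arrangement
    step = max(h // max(len(tags), 1), 20)
    positions = [(x + w + 10, y + i * step) for i in range(len(tags))]
    return tags, positions
```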
In step 104, the parsed tag information is displayed on the image to be recognized.
Through the augmented reality control, the terminal displays the parsed tag information on the image to be recognized in augmented reality form: it first analyzes the layer pattern of the image to be recognized and determines the display layer corresponding to the face image, and then floats the parsed tag information over that display layer in augmented reality form. In one embodiment, the terminal may instead present the parsed tag information on the image to be recognized through Virtual Reality (VR) or another display form.
In some embodiments, the step of displaying the parsed tag information on the image to be recognized may include:
initializing and loading each tag according to its display style parameter (such as the pattern style of the tag frame), display color parameter (such as the display color of the tag frame), display size parameter (such as the display size of the tag frame), and display font parameter (such as the font type used inside the tag frame) to obtain a target tag; and displaying the target tag on the image to be recognized according to the display position information.
That is, each tag is initialized and loaded according to the pattern style, color, size, and font type of its tag frame, yielding the target tags corresponding to the first user identifier.
The target tags are then floated over the image to be recognized in augmented reality form according to the display position information, as sketched below.
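A rendering sketch using Pillow; the plain rectangle stands in for the parsed frame style, and the default font for the parsed display font parameter.

```python
from PIL import Image, ImageDraw, ImageFont

def overlay_tags(image_path, tags, positions, color="blue"):
    """Draw each tag frame and its text at the computed display position."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # stand-in for the display font parameter
    for text, (x, y) in zip(tags, positions):
        # tag frame (display style / color / size) and the tag text
        draw.rectangle([x, y, x + 8 * len(text) + 10, y + 18], outline=color)
        draw.text((x + 5, y + 3), text, fill=color, font=font)
    img.save("tagged.jpg")
```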
In one embodiment, besides displaying the target tags on the image to be recognized in augmented reality form according to the display position information, the terminal may also display the profile information on the image to be recognized in tag form.
Optionally, the target tags and the profile information are preferentially displayed around the target face image.
In some embodiments, after the step of displaying the parsed tag information on the image to be recognized, the method may further include:
(1) acquiring the received first user identifier, and acquiring a second user identifier associated with the local client;
(2) judging whether the first user identifier is consistent with the second user identifier;
(3) if they are consistent, setting the displayed tag information to an editable state; and
(4) if they are not, displaying a first control for adding tag information for the client associated with the first user identifier.
After the parsed tag information has been laid over the image to be recognized in augmented reality form, the user can quickly gain a preliminary understanding of the first user from the displayed tags, which improves the user's social efficiency.
The terminal then acquires the received first user identifier and the second user identifier associated with (logged in on) the local client, and judges whether they are consistent. If they are, the current user is viewing the tag information of his or her own account, and step (3) is executed: the displayed tag information is set to an editable state, in which the user can delete tags or modify their display style, color, size, and font parameters.
If the first user identifier is inconsistent with the second, the current user is viewing the tag information of another user's account, and step (4) is executed: a first control is displayed at the bottom of the image to be recognized, which may carry a 'add a tag' legend. When the user clicks it, tag selection options pop up, and after the user chooses a tag it is added for the client associated with the first user identifier.
In some embodiments, after displaying the first control, the method may further include:
(1.1) judging whether the first user identifier exists on the local client;
(1.2) if it does, generating a second control for sending a message to the client associated with the first user identifier; and
(1.3) if it does not, generating a third control for sending a friend adding request to the client associated with the first user identifier.
If the first user identifier is judged to exist on the local client, the first user identifier and the second user identifier associated with the local client are in a friend relationship; step (1.2) is executed to generate a second control, and the user can click it to send a message to the client corresponding to the first user identifier. If the first user identifier does not exist on the local client, the two identifiers are not in a friend relationship; step (1.3) is executed to generate a third control, and the user can click it to send a friend adding request to the client corresponding to the first user identifier. This branching is summarized in the sketch below.
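The control names in this sketch simply mirror the first, second, and third controls described above; they are illustrative, not from the patent.

```python
def build_controls(first_id, second_id, friend_ids):
    """Decide which UI state to show after the tags are displayed."""
    if first_id == second_id:
        return {"tags_editable": True}           # viewing one's own tags
    if first_id in friend_ids:
        controls = ["add_tag", "send_message"]   # first + second control
    else:
        controls = ["add_tag", "add_friend"]     # first + third control
    return {"tags_editable": False, "controls": controls}
```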
As can be seen from the above, in the embodiments of the present invention the terminal acquires an image to be recognized and extracts target face information from it; sends the target face information to a server; receives a first user identifier obtained by the server by matching against the target face information, together with tag information associated with the first user identifier; parses the tag information; and displays the parsed tag information on the image to be recognized. Because this solution can rapidly recognize the target face information in an image, automatically acquire the tag information of the associated user, and display it on the image to be recognized, it greatly improves the convenience of user operation and the flexibility and diversity of information processing, compared with existing solutions that rely solely on a network relationship chain for information interaction.
example II,
The method described in the first embodiment is further illustrated by way of example.
in this embodiment, an example will be described in which the client is an instant messaging client and the information processing apparatus is specifically integrated in a terminal.
Referring to fig. 3, fig. 3 is another schematic flow chart of an information processing method according to an embodiment of the invention. The method flow can comprise the following steps:
In step 201, the terminal collects preset face information, acquires a second user identifier associated with the local client, binds the preset face information with the second user identifier, and sends them, bound together, to the server.
It should be noted that after the terminal downloads, decompresses, and installs the installation package of the instant messaging client, an instant messaging client is generated on the terminal; the client installed on the terminal is also called the local client. An account can be logged in on the local client, and the account information stored on the server for that account can include its social data, such as the user's personal profile, tag information, friend information, and roaming chat records. The tag information can be understood colloquially as the 'friend impression': it lets users give keyword evaluations of their friends, and of themselves. All the evaluations one user receives are gathered together and shown to other friends through the tag information.
The user can operate the terminal to open the local client and invoke the camera assembly to collect the preset face information, which may be the face information of the user himself or herself. After the preset face information is collected, the terminal obtains the account information logged in on the local client, extracts the second user identifier (such as an account) from it, binds the second user identifier with the preset face information, and sends them to the server.
For example, as shown in fig. 4, the user opens the local client through the terminal 10 and invokes the camera assembly to capture the current image; the user may also switch between the front and rear cameras by clicking the camera switching control 11. The terminal 10 feature-scans the current image, determines the preset face image in it, and highlights it with a rectangular frame 12. Image preprocessing such as gray-level correction and noise filtering is then applied to the preset face image, face feature point information is extracted from the preprocessed image, and this information is determined as the preset face information. The terminal 10 obtains the account information logged in on the local client, such as the second user identifier '123456', binds it with the preset face information, and sends the bound pair to the server. This accomplishes the operation of associating the user's account with the preset face information.
It will be understood that, after receiving the second user identifier '123456' and the preset face information, the server stores the binding relationship.
In step 202, the terminal acquires an image to be recognized, recognizes a target face image in it, extracts face feature point information from the target face image, and determines the face feature point information as the target face information.
The user operates the terminal to open the local client and invokes the camera assembly to obtain an image to be recognized. The terminal feature-scans the image, recognizes the target face image in it, applies image preprocessing such as gray-level correction and noise filtering to the target face image, extracts face feature point information from the preprocessed image, and determines that information as the target face information.
For example, as shown in fig. 4, the user opens the local client through the terminal 10 and invokes the camera assembly to capture the current image to be recognized; by clicking the camera switching control 11, the user can switch between the front and rear cameras, and can thereby obtain an image of an acquaintance or of a stranger. The terminal 10 feature-scans the current image to be recognized, determines the target face image in it, and highlights it with a rectangular frame 12. Image preprocessing such as gray-level correction and noise filtering is then applied to the target face image, face feature point information is extracted from the preprocessed image, and this information is determined as the target face information.
In step 203, the terminal sends the target face information to the server.
The terminal may send a matching request instruction to the server, carrying the target face information; it asks the server to obtain, by matching against the target face information, a first user identifier (such as an account) and the tag information associated with it, and to return both to the terminal.
In step 204, after receiving the target face information, the server obtains by matching a first user identifier and the tag information associated with it, and sends both to the terminal.
After receiving the target face information, the server performs feature point similarity matching between the face feature point information it carries and the face feature point information stored on the server, and finds the stored feature point information whose similarity to the received one exceeds a threshold. From the binding relationship it then obtains the first user identifier (such as an account) '1234567' bound to that feature point information and the tag information associated with '1234567', and sends both to the terminal. The tag information may include the number of tags and, for each tag, its display style, color, size, and font parameters. Optionally, the server may also send the profile information corresponding to the first user identifier to the terminal. A sketch of this matching step follows.
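Server-side matching can be sketched as a nearest-neighbour search over the stored feature points. Representing the feature points as flat vectors and the specific distance bound are assumptions for illustration.

```python
import numpy as np

def match_face(query_points, bindings, max_distance=0.6):
    """bindings: {user_id: stored feature vector}; returns best match or None."""
    query = np.asarray(query_points, dtype=float).ravel()
    best_id, best_dist = None, float("inf")
    for user_id, stored in bindings.items():
        dist = np.linalg.norm(query - np.asarray(stored, dtype=float).ravel())
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    # similarity above the threshold corresponds to distance below this bound
    return best_id if best_dist <= max_distance else None
```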
In step 205, after receiving the first user identifier and its associated tag information from the server, the terminal parses the tag information to obtain the number of tags and, for each tag, its display style parameter, display color parameter, display size parameter, and display font parameter.
That is, upon receiving the first user identifier and the associated tag information, the terminal parses the tag information to obtain the number of tags and the per-tag display parameters: the display style parameter is the display style of the tag, such as the pattern style of the tag frame; the display color parameter is the display color of the tag frame; the display size parameter is the display size of the tag frame; and the display font parameter is the font type used for the text inside the tag frame. In one embodiment, the tag information may be stored in table format, as shown in Table 1.
Table 1:
Number of tags: 3
Tag contents: 'goddess', 'lovers', 'how to be over'
Display style parameter: 'cool sea breeze'
Display color parameter: 'blue'
Display size parameter: (10, 5)
Display font parameter: 'regular script'
As can be seen from Table 1 above, the number of tags is 3 and the tag contents are 'goddess', 'lovers', and 'how to be over'; the display style parameter is the 'cool sea breeze' style, the display color parameter is 'blue', the display size parameter is '(10, 5)', where 10 indicates the length of the tag and 5 its width, and the display font parameter is 'regular script'. It should be noted that these examples do not limit the present invention, and the tag information may take other formats.
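In transit, the same record could be a simple structured object; the sketch below mirrors Table 1 as a Python dict, with illustrative field names.

```python
# Illustrative wire format for the tag information in Table 1.
tag_info = {
    "tag_count": 3,
    "tags": ["goddess", "lovers", "how to be over"],
    "display_style": "cool sea breeze",
    "display_color": "blue",
    "display_size": (10, 5),  # (length, width) of the tag frame
    "display_font": "regular script",
}
```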
In step 206, through the augmented reality control, the terminal analyzes the number of tags together with their display style, color, size, and font parameters, and determines the display position information of the tags from the analysis result.
The terminal loads the augmented reality control through the software development kit with augmented reality function integrated on it. Through this control it analyzes the number of tags and their display style, color, size, and font parameters, arranges the tags according to the analysis result, and thereby determines the display position information indicating where each tag is shown on the image to be recognized.
In step 207, the terminal performs initialization loading on the tag according to the display style parameter, the display color parameter, the display size parameter, and the display font parameter corresponding to the tag, so as to obtain the target tag.
In step 208, the terminal displays the target tag on the image to be recognized according to the display position information.
The terminal initializes and loads the tags according to the display style parameter 'cool sea breeze', the display color parameter 'blue', the display size parameter '(10, 5)', and the display font parameter 'regular script', obtaining the target tags.
The target tags are then displayed on the image to be recognized in one-to-one correspondence with the display position information.
For example, as shown in fig. 5, the terminal 10 initializes and loads the tags according to the display style parameter 'cool sea breeze', the display color parameter 'blue', the display size parameter '(10, 5)', and the display font parameter 'regular script', obtaining the target tags 13. The target tags 13 are then displayed on the image to be recognized on the terminal 10 in one-to-one correspondence with the display position information. Optionally, the profile information 15 ('sui, & bi-seater', insomnia, and too much worries to draw …') corresponding to the first user identifier '1234567' may be displayed in the top left corner of the face image.
In step 209, the terminal obtains the received first user identifier and obtains the second user identifier associated with the local client.
In step 210, the terminal judges whether the first user identifier is consistent with the second user identifier.
If the first user identifier is judged to be consistent with the second user identifier associated with the local client, the target face image in the image to be recognized is the face of the terminal user, and step 211 is executed. If they are inconsistent, the target face image is not the terminal user's own face; it may belong to a friend or a stranger of the terminal user, and step 212 is executed.
For example, the terminal obtains the received first user identifier '1234567' and the second user identifier '123456' associated with the local client, judges that '1234567' is inconsistent with '123456', and executes step 212.
In step 211, the displayed tag information is set to an editable state.
If the first user identifier is judged to be consistent with the second user identifier associated with the local client, the user is viewing the tag information and profile information of his or her own account. The displayed tag and profile information can then be set to an editable state, in which the user can delete tags or modify their display style, color, size, and font parameters.
In step 212, a first control is displayed.
If the first user identifier is judged to be inconsistent with the second user identifier associated with the local client, the user is viewing the tag information and profile information of another user's account, and a first control can be displayed for adding tag information for the client associated with the first user identifier.
For example, as shown in fig. 5 and fig. 6, when the terminal 10 in fig. 5 judges that the first user identifier '1234567' is inconsistent with the second user identifier '123456', it may generate a first control 14 ('label'). When the user clicks the first control 14, as shown in fig. 6, the terminal 10 generates a tag selection control 16 and may randomly offer hot tags such as 'mei', '2333', and 'kanji' for the user to choose from.
The server then locates the storage position of the first user identifier '1234567' in its stored data, adds the chosen tag ('maihanzi') to the tag information associated with '1234567', and sends first prompt information to the client corresponding to the first user identifier, prompting that user that tag information has been added. It also sends second prompt information to the local client, prompting that the tag was added successfully, and the newly added tag ('women Hanzi') is displayed on the image to be recognized.
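Server-side, adding the tag amounts to appending to the stored record and pushing two notifications, as in this sketch; the storage layout and the notify helper are assumptions for illustration.

```python
def add_tag(store, first_id, tag, notify):
    """Append a tag to a user's record and send both prompt messages."""
    record = store[first_id]              # locate the stored social data
    record["tags"].append(tag)
    notify(first_id, "tag information was added to you")  # first prompt
    return "tag added successfully"       # second prompt, to the local client
```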
Optionally, after the tag has been added successfully, the terminal 10 may judge whether the first user identifier '1234567' exists on the local client. If the terminal 10 judges that it does, '1234567' is a client friend of the second user identifier '123456', and a second control may be generated for sending a message to the client associated with '1234567'. When the user clicks the second control, the terminal 10 automatically jumps to a dialog box with the first user identifier '1234567'.
As shown in fig. 7, when the terminal 10 judges that the first user identifier '1234567' does not exist on the local client, meaning that '1234567' is not a client friend of the second user identifier '123456', a third control 17 may be generated for sending a friend adding request to the client associated with '1234567'.
Thus, when the user of the local client (the second user) happens to meet the first user in a real-world scene, he or she can open the local client, use the face information scanning function to obtain an image of the first user through the camera assembly, extract the first user's face information, use it to obtain the first user's identifier and tag information from the server, and display the obtained tag information over the image to be recognized for entertaining interaction. Compared with existing solutions in which information interaction depends entirely on a network relationship, this greatly improves the convenience of user operation and the flexibility and diversity of information processing.
Embodiment Three
The method described in Embodiment Two is further illustrated in detail below by way of example.
Referring to fig. 8, fig. 8 is a timing diagram illustrating an information processing method according to an embodiment of the invention. The method flow can comprise the following steps:
In step S1, the terminal acquires preset face information, acquires a second user identifier associated with the local client, and binds the preset face information and the second user identifier.
For example, a user opens the local client through a terminal and calls the camera assembly to acquire a current image, and the terminal performs feature scanning on the current image to determine a preset face image in the current image. Further, image preprocessing such as gray level correction and noise filtering is performed on the preset face image, face feature point information is then extracted from the preprocessed preset face image, and the face feature point information is determined as the preset face information. The terminal acquires the account information logged in on the local client, for example, a second user identifier (account number) "123456", and binds the second user identifier (account number) "123456" with the preset face information.
In step S2, the terminal sends the bound preset face information and the second user identifier to the server.
For example, the terminal sends the bound second user identifier (account number) "123456" together with the preset face information to the server. This realizes the operation of associating the user's account with the preset face information.
In step S3, the server stores the bound preset face information and the second user identifier.
For example, after receiving the second user identifier (account number) "123456" and the preset face information, the server stores the binding relationship in a storage space of the server.
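Steps S1 to S3 amount to a bind-then-store handshake between the terminal and the server. A minimal sketch follows, in which the feature vector standing in for the preset face information and the dictionary-backed FaceBindingStore are illustrative assumptions:

```python
# Sketch of steps S1-S3: the terminal binds the preset face information to
# the second user identifier, and the server stores the binding. The
# feature vector and dictionary-backed store are illustrative assumptions.

from typing import Dict, List, Tuple

def bind(face_features: List[float], user_id: str) -> Tuple[List[float], str]:
    """Terminal side (step S1): pack preset face information with the account."""
    return (face_features, user_id)

class FaceBindingStore:
    """Server side (step S3): persists the binding relationship."""

    def __init__(self) -> None:
        self._bindings: Dict[str, List[float]] = {}

    def save(self, binding: Tuple[List[float], str]) -> None:
        features, user_id = binding
        self._bindings[user_id] = features

store = FaceBindingStore()
store.save(bind([0.12, 0.87, 0.45], "123456"))  # step S2 is the transmission
```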
In step S4, the terminal acquires the image to be recognized, recognizes a target face image in the image to be recognized, extracts face feature point information in the target face image, and determines the face feature point information as target face information.
The user opens the local client through the terminal and calls the camera assembly to acquire the current image to be recognized. On this basis, the user can acquire an image to be recognized of an acquaintance or a stranger through the camera assembly on the terminal, and the terminal can perform feature scanning on the image to be recognized to determine a target face image in it. Further, image preprocessing such as gray level correction and noise filtering is performed on the target face image, face feature point information is then extracted from the preprocessed target face image, and the face feature point information is determined as the target face information.
In step S5, the terminal transmits the target face information to the server.
In step S6, the server obtains, through matching according to the target face information, the first user identifier and the tag information associated with the first user identifier.
For example, after receiving the target face information, the server performs feature point similarity matching between the face feature point information in the target face information and the face feature point information stored in the server, determines the stored face feature point information whose similarity value with the target face feature point information exceeds a threshold, and obtains, according to the binding relationship, a first user identifier (such as an account number) "1234567" bound to that face feature point information and the label information associated with the first user identifier "1234567". The label information may include the number of labels, and the display style parameters, display color parameters, display size parameters, and display font parameters corresponding to the labels.
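The matching in step S6 can be sketched as a similarity search over the stored feature vectors. Cosine similarity and the 0.9 threshold below are illustrative choices only; the embodiment requires merely that the similarity value exceed some threshold.

```python
# Sketch of step S6: match the received face feature points against the
# stored ones and return the bound first user identifier. Cosine
# similarity and the 0.9 threshold are illustrative assumptions.

import math
from typing import Dict, List, Optional

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_user(target: List[float], bindings: Dict[str, List[float]],
               threshold: float = 0.9) -> Optional[str]:
    """Return the user identifier whose stored features best exceed the threshold."""
    best_id, best_score = None, threshold
    for user_id, stored in bindings.items():
        score = cosine_similarity(target, stored)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id

bindings = {"1234567": [0.11, 0.88, 0.44]}
print(match_user([0.12, 0.87, 0.45], bindings))  # -> 1234567
```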
In step S7, the server transmits the first user identification and the tag information associated with the first user identification to the terminal.
For example, the server transmits the first user identifier "1234567" and the label information associated with the first user identifier to the terminal.
In step S8, the terminal parses the tag information, and displays the parsed tag information on the image to be recognized.
For example, the terminal first analyzes the layer structure of the image to be recognized and determines the display layer corresponding to the face image. Then, the parsed label information is floated on the display layer corresponding to the face image in an augmented reality manner.
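The layering in step S8 can be pictured as attaching labels to an overlay above the display layer of the face image. The Label and FaceLayer classes below are invented for illustration; the embodiment ties this behavior to an augmented reality control rather than to any particular graphics API.

```python
# Sketch of step S8: float the parsed labels on the display layer that
# corresponds to the face image. The Label and FaceLayer classes are
# illustrative; the embodiment ties this to an augmented reality control.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Label:
    text: str
    position: Tuple[int, int]  # display position information

@dataclass
class FaceLayer:
    face_box: Tuple[int, int, int, int]       # x, y, width, height
    overlay: List[Label] = field(default_factory=list)

def display_labels(layer: FaceLayer, labels: List[Label]) -> None:
    """Attach each parsed label to the overlay above the face layer."""
    layer.overlay.extend(labels)

layer = FaceLayer(face_box=(40, 30, 200, 260))
display_labels(layer, [Label("hanzi", (60, 10)), Label("2333", (150, 10))])
print([label.text for label in layer.overlay])
```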
Example IV
The method described in Example III is described in further detail below by way of example.
Referring to fig. 9, fig. 9 is another schematic flow chart of an information processing method according to an embodiment of the present invention.
It should be noted that the server in this method flow may be a server cluster formed by a plurality of servers, and the server cluster may include a face recognition server and a tag server. The face recognition server and the tag server may be in communication connection with each other, each of them may also be in communication connection with the terminal, and the communication connection may be established based on a wired network or a wireless network.
The face recognition server is mainly used for recognizing target face information. For example, a face database is stored in the face recognition server, and the face database is used for storing the corresponding relationship between the first user identifier and the face information.
The tag server is mainly used for acquiring the tag information of the first user identifier. For example, a tag database is stored in the tag server, and the tag database is used for storing the corresponding relationship between the first user identifier and the tag information.
Specifically, the method flow may include:
In step S101, the client acquires an image to be recognized through the camera, and extracts target face information from the image to be recognized.
Specifically, the client calls the camera assembly to collect the image to be recognized, recognizes a target face image in the image to be recognized, extracts face feature point information from the target face image, and determines the face feature point information as the target face information.
In step S102, the client detects whether the target face information meets a preset recognition condition.
The client detects whether the face feature point information in the target face information has a preset number of face feature points, and when the client detects that the face feature point information has the preset number of face feature points, it is determined that the target face information detected by the client meets a preset recognition condition, and step S103 is executed. When the client detects that the face feature point information does not have the preset number of face feature points, the client determines that the target face information detected by the client does not meet the preset identification condition, and returns to execute the step S101.
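The check in step S102 reduces to counting the detected feature points against a required minimum. In the sketch below, the preset number 68 is an illustrative assumption (68-point landmark sets are common, but the embodiment does not fix a number):

```python
# Sketch of step S102: the target face information meets the preset
# recognition condition only if enough feature points were detected.
# The preset number 68 is an illustrative assumption.

from typing import List, Tuple

PRESET_FEATURE_POINT_COUNT = 68

def meets_recognition_condition(points: List[Tuple[float, float]]) -> bool:
    return len(points) >= PRESET_FEATURE_POINT_COUNT

points = [(float(i), float(i)) for i in range(68)]
print(meets_recognition_condition(points))       # True  -> proceed to step S103
print(meets_recognition_condition(points[:40]))  # False -> return to step S101
```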
In step S103, the client detects whether there is cache information of the target face information.
When the client detects that the target face information meets the preset recognition condition, the client detects whether cache information corresponding to the target face information exists on the local terminal, where the cache information may include the association relationship between the target face information and the corresponding first user identifier. When the client detects that the cache information of the target face information exists, step S104 is executed. When the client detects that no cache information of the target face information exists, step S105 is executed.
In step S104, the client sends the first user identifier to the face recognition server.
When the client detects that the cache information of the target face information exists, the client can directly send the first user identification corresponding to the target face information in the cache information to the face recognition server. The face recognition server can directly acquire the first user identification, so that the matching process is omitted, and the resources of the server are saved.
In step S105, the client transmits the target face information to the face recognition server.
When the client detects that no cache information of the target face information exists, the client sends the target face information to the face recognition server, so that the face recognition server can match a corresponding first user identifier according to the target face information.
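Taken together, steps S103 to S105 form a cache shortcut, sketched below. Keying the local cache directly on the feature tuple is an illustrative simplification; a real client might quantize or hash the face feature point information.

```python
# Sketch of steps S103-S105: if the association between the target face
# information and a first user identifier is cached locally, send the
# identifier (step S104); otherwise send the face information itself
# (step S105). Keying the cache on the raw feature tuple is illustrative.

from typing import Dict, Optional, Tuple

face_cache: Dict[Tuple[float, ...], str] = {}

def resolve_user(features: Tuple[float, ...]) -> Optional[str]:
    """Return the cached first user identifier, or None on a cache miss."""
    return face_cache.get(features)

face_cache[(0.12, 0.87, 0.45)] = "1234567"
print(resolve_user((0.12, 0.87, 0.45)))  # hit: send the identifier
print(resolve_user((0.50, 0.10, 0.33)))  # miss: send the face information
```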
In step S106, the face recognition server matches the face database according to the target face information.
When the face recognition server receives target face information sent by the client, the face recognition server matches the face database according to the face feature point information in the target face information, and finds face information which is stored in the face database and is correspondingly matched with the face feature point in the target face information.
In step S107, the face recognition server obtains a corresponding first user identifier.
The face recognition server acquires the first user identifier corresponding to the face information according to the corresponding relationship between the first user identifier and the face information stored in the face database, or directly acquires the first user identifier sent by the client.
In step S108, the face recognition server sends the first user identifier to the tag server.
After the face recognition server acquires the first user identifier, the face recognition server sends the first user identifier to the tag server.
In step S109, the tag server matches the tag database according to the first user identifier, and acquires corresponding tag information.
After receiving the first user identifier, the tag server matches the tag database according to the first user identifier, and since the tag database stores the corresponding relationship between the first user identifier and the tag information, the tag server can acquire the tag information corresponding to the first user identifier.
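The lookup in step S109 can be sketched as a keyed query against the tag database. The dictionary-backed database and the record fields below are illustrative assumptions:

```python
# Sketch of step S109: the tag server looks up tag information by the
# first user identifier. The dictionary-backed tag database and the
# record fields are illustrative assumptions.

from typing import Dict, List, Optional

tag_database: Dict[str, List[dict]] = {
    "1234567": [
        {"text": "hanzi", "style": "rounded", "color": "#3A7",
         "size": 14, "font": "sans-serif"},
    ],
}

def get_tag_info(first_user_id: str) -> Optional[List[dict]]:
    return tag_database.get(first_user_id)

print(get_tag_info("1234567"))
```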
In step S110, the tag server transmits the tag information to the client.
In step S111, the client parses the tag information, and displays the parsed tag information on the image to be recognized.
After receiving the tag information, the client can load the augmented reality control and parse the tag information through it, and then display the parsed tag information on the image to be recognized in an augmented reality form through the augmented reality control.
In step S112, the client detects whether tag information is added.
After the client displays the tag information on the image to be recognized, the user can view the tag information and can add tags to it. When the client detects that tag information is added, step S113 is performed. When the client does not detect the addition of tag information, no operation is performed.
In step S113, the client acquires the tag added by the user.
When the client detects that the label information is added, the client acquires the label added by the user.
In step S114, the client transmits the tag added by the user to the tag server.
The client sends the tag added by the user to the tag server so that the tag server performs the updating operation.
In step S115, the tag server updates the tag into the tag database and notifies the client corresponding to the first user identifier.
After receiving the tag added by the user, the tag server updates the tag into the tag information corresponding to the first user identifier, and at the same time sends a notification message to the client corresponding to the first user identifier to notify that another user has performed a tag adding operation on it.
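Steps S114 to S116 can be sketched as an update-then-notify routine on the tag server. The storage layout and the notify helper below are illustrative assumptions:

```python
# Sketch of steps S114-S116: update the tag database with the user-added
# tag, notify the client of the first user identifier, and return the
# updated tag information. Storage layout and notify helper are assumed.

from typing import Dict, List

tag_database: Dict[str, List[str]] = {"1234567": ["hanzi"]}

def notify(user_id: str, message: str) -> None:
    print(f"notify {user_id}: {message}")

def update_tag(first_user_id: str, new_tag: str) -> List[str]:
    tags = tag_database.setdefault(first_user_id, [])
    tags.append(new_tag)                    # step S115: update the database
    notify(first_user_id, f"another user added the tag '{new_tag}'")
    return tags                             # step S116: updated tag information

print(update_tag("1234567", "2333"))
```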
In step S116, the tag server obtains the updated tag information corresponding to the first user identifier.
After the tag server updates the tag into the tag information corresponding to the first user identifier, the tag server correspondingly obtains the updated tag information corresponding to the first user identifier.
In step S117, the tag server transmits the updated tag information to the client.
The tag server sends the updated tag information to the client so that the latest tag information is displayed synchronously on the client.
In step S118, the client parses the updated tag information, and displays the parsed tag information on the image to be recognized.
After receiving the updated tag information, the client can load the augmented reality control and re-parse the updated tag information through it, and then display the parsed tag information on the image to be recognized in an augmented reality form through the augmented reality control, so as to realize the synchronous display operation.
Example V
In order to better implement the information processing method provided by the embodiments of the present invention, an embodiment of the present invention further provides an apparatus based on the information processing method. The terms have the same meanings as those in the above information processing method, and for implementation details, reference may be made to the description in the method embodiments.
Referring to fig. 10a, fig. 10a is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention, wherein the information processing apparatus may include an extracting unit 301, a transceiving unit 302, an analyzing unit 303, a display unit 304, and the like.
The extracting unit 301 is configured to acquire an image to be recognized and extract target face information from the image to be recognized.
The extraction unit 301 may obtain the image to be recognized as follows. A certain client on the terminal, such as an instant messaging client, is opened, and the user identifier and password are input; the client then enters the display main interface corresponding to the user identifier, where the display main interface is the first interface displayed after logging in with the user identifier and password. The main interface includes a shortcut operation control, which is a shortcut entrance used to trigger acquisition of the image to be recognized. When it is detected that the user clicks the shortcut operation control, the camera assembly is called to acquire the image to be recognized, and the acquired image is displayed on the display screen. Optionally, the interface displaying the image to be recognized may further include a camera switching control and an album control. The camera switching control is a shortcut entry for switching between the front camera and the rear camera; specifically, the user may switch between the front camera and the rear camera by clicking the camera switching control to obtain the image to be recognized. The album control is a shortcut entry for calling the album on the terminal; specifically, the user may call the album by clicking the album control and then select a picture in the album as the image to be recognized.
in some embodiments, as shown in fig. 10b, the extraction unit 301 may include:
The interception determination subunit 3011 is configured to obtain an image to be recognized, recognize a target face image in the image to be recognized, intercept the target face image, and determine the target face image as target face information; or
The extraction determination subunit 3012 is configured to acquire an image to be recognized, recognize a target face image in the image to be recognized, extract face feature point information in the target face image, and determine the face feature point information as the target face information.
The pattern features contained in a face image are rich, such as histogram features, color features, template features, structural features, and Haar features. The interception determination subunit 3011 or the extraction determination subunit 3012 may perform feature scanning on the image to be recognized and determine the target face image in it. Optionally, the target face image may be highlighted by a rectangular frame on the image to be recognized, or by a circular frame.
In some embodiments, the operations of acquiring the image to be recognized and recognizing the target face image in the interception determination subunit 3011 or the extraction determination subunit 3012 may specifically include:
Analyzing an image to be recognized and determining a face image on the image to be recognized;
judging whether the number of the face images is multiple or not;
When the number of the face images is judged to be multiple, prompt information is generated and used for prompting a user to select a target face image and receiving the target face image selected by the user according to the prompt information;
And when the number of the face images is judged not to be multiple, determining the face images as target face images.
In one embodiment, the interception determination subunit 3011 may intercept the target face image and use the intercepted target face image as the target face information. In another embodiment, the extraction determination subunit 3012 may perform image preprocessing such as gray scale correction and noise filtering on the target face image and extract the face feature point information from the processed target face image. The face feature points may include geometric descriptions of local constituent points such as the eyes, nose, mouth, and chin, and the face feature point information is determined as the target face information.
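The two determination modes of the subunits 3011 and 3012, together with the target-face selection described above, can be sketched as follows; face detection itself is stubbed out, and all names are illustrative:

```python
# Sketch of the two determination modes: with several detected faces the
# user picks the target (prompt information); interception mode then crops
# the face region. Detection is stubbed out and all names are illustrative.

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height

def choose_target_face(face_boxes: List[Box], user_choice: int = 0) -> Box:
    if len(face_boxes) > 1:
        # Prompt information would ask the user to select the target face.
        return face_boxes[user_choice]
    return face_boxes[0]

def intercept(image: List[List[int]], box: Box) -> List[List[int]]:
    """Interception mode: the cropped face image becomes the target face information."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

image = [list(range(8)) for _ in range(8)]
target_box = choose_target_face([(1, 1, 3, 3), (4, 4, 3, 3)], user_choice=1)
print(intercept(image, target_box))
```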
The transceiving unit 302 is configured to send the target face information to the server, and receive a first user identifier obtained by the server according to the target face information in a matching manner and tag information associated with the first user identifier.
After the extracting unit 301 extracts the target face information, the transceiving unit 302 sends the target face information to the server, and after the server receives the target face information, the server performs matching according to the target face information to obtain a first user identifier (such as an account) bound to the target face information correspondingly and tag information associated with the first user identifier, and returns the first user identifier and the tag information associated with the first user identifier to the transceiving unit 302.
The parsing unit 303 is configured to parse the tag information.
In some embodiments, as shown in fig. 10c, the parsing unit 303 may include:
An analyzing subunit 3031, configured to analyze the tag information, and determine the number of tags and style configuration information corresponding to the tags;
And the parsing subunit 3032 is configured to parse the number of tags and the style configuration information corresponding to the tags through the augmented reality control, and determine the display position information corresponding to the tags.
The analyzing subunit 3031 analyzes the tag information and determines the number of tags corresponding to the tag information and the style configuration information corresponding to each tag, where the style configuration information is the parameter information used when the tags are displayed, such as the tag display size, font, and display style.
In some embodiments, the analyzing subunit 3031 may be specifically configured to: and analyzing the label information to obtain the number of labels corresponding to the label information, and obtain the display style parameter, the display color parameter, the display size parameter, the display font parameter and the like corresponding to the labels.
In some embodiments, the parsing subunit 3032 may be specifically configured to: and analyzing the number of the labels and analyzing the display style parameters, the display color parameters, the display size parameters and the display font parameters corresponding to the labels through the augmented reality control, and determining the display position information corresponding to the labels according to the analysis result.
Since the tag is ultimately to be displayed on the image to be recognized, the parsing subunit 3032 needs to arrange the display positions of the tags. Further, the parsing subunit 3032 parses the number of tags and the display style parameter, display color parameter, display size parameter, and display font parameter corresponding to the tags, and determines the display position information corresponding to the tags according to the parsing result. The greater the number of tags, the denser the tag display positions; the smaller the number of tags, the looser the tag display positions.
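The position arrangement performed by the parsing subunit 3032 can be sketched as a layout rule in which spacing tightens as the number of tags grows. The circular layout below is an illustrative assumption; the embodiment only requires that more tags yield denser positions.

```python
# Sketch of the position arrangement: labels are spread on a ring around
# the face, so spacing tightens as the number of labels grows. The
# circular layout and radius value are illustrative assumptions.

import math
from typing import List, Tuple

def arrange_labels(n_labels: int, center: Tuple[float, float] = (160.0, 120.0),
                   radius: float = 90.0) -> List[Tuple[float, float]]:
    """Return one display position per label; denser when n_labels is larger."""
    return [(center[0] + radius * math.cos(2.0 * math.pi * i / n_labels),
             center[1] + radius * math.sin(2.0 * math.pi * i / n_labels))
            for i in range(n_labels)]

print(arrange_labels(3))  # looser spacing
print(arrange_labels(8))  # denser spacing
```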
The display unit 304 is configured to display the parsed label information on the image to be recognized.
The display unit 304 displays the parsed label information on the image to be recognized in an augmented reality form through the augmented reality control. That is, the display unit 304 first analyzes the layer structure of the image to be recognized and determines the display layer corresponding to the face image, and then floats the parsed label information on that display layer in an augmented reality manner.
In some embodiments, the display unit 304 may be specifically configured to: and initializing and loading the label according to the display style parameter, the display color parameter, the display size parameter and the display font parameter corresponding to the label to obtain a target label, and displaying the target label on the image to be identified according to the display position information.
The display unit 304 performs initialization loading on the tag according to the pattern style of the tag frame, the display color of the tag frame, the display size of the tag frame, and the display font of the text in the tag frame, so as to obtain the target tag corresponding to the first user identifier. Then, the target tag is floated on the image to be recognized in an augmented reality manner according to the display position information.
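The initialization loading performed by the display unit 304 can be sketched as constructing a target label from its display parameters. The TargetLabel dataclass and parameter names below are illustrative assumptions:

```python
# Sketch of the initialization loading: a target label is built from the
# display style, color, size, and font parameters, then anchored at its
# display position. The TargetLabel dataclass is an illustrative assumption.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class TargetLabel:
    text: str
    style: str                 # display style parameter (tag frame pattern)
    color: str                 # display color parameter
    size: int                  # display size parameter
    font: str                  # display font parameter
    position: Tuple[int, int]  # display position information

def initialize_label(text: str, params: dict, position: Tuple[int, int]) -> TargetLabel:
    return TargetLabel(text, params["style"], params["color"],
                       params["size"], params["font"], position)

params = {"style": "rounded", "color": "#3A7", "size": 14, "font": "sans-serif"}
print(initialize_label("hanzi", params, (60, 10)))
```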
In some embodiments, as shown in fig. 10d, the information processing apparatus may further include a face information acquisition unit 305, an acquisition unit 306, a binding transmission unit 307, an identifier acquisition unit 308, a first judgment unit 309, an editing unit 310, a first control display unit 311, a second judgment unit 312, a second control generation unit 313, and a third control generation unit 314.
A face information acquisition unit 305, configured to collect preset face information.
An obtaining unit 306, configured to obtain a second user identifier associated with the local client.
a binding sending unit 307, configured to bind the preset face information and the second user identifier, and send the bound preset face information and the second user identifier to the server.
Specifically, the user may perform a pre-binding operation by opening the local client on the terminal, that is, selecting a preset image through the face information acquisition unit 305 and extracting face information from it to obtain the preset face information. It should be noted that the preset face information may be the face information of the user himself.
Then, the obtaining unit 306 obtains the second user identifier associated with the local client, that is, the account information currently logged in on the local client (that is, the account information of the user himself), and binds the second user identifier with the preset face information; the binding sending unit 307 packages the bound preset face information and second user identifier and sends them to the server, so that the server stores the binding relationship between the preset face information and the second user identifier.
An identifier obtaining unit 308, configured to obtain the received first user identifier and obtain a second user identifier associated with the local client.
a first judging unit 309, configured to judge whether the first subscriber identity is consistent with the second subscriber identity.
And an editing unit 310 configured to set the displayed tag information to an editable state when it is determined that the first user identifier is consistent with the second user identifier.
And a first control display unit 311, configured to display a first control when it is determined that the first user identifier is inconsistent with the second user identifier, where the first control is used to add label information to a client associated with the first user identifier.
After the display unit 304 displays the parsed tag information on the image to be recognized in an augmented reality manner, the user can quickly gain a preliminary understanding of the first user through the displayed tags, which improves the social efficiency of the user.
Then, the identifier obtaining unit 308 obtains the received first user identifier and obtains the second user identifier associated with (logged in on) the local client, and the first judging unit 309 judges whether the first user identifier is consistent with the second user identifier. When the editing unit 310 determines that the first user identifier is consistent with the second user identifier, this indicates that the current user is viewing the tag information corresponding to his own account information, and the displayed tag information is set to an editable state; that is, the user may delete a tag, or modify the display style parameter, display color parameter, display size parameter, and display font parameter corresponding to a tag.
Further, when the first control display unit 311 determines that the first user identifier is inconsistent with the second user identifier, this indicates that the current user is viewing the label information corresponding to another user's account information, and a first control is displayed. The first control is displayed at the bottom of the image to be recognized and may show the text "label". When the user clicks the first control, a label selection option pops up, and after the user selects a corresponding label, the selected label can be added to the client associated with the first user identifier.
the second determining unit 312 is configured to determine whether the first subscriber identity exists on the local client.
And a second control generating unit 313, configured to generate a second control when it is determined that the first user identifier exists on the local client, where the second control is used to send a message to a client associated with the first user identifier.
A third control generating unit 314, configured to generate a third control when it is determined that the first user identifier does not exist on the local client, where the third control is used to send a friend adding request to the client associated with the first user identifier.
When the second control generating unit 313 determines that the first user identifier exists on the local client, this indicates that the first user identifier is in a friend relationship with the second user identifier associated with the local client, and a second control is generated; the user can send a message to the client corresponding to the first user identifier by clicking the second control. When the third control generating unit 314 determines that the first user identifier does not exist on the local client, this indicates that the first user identifier is not in a friend relationship with the second user identifier associated with the local client, and a third control is generated; the user can send a friend adding request to the client corresponding to the first user identifier by clicking the third control.
The specific implementation of each unit can refer to the previous embodiment, and is not described herein again.
As can be seen from the above, in the embodiment of the present invention, the extraction unit 301 obtains the image to be recognized and extracts the target face information from it; the transceiving unit 302 sends the target face information to the server and receives the first user identifier obtained by the server through matching according to the target face information, together with the label information associated with the first user identifier; the parsing unit 303 parses the tag information; and the display unit 304 displays the parsed tag information on the image to be recognized. The target face information on an image can thus be rapidly identified, the label information of the user associated with the target face information is automatically acquired, and the label information is displayed on the image to be recognized. Compared with the existing scheme of information interaction relying only on a network relationship, this can greatly improve the convenience of user operation and improve the flexibility and diversity of information processing.
Example VI
Accordingly, referring to fig. 11, an information processing system according to an embodiment of the present invention includes an information processing apparatus and a server, where the information processing apparatus may be integrated in a terminal and may be any of the information processing apparatuses provided in the embodiments of the present invention; for details, reference may be made to Example V. The following takes the case in which the information processing apparatus is integrated in a terminal as an example:
The terminal is used for acquiring an image to be recognized, extracting target face information from the image to be recognized, sending the target face information to the server, receiving a first user identifier obtained by the server according to the target face information in a matching mode and label information associated with the first user identifier, analyzing the label information, and displaying the analyzed label information on the image to be recognized.
The server is used for receiving the target face information sent by the terminal, obtaining, through matching according to the target face information, a first user identifier corresponding to the target face information and label information associated with the first user identifier, and sending the first user identifier and the label information associated with the first user identifier to the terminal.
For example, after receiving the target face information, the server performs feature point similarity matching between the face feature point information in the target face information and the face feature point information stored in the server, determines the stored face feature point information whose similarity value with the target face feature point information exceeds a threshold, and obtains, according to the binding relationship, the first user identifier (such as an account number) bound to that face feature point information and the label information associated with the first user identifier. The label information may include the number of labels and the display style parameters, display color parameters, display size parameters, and display font parameters corresponding to the labels. The server then sends the first user identifier and the associated label information to the terminal. Optionally, the server may also send the personal data information corresponding to the first user identifier to the terminal.
In some embodiments, the server may be further operable to: and receiving the bound preset face information and the second user identification sent by the terminal.
After receiving the bound second user identifier and preset face information, the server stores the binding relationship in a specific storage space of the server.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Since the information processing system may include any information processing apparatus provided in the embodiments of the present invention, it can achieve the beneficial effects that can be achieved by any of those apparatuses; for details, refer to the foregoing embodiments, which are not repeated herein.
Example VII
an embodiment of the present invention further provides a terminal, as shown in fig. 12, the terminal may include a Radio Frequency (RF) circuit 601, a memory 602 including one or more computer-readable storage media, an input unit 603, a display unit 604, a sensor 605, an audio circuit 606, a Wireless Fidelity (WiFi) module 607, a processor 608 including one or more processing cores, and a power supply 609. Those skilled in the art will appreciate that the terminal structure shown in fig. 12 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
The RF circuit 601 may be used for receiving and transmitting signals during a message transmission or communication process, and in particular, for receiving downlink messages from a base station and then processing the received downlink messages by one or more processors 608; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuit 601 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 601 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
the memory 602 may be used to store software programs and modules, and the processor 608 executes various functional applications and information processing by operating the software programs and modules stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal, etc. Further, the memory 602 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 602 may also include a memory controller to provide the processor 608 and the input unit 603 access to the memory 602.
The input unit 603 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, in one particular embodiment, input unit 603 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 608, and can receive and execute commands sent by the processor 608. In addition, touch sensitive surfaces may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 603 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 604 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 604 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 608 to determine the type of touch event, and the processor 608 then provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 12 the touch sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch sensitive surface may be integrated with the display panel to implement input and output functions.
the terminal may also include at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
Audio circuitry 606, a speaker, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 606 may transmit the electrical signal converted from the received audio data to a speaker, and convert the electrical signal into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electric signal, which is received by the audio circuit 606 and converted into audio data, which is then processed by the audio data output processor 608, and then transmitted to, for example, another terminal via the RF circuit 601, or the audio data is output to the memory 602 for further processing. The audio circuit 606 may also include an earbud jack to provide communication of peripheral headphones with the terminal.
WiFi belongs to short-distance wireless transmission technology, and the terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 607, and provides wireless broadband internet access for the user. Although fig. 12 shows the WiFi module 607, it is understood that it does not belong to the essential constitution of the terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
the processor 608 is a control center of the terminal, connects various parts of the entire handset using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 602 and calling data stored in the memory 602, thereby performing overall monitoring of the handset. Optionally, processor 608 may include one or more processing cores; preferably, the processor 608 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 608.
The terminal also includes a power supply 609 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 608 via a power management system that may be used to manage charging, discharging, and power consumption. The power supply 609 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the terminal may further include a camera, a bluetooth module, and the like, which will not be described herein. Specifically, in this embodiment, the processor 608 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 602 according to the following instructions, and the processor 608 runs the application programs stored in the memory 602, thereby implementing various functions:
Acquiring an image to be recognized, and extracting target face information from the image to be recognized; sending the target face information to a server, and receiving a first user identifier obtained by the server according to the target face information in a matching mode and label information associated with the first user identifier; analyzing the label information; and displaying the analyzed label information on the image to be identified.
in the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the information processing method, and are not described herein again.
As can be seen from the above, the terminal according to the embodiment of the present invention may acquire an image to be recognized and extract target face information from it; send the target face information to the server and receive the first user identifier obtained by the server through matching according to the target face information, together with the label information associated with the first user identifier; parse the label information; and display the parsed label information on the image to be recognized. Because the scheme can rapidly identify the target face information on an image, automatically acquire the label information of the user associated with the target face information, and display the label information on the image to be recognized, it can greatly improve the convenience of user operation and the flexibility and diversity of information processing compared with the existing scheme that relies only on a network relationship for information interaction.
Example VIII
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present invention provides a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any one of the information processing methods provided by the embodiments of the present invention. For example, the instructions may perform the steps of:
Acquiring an image to be recognized, and extracting target face information from the image to be recognized; sending the target face information to a server, and receiving a first user identifier obtained by the server according to the target face information in a matching mode and label information associated with the first user identifier; analyzing the label information; and displaying the analyzed label information on the image to be identified.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any information processing method provided in the embodiment of the present invention, the beneficial effects that can be achieved by any information processing method provided in the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The foregoing is a detailed description of an information processing method, apparatus, storage medium, and terminal according to embodiments of the present invention. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the above embodiments are only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (15)

1. An information processing method, characterized in that the method comprises:
Acquiring an image to be recognized, and extracting target face information from the image to be recognized;
Sending the target face information to a server, and receiving a first user identifier obtained by the server according to the target face information in a matching mode and label information associated with the first user identifier;
Analyzing the label information;
and displaying the analyzed label information on the image to be identified.
2. The processing method according to claim 1, wherein the step of parsing the tag information comprises:
Analyzing the label information, and determining the number of labels and the style configuration information corresponding to the labels;
And analyzing the number of the labels and the style configuration information corresponding to the labels through the augmented reality control, and determining the display position information corresponding to the labels.
3. The processing method according to claim 2, wherein the step of analyzing the tag information and determining the number of tags and the style configuration information corresponding to the tags comprises:
Analyzing the label information to obtain the number of labels corresponding to the label information, and obtain a display style parameter, a display color parameter, a display size parameter and a display font parameter corresponding to the labels;
The step of analyzing the number of the labels and the style configuration information corresponding to the labels through the augmented reality control and determining the display position information corresponding to the labels comprises the following steps: and analyzing the number of the labels and analyzing the display style parameter, the display color parameter, the display size parameter and the display font parameter corresponding to the labels through the augmented reality control, and determining the display position information corresponding to the labels according to the analysis result.
4. the processing method according to claim 3, wherein the step of displaying the parsed label information on the image to be recognized comprises:
Initializing and loading the label according to the display style parameter, the display color parameter, the display size parameter and the display font parameter corresponding to the label to obtain a target label;
and displaying the target label on the image to be identified according to the display position information.
5. The processing method according to any one of claims 1 to 4, wherein the step of acquiring the image to be recognized and extracting the target face information from the image to be recognized comprises:
acquiring an image to be recognized, recognizing a target face image in the image to be recognized, intercepting the target face image, and determining the target face image as target face information; or
the method comprises the steps of obtaining an image to be recognized, recognizing a target face image in the image to be recognized, extracting face characteristic point information in the target face image, and determining the face characteristic point information as target face information.
6. The processing method according to claim 5, wherein the step of acquiring the image to be recognized and recognizing the target face image in the image to be recognized comprises:
Analyzing an image to be recognized and determining a face image on the image to be recognized;
Judging whether the number of the face images is multiple or not;
When the number of the face images is judged to be multiple, prompt information is generated, the prompt information is used for prompting a user to select a target face image, and the target face image selected by the user according to the prompt information is received;
And when the number of the face images is judged not to be multiple, determining the face images as target face images.
7. The processing method according to any one of claims 1 to 4, wherein before the step of acquiring the image to be recognized and extracting the target face information from the image to be recognized, the method further comprises:
Collecting preset face information;
Acquiring a second user identifier associated with the local client;
And binding the preset face information and the second user identification, and sending the bound preset face information and the second user identification to a server.
8. the processing method according to claim 7, wherein after the step of displaying the parsed label information on the image to be recognized, further comprising:
Acquiring a received first user identifier and acquiring a second user identifier associated with a local client;
Judging whether the first user identification is consistent with the second user identification;
when the first user identification is judged to be consistent with the second user identification, the displayed label information is set to be in an editable state;
And when the first user identification is judged to be inconsistent with the second user identification, displaying a first control, wherein the first control is used for adding label information to the client associated with the first user identification.
9. the processing method according to claim 8, wherein after the step of displaying the first control, further comprising:
Judging whether the first user identification exists on the local client;
When the first user identification exists on the local client, generating a second control, wherein the second control is used for sending a message to the client associated with the first user identification;
and when the first user identification does not exist on the local client, generating a third control, wherein the third control is used for sending a friend adding request to the client associated with the first user identification.
10. An information processing apparatus characterized by comprising:
The extraction unit is used for acquiring an image to be recognized and extracting target face information from the image to be recognized;
The receiving and sending unit is used for sending the target face information to a server and receiving a first user identifier obtained by the server according to the target face information and label information associated with the first user identifier;
The analyzing unit is used for analyzing the label information;
and the display unit is used for displaying the analyzed label information on the image to be identified.
11. The information processing apparatus according to claim 10, wherein the parsing unit includes:
The analysis subunit is used for analyzing the label information and determining the number of the labels and the style configuration information corresponding to the labels;
And the analyzing subunit is used for analyzing the number of the labels and the style configuration information corresponding to the labels through the augmented reality control and determining the display position information corresponding to the labels.
12. The information processing apparatus according to claim 11, wherein the analysis subunit is specifically configured to:
analyzing the label information to obtain the number of labels corresponding to the label information, and obtain a display style parameter, a display color parameter, a display size parameter and a display font parameter corresponding to the labels;
The parsing subunit is specifically configured to: and analyzing the number of the labels and analyzing the display style parameter, the display color parameter, the display size parameter and the display font parameter corresponding to the labels through the augmented reality control, and determining the display position information corresponding to the labels according to the analysis result.
13. The information processing apparatus according to claim 12, wherein the display unit is specifically configured to:
Initializing and loading the label according to the display style parameter, the display color parameter, the display size parameter and the display font parameter corresponding to the label to obtain a target label;
And displaying the target label on the image to be identified according to the display position information.
14. A storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the steps of the information processing method according to any one of claims 1 to 9.
15. An information processing system, the system comprising: a terminal and a server;
the terminal includes the information processing apparatus according to any one of claims 10 to 13;
the server is configured to: the method comprises the steps of receiving target face information sent by a terminal, matching according to the target face information to obtain a first user identification corresponding to the target face information and label information associated with the first user identification, and sending the first user identification and the label information associated with the first user identification to the terminal.
CN201810272957.XA 2018-03-29 2018-03-29 Information processing method, device, storage medium and system Pending CN110555171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810272957.XA CN110555171A (en) 2018-03-29 2018-03-29 Information processing method, device, storage medium and system

Publications (1)

Publication Number Publication Date
CN110555171A (en) 2019-12-10

Family

ID=68733667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810272957.XA Pending CN110555171A (en) 2018-03-29 2018-03-29 Information processing method, device, storage medium and system

Country Status (1)

Country Link
CN (1) CN110555171A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102355534A (en) * 2011-11-01 2012-02-15 宇龙计算机通信科技(深圳)有限公司 Mobile terminal and contact information recommendation method
CN103513890A (en) * 2012-06-28 2014-01-15 腾讯科技(深圳)有限公司 Method and device for interaction based on image and server
CN103076879A (en) * 2012-12-28 2013-05-01 中兴通讯股份有限公司 Multimedia interaction method and device based on face information, and terminal
CN106484737A (en) * 2015-09-01 2017-03-08 腾讯科技(深圳)有限公司 A kind of network social intercourse method and network social intercourse device
CN106559317A (en) * 2015-09-30 2017-04-05 北京奇虎科技有限公司 A kind of method and apparatus that account information is sent based on instant messaging
CN107239725A (en) * 2016-03-29 2017-10-10 阿里巴巴集团控股有限公司 A kind of information displaying method, apparatus and system
CN107704626A (en) * 2017-10-30 2018-02-16 北京萌哥玛丽科技有限公司 A kind of control method and control device that user is searched based on recognition of face

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177499A (en) * 2019-12-27 2020-05-19 腾讯科技(深圳)有限公司 Label adding method and device and computer readable storage medium
CN111177499B (en) * 2019-12-27 2024-02-09 腾讯科技(深圳)有限公司 Label adding method and device and computer readable storage medium
CN111243023A (en) * 2020-01-14 2020-06-05 于金明 Quality control method and device based on virtual intelligent medical platform
CN111243023B (en) * 2020-01-14 2024-03-29 上海联影医疗科技股份有限公司 Quality control method and device based on virtual intelligent medical platform
CN112365281A (en) * 2020-10-28 2021-02-12 国网冀北电力有限公司计量中心 Power customer service demand analysis method and device
CN113038266A (en) * 2021-03-05 2021-06-25 青岛智动精工电子有限公司 Image processing method and device and electronic equipment
CN113038266B (en) * 2021-03-05 2023-02-24 青岛智动精工电子有限公司 Image processing method and device and electronic equipment
TWI810104B (en) * 2022-11-01 2023-07-21 南開科技大學 Interactive digital photo frame system with communication function and method thereof

Similar Documents

Publication Publication Date Title
CN108551519B (en) Information processing method, device, storage medium and system
CN110555171A (en) Information processing method, device, storage medium and system
CN108062390B (en) Method and device for recommending user and readable storage medium
CN108156508B (en) Barrage information processing method and device, mobile terminal, server and system
CN109003194B (en) Comment sharing method, terminal and storage medium
CN108022274B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN110674662A (en) Scanning method and terminal equipment
CN108628985B (en) Photo album processing method and mobile terminal
CN109388456B (en) Head portrait selection method and mobile terminal
CN109508399A (en) A kind of facial expression image processing method, mobile terminal
CN109495638B (en) Information display method and terminal
CN107743108B (en) Method and device for identifying medium access control address
CN108898040A (en) A kind of recognition methods and mobile terminal
CN107273024B (en) A kind of method and apparatus realized using data processing
CN109166164B (en) Expression picture generation method and terminal
CN108765522B (en) Dynamic image generation method and mobile terminal
CN108121583B (en) Screen capturing method and related product
CN110213444A (en) Display methods, device, mobile terminal and the storage medium of mobile terminal message
CN111931155A (en) Verification code input method, verification code input equipment and storage medium
CN109670105B (en) Searching method and mobile terminal
CN110825475A (en) Input method and electronic equipment
CN108595104B (en) File processing method and terminal
CN114691277A (en) Application program processing method, intelligent terminal and storage medium
CN114140655A (en) Image classification method and device, storage medium and electronic equipment
CN108255389B (en) Image editing method, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40018732
Country of ref document: HK

SE01 Entry into force of request for substantive examination