CN108777806B - User identity recognition method, device and storage medium - Google Patents

User identity recognition method, device and storage medium

Info

Publication number
CN108777806B
Authority
CN
China
Prior art keywords
image
identity
text
sample
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810542908.3A
Other languages
Chinese (zh)
Other versions
CN108777806A (en)
Inventor
张云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810542908.3A
Publication of CN108777806A
Application granted
Publication of CN108777806B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/254 Management at additional data server, e.g. shopping server, rights management server
    • H04N21/2541 Rights Management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866 Management of end-user data
    • H04N21/25875 Management of end-user data involving end-user authentication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781 Games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Graphics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of the application disclose a user identity recognition method, apparatus, and storage medium. The method includes: acquiring, from an image associated with a first application, text identity information and image identity information used by a target user in the first application; matching the text identity information against a plurality of sample text identity information associated with a second application to obtain a text matching result, and matching the image identity information against a plurality of sample image identity information associated with the second application to obtain an image matching result; determining a target sample identity from a sample identity set corresponding to the second application based on the text matching result and the image matching result; and taking the target sample identity as the identity used by the target user in the second application. With this scheme, the identity a user uses in the second application is linked to the identity the user uses in the first application through combined image-and-text retrieval based on the identity information used in the first application, thereby realizing cross-application-platform identity linkage and improving identity recognition speed.

Description

User identity recognition method, device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for identifying a user identity, and a storage medium.
Background
Live webcasting is a new mode of social networking, mainly divided into live game streaming, live show streaming, and the like. As webcasting becomes more popular, more and more users register accounts on webcast platforms and, through these accounts, plan, record, interact with audiences, and host internet programs or activities; such users are generally called anchors.
The webcast platform has also become a brand-new social medium: audiences on different communication platforms can watch live content simultaneously over the network. For example, a viewer can enter an anchor's name, channel number, or room number on the website of the live platform where the anchor is located, and search for the live room to watch live games and live shows.
Disclosure of Invention
The embodiment of the application provides a user identity identification method, a user identity identification device and a storage medium, and identity linkage of cross-application platforms can be realized.
The embodiment of the application provides a user identity identification method, which comprises the following steps:
acquiring text identity information and image identity information used by a target user in a first application from an image associated with the first application;
matching the text identity information with a plurality of sample text identity information associated with a second application to obtain a text matching result, and matching the image identity information with a plurality of sample image identity information associated with the second application to obtain an image matching result;
determining a target sample identity from a sample identity set corresponding to a second application based on the text matching result and the image matching result;
and taking the target sample identity as the identity used by the target user in the second application.
Correspondingly, the embodiment of the present application further provides a user identity recognition apparatus, including:
the acquisition unit is used for acquiring text identity information and image identity information used by a target user in the first application from the image associated with the first application;
the matching unit is used for matching the text identity information with a plurality of sample text identity information associated with the second application to obtain a text matching result, and matching the image identity information with a plurality of sample image identity information associated with the second application to obtain an image matching result;
the determining unit is used for determining the identity of a target sample from a sample identity set corresponding to a second application based on the text matching result and the image matching result;
a processing unit for taking the target sample identity as an identity used by the target user in a second application.
Correspondingly, an embodiment of the present application further provides a storage medium, where the storage medium stores instructions, and the instructions, when executed by a processor, implement the user identity identification method provided in any embodiment of the present application.
In the embodiments of the application, text identity information and image identity information used by a target user in a first application are acquired from an image associated with the first application; the text identity information is matched against a plurality of sample text identity information associated with a second application to obtain a text matching result, and the image identity information is matched against a plurality of sample image identity information associated with the second application to obtain an image matching result; a target sample identity is determined from a sample identity set corresponding to the second application based on the text matching result and the image matching result; and the target sample identity is taken as the identity used by the target user in the second application. With this scheme, the identity a user uses in the second application is linked to the identity the user uses in the first application through combined image-and-text retrieval based on the identity information used in the first application, thereby realizing cross-application-platform identity linkage and improving identity recognition speed.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a scene schematic diagram of an information interaction system according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a user identity recognition method according to an embodiment of the present application.
Fig. 3 is another schematic flow chart of a user identification method according to an embodiment of the present application.
Fig. 4 is a schematic view of an application scenario of the user identity identification method according to the embodiment of the present application.
Fig. 5 is a schematic view of another application scenario of the user identity identification method according to the embodiment of the present application.
Fig. 6 is a schematic view of still another application scenario of the user identity identification method according to the embodiment of the present application.
Fig. 7 is a schematic structural diagram of a user identification apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a user identification apparatus according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a user identification device according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an information interaction system, which comprises any one of the user identity recognition devices provided by the embodiments of the application, wherein the user identity recognition device can be integrated in a terminal and other equipment, and the terminal can be a mobile phone, a tablet computer and the like. In addition, the system may also include other devices, for example, a server, a router, and the like.
Referring to fig. 1, an embodiment of the present application provides an information interaction system, including: terminal 11, terminal 12, server 13, and network 14. The terminal 11 may be connected to the server 13 through the network 14, and the terminal 12 may be connected to the server 13 through the network 14. The network 14 includes network entities such as routers and gateways, which are shown schematically in the figure. The terminals 11 and 12 may interact with the server 13 via a wired or wireless network, for example, to download an application (e.g., an instant messaging application) and/or an application update package and/or data information or service information related to the application from the server 13.
The terminal 11 may be a mobile phone, a tablet computer, a notebook computer, etc.; fig. 1 illustrates the terminal 11 as a notebook computer. The terminal 11 may be installed with various applications required by the user, such as applications with entertainment functions (e.g., instant messaging, video playing, game, and social entertainment applications) and applications with service functions (e.g., map navigation and group-buying applications). Similarly, the terminal 12 may be a mobile phone, a tablet computer, a notebook computer, etc.; fig. 1 illustrates the terminal 12 as a mobile phone. The terminal 12 may likewise be installed with the various applications required by the user.
Based on the system shown in fig. 1, taking social entertainment applications as an example, the terminal 11 may download from the server 13 through the network 14 a first social entertainment application, an update package for it, and/or related data or service information as needed. The first social entertainment application may be used for live video; for example, an anchor registers an account in the first social entertainment application and live-streams a game in a live room. The terminal 12 may download from the server 13, via the network 14, a second social entertainment application, in which a user may register an account and create a game team associated with that account, gathering other users into the team for group play, strategy discussion, resource and welfare pickup, and the like. For example, the second social entertainment application may provide a welfare gift package for an individual game and issue a welfare gift package for each game within a game team created by a user, so that members of the team can earn welfare after completing a group battle task.
In the embodiment of the present application, the home-terminal user registers and logs in to an account of the second social entertainment application on the terminal 12. The terminal 12 may grant the second social entertainment application photographing permission and call the system camera, through a photographing interface built into the second social entertainment application, to capture or scan a picture of anchor A's game livestream in the first social entertainment application displayed on the terminal 11. The captured image is a live-interface view of the first social entertainment application and may include anchor A's avatar, nickname, live room number, number of fans, number of users currently watching, the live game picture, and the like. The terminal 12 may perform image recognition on the captured image to obtain the text identity information and image identity information used by anchor A in the first social entertainment application. Matching may then be performed against the sample text identity information and sample image identity information in the user information repository associated with the second social entertainment application. Finally, a target user identity is determined from the user identity set corresponding to the second application according to the matching results and is taken as anchor A's identity in the second application, thereby realizing cross-application-platform identity linkage.
The above fig. 1 illustrates only one example of a system application for implementing the embodiment of the present application, and the embodiment of the present application is not limited to the system architecture shown in fig. 1, and various embodiments of the present application are proposed based on the system architecture.
In an embodiment, there is provided a user identification method, which may be executed by a processor of a terminal, as shown in fig. 2, the user identification method including:
201. and acquiring text identity information and image identity information used by the target user in the first application from the image associated with the first application.
In this embodiment of the application, the first application may be an application installed in the terminal and supporting the user to register an account and provide a social function, such as a live application, an instant messaging application, and the like. In addition, the first application can also be a webpage version live broadcast application, a webpage version instant communication application and the like which are integrated on a webpage for use.
The image associated with the first application may be an image containing user information in the first application. The image may take various forms. For example, it may be a user-interface image of the first application, obtained by the terminal on which the first application runs through a screen capture, or obtained by another terminal or other photographing equipment shooting the operation interface of the first application with a camera. The image may also be a printed paper image. As another example, it may be an electronic image or a hand-drawn image copied from the user interface of the first application. The present application does not specifically limit this.
In addition, the text identity information may be a nickname, a user name, a user mailbox, and the like of the target user in the account corresponding to the first application. The image identity information may be a user avatar of the target user in an account corresponding to the first application.
In the embodiment of the application, the content contained in the image can be acquired by identifying the image associated with the first application. Generally, two types of contents, text information and image information, can be included in an image. Therefore, the image content area and the character content area can be determined from the image based on the image content type, and then the corresponding areas are identified by adopting different algorithms based on the content type. That is, in some embodiments, the step "obtaining text identity information and image identity information used by the target user in the first application from the image associated with the first application" includes:
determining a content area to be identified from the image;
determining a content type of a content area, wherein the content type comprises a text type and an image type;
when the content type of the content area is a text type, identifying the content area based on a preset optical character recognition algorithm to obtain a text identification result, and taking the text identification result as text identity information;
and when the content type of the content area is an image class, extracting the image content in the content area and taking the image content as image identity information.
Specifically, the image may be subjected to page parsing, and a content area to be identified may be selected from the image. Then, the content type of the selected content area is judged, and then a corresponding recognition algorithm is adopted to respectively perform text recognition on the content area of the text type and perform image recognition on the content area of the image type. One or more content areas may be identified.
For example, when recognizing a content area of the text type, Optical Character Recognition (OCR) may be used. Specifically, the optical character recognition control first converts the text content area into a gray-scale image and performs preprocessing operations such as binarization and noise removal. Layout analysis is then performed on the gray-scale image to obtain the different paragraphs, the content in each paragraph is cut into characters, and character recognition is performed on the cut characters. The character information extracted from the text content area is then converted into text format; the resulting text is the text recognition result, i.e., the text identity information used by the user in the first application.
For example, when a content area of an image class is processed, image content in the content area may be directly intercepted as an image recognition result, so as to obtain image identity information used by a user in a first application.
In some embodiments, the step "determining the content area to be identified from the image" may comprise the following flow:
acquiring content layout information, and dividing the image into a plurality of content areas according to the content layout information;
a target content area is determined from the plurality of content areas as a content area to be identified.
In a specific implementation process, the image may be divided into a plurality of content areas according to the arrangement information of the text in the image and the distribution information of the image. For example, text in different rows or columns may be divided into different content regions, and unconnected images may be divided into different content regions.
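One way to realize the division described above is a horizontal-projection scan: consecutive non-blank rows form one content region, and blank rows separate regions. The patent does not prescribe a segmentation algorithm, so this sketch and its names are assumptions for illustration:

```python
def split_into_row_regions(binary):
    """Group consecutive non-blank rows of a binarized image (2D list of
    0/1 values) into content regions, returned as (start_row, end_row)."""
    regions, start = [], None
    for i, row in enumerate(binary):
        if any(row):                        # row contains foreground pixels
            if start is None:
                start = i                   # a new region begins
        elif start is not None:
            regions.append((start, i - 1))  # blank row closes the region
            start = None
    if start is not None:
        regions.append((start, len(binary) - 1))
    return regions

# Two text lines separated by a blank row.
page = [
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
]
print(split_into_row_regions(page))  # [(0, 0), (2, 3)]
```

The same scan applied column-wise within each row region would split text in different columns, and a connected-component pass would separate unconnected images, matching the division rules described above.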
In some examples, the step of "determining a target content area from a plurality of content areas" may include the following flow:
generating selection controls on a plurality of content areas, wherein the content areas correspond to the selection controls one to one;
triggering a selection instruction aiming at the content area through the selection control;
a target content area is determined from the plurality of content areas in response to the selection instruction.
Specifically, the content area to be identified may be selected from the plurality of content areas by a user's selection instruction. When the user clicks any selection control, the terminal may, in response to the click, call a corresponding application programming interface (API) to execute a content-area selection instruction, so as to select the content area at the position of that selection control.
It should be noted that, in the embodiment of the present application, the operation of identifying the text content area and the operation of identifying the image content area may be performed simultaneously, so as to improve the identification speed and the identification efficiency. For example, two threads may be invoked at the same time to perform an operation of identifying a text-type content area and an operation of identifying an image-type content area, respectively.
In some embodiments, the step of "obtaining text identity information and image identity information used by the target user in the first application" may include the following processes:
acquiring content layout information and content style information of an image;
selecting a preset image matched with the content layout and/or the content style from a preset image library according to the content layout information and the content style information;
determining position information of a preset text identity area and a preset image identity area in a preset image;
mapping a preset text identity area and a preset image identity area into the image based on the position information so as to determine the text identity area and the image identity area to be recognized in the image;
and performing text recognition on the text identity area, taking a recognition result as text identity information, extracting image content from the image identity area, and taking the image content as image identity information.
Specifically, the preset image library may include a plurality of preset images related to the first application that contain user information, and the positions of the text identity area and the image identity area are marked in each preset image in advance. In a specific implementation, the content layout information and content style information of the current image are matched against the preset images in the preset image library, and the preset image with the highest matching degree is selected. The text identity area and image identity area to be recognized in the current image are then determined from the pre-marked positions of the text identity area and the image identity area in that preset image. Finally, the determined text identity area and image identity area are recognized.
The content layout information may be information such as text content in the image, and layout and position of the image content; the content style information may be information such as the font size, font style, etc. of the text content in the image, and information such as the outline, size, etc. of the image content.
202. And matching the text identity information with the plurality of sample text identity information associated with the second application to obtain a text matching result, and matching the image identity information with the plurality of sample image identity information associated with the second application to obtain an image matching result.
The second application may be an application installed in the terminal that supports account registration and provides a social function, such as a live application or a game community application. The second application may also be a web-version live application, web-version game community application, or the like, integrated for use on a web page.
In the embodiment of the application, the user identity information in the second application (such as user avatars, nicknames, user names, mailboxes, and other information) needs to be collected and stored in advance. Specifically, the plurality of sample text identity information associated with the second application is the set of text identity information of all users registered in the second application, and the plurality of sample image identity information associated with the second application is the set of image identity information of all accounts registered in the second application. That is, the plurality of sample text identity information and the plurality of sample image identity information constitute a user information base of accounts registered in the second application. Sample text identity information may be a combination of one or more characters, and sample image identity information may be an image.
In the embodiment of the present application, there may be various ways to perform text identity information matching and image identity information matching.
In some embodiments, the step of matching the text identity information with a plurality of sample text identity information associated with the second application to obtain the text matching result may include the following processes:
matching characters included in the text identity information with sample characters included in each sample text identity information;
determining character sorting difference, and calculating sorting similarity according to the character sorting difference;
and generating a text matching result according to the sequencing similarity.
For example, if the character sequence included in the text identity information is "OPQRST" and the character sequence included in the sample text identity information is also "OPQRST", all six characters occupy the same positions, so the ordering similarity is calculated as 100%. As another example, if the text identity information contains "OPQRST" and the sample text identity information contains "OPQTSR", the first three characters and the fifth character occupy the same positions, giving four characters in matching positions and two characters in different positions, so the ordering similarity is calculated as 4/6 ≈ 66.7%.
When the text matching result is generated according to the ordering similarity, the ordering similarity may be used directly as the text matching result; the sample text identity information whose ordering similarity exceeds a preset threshold may be used as the matching result; the sample text identity information with the highest ordering similarity may be used as the matching result; or a judgment of whether any ordering similarity exceeds the preset threshold may be used as the matching result. The presentation form of the text matching result can be designed according to the actual implementation logic.
In addition, in some embodiments, the similarity between the text identity information and the sample text identity information may also be calculated by the number of matched characters and the number of unmatched characters.
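The worked "OPQRST" example above can be sketched as a position-by-position comparison. This is a minimal illustration; the function name and the handling of unequal-length strings are assumptions not taken from the patent:

```python
def ordering_similarity(text, sample):
    """Compare two character sequences position by position, as in the
    OPQRST example: characters in the same position count as matching."""
    n = max(len(text), len(sample))
    if n == 0:
        return 0.0
    same = sum(1 for a, b in zip(text, sample) if a == b)
    return same / n  # extra characters in the longer string count as mismatches

print(ordering_similarity("OPQRST", "OPQRST"))                   # 1.0
print(round(ordering_similarity("OPQRST", "OPQTSR") * 100, 1))   # 66.7
```

A sequence-alignment measure (e.g. edit distance) would additionally credit matched characters that appear in different positions, which corresponds to the matched/unmatched-character variant mentioned above.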
In some embodiments, the step of "matching the image identity information with a plurality of sample image identity information associated with the second application to obtain an image matching result" may include the following processes:
extracting image characteristics of the images in the image identity information and extracting sample image characteristics of the sample images in each sample image identity information;
calculating the feature matching degree of the image features and the sample image features based on a preset image matching algorithm;
and generating an image matching result according to the feature matching degree.
Specifically, the image features of the image in the image identity information and the sample image features of the sample image in the sample image identity information may be extracted through a related image processing algorithm. Then, the extracted image features are subjected to feature matching with the sample image features of each sample image, and a feature matching degree is calculated.
Similarly, when the image matching result is generated according to the feature matching degree, the feature matching degree can be directly used as the image matching result; the sample image identity information whose feature matching degree exceeds a preset threshold can be used as the matching result; the sample image identity information with the largest feature matching degree can be used as the matching result; or a judgment of whether the feature matching degree is larger than the preset threshold can be used as the matching result. The presentation form of the image matching result can be specifically designed according to the actual implementation logic.
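As one hedged illustration of a "preset image matching algorithm", the feature matching degree could be computed as the cosine similarity between two extracted feature vectors. The vector values and account names below are hypothetical:

```python
import math

def feature_match_degree(features, sample_features):
    """Cosine similarity between two equal-length feature vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(features, sample_features))
    norm_a = math.sqrt(sum(a * a for a in features))
    norm_b = math.sqrt(sum(b * b for b in sample_features))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

avatar = [0.9, 0.1, 0.4]  # hypothetical feature vector of the imported avatar
samples = {"acct_1": [0.9, 0.1, 0.4], "acct_2": [0.1, 0.8, 0.2]}
# compute the matching degree against each sample avatar, then pick the best
degrees = {acct: feature_match_degree(avatar, vec) for acct, vec in samples.items()}
best = max(degrees, key=degrees.get)  # "acct_1"
```

Any of the result forms listed above can then be derived from `degrees`: the raw values, the samples above a threshold, or the single best match.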
203. And determining the target sample identity from the sample identity set corresponding to the second application based on the text matching result and the image matching result.
The sample identity set corresponding to the second application may include a plurality of sample identities, covering all accounts registered in the second application. The sample identity may specifically be identification information used to represent an account registered by the user in the second application; for example, the account identification may be an account number, a user name, a mailbox, or the like, or may be the identity card number or mobile phone number of the user.
In some embodiments, the text matching results include: whether matched sample text identity information exists; the image matching result comprises: whether there is matching sample image identity information. Then, the step "determining the target sample identity from the sample identity set corresponding to the second application based on the text matching result and the image matching result" may include the following processes:
when the matched sample text identity information and the matched sample image identity information exist at the same time, determining the identity of the target sample from the sample identity set corresponding to the second application based on the matched sample text identity information and the matched sample image identity information;
when the matched sample text identity information exists and the matched sample image identity information does not exist, determining the identity of the target sample from the sample identity set corresponding to the second application based on the matched sample text identity information;
and when the matched sample image identity information exists and the matched sample text identity information does not exist, determining the identity of the target sample from the sample identity set corresponding to the second application based on the matched sample image identity information.
Specifically, when only one of the text matching result and the image matching result satisfies its condition, the target sample identity is determined from the sample identity set corresponding to the second application based on the satisfied condition. When both the text matching result and the image matching result satisfy their conditions, the identity of the target sample is determined from the sample identity set corresponding to the second application based on the two satisfied conditions.
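The three branches above can be sketched as follows; representing each matching result as a set of candidate identities is an assumption made for illustration:

```python
def determine_target(text_hits, image_hits):
    """Pick candidate identities according to which matching results exist (both, text only, image only)."""
    if text_hits and image_hits:
        # both matched: intersect the identities implied by each result
        return set(text_hits) & set(image_hits)
    if text_hits:
        return set(text_hits)   # only the text match succeeded
    if image_hits:
        return set(image_hits)  # only the image match succeeded
    return set()                # neither matched

print(determine_target({"acct_1", "acct_2"}, {"acct_2"}))  # {'acct_2'}
print(determine_target({"acct_3"}, set()))                 # {'acct_3'}
```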
The sample text identity information may be information such as a nickname, a user name, and a mailbox of the user, and the sample image identity information may be a head portrait of the user.
In some embodiments, the step "determining the target sample identity from the sample identity set corresponding to the second application based on the matched sample text identity information and the matched sample image identity information" may include the following processes:
selecting a sample identity associated with the matched sample text identity information from a sample identity set corresponding to the second application to obtain a first sample identity subset;
selecting a sample identity associated with the matched sample image identity information from the sample user identity set to obtain a second sample identity subset;
and screening the same sample identity from the first sample identity subset and the second sample identity subset as the target sample identity.
In the embodiment of the application, a mapping relationship between the sample text identity information and the sample identity and a mapping relationship between the sample image identity information and the sample identity need to be established in advance to provide a selection basis for determining the target sample identity.
In some embodiments, the sample text identity information may be user information corresponding to the sample identity. For example, if the sample identity is a user account, the sample text identity information may be the user nickname currently used by the user. Therefore, all sample identities (i.e., user accounts) whose user nicknames are the matched sample text identity information are obtained from the sample identity set, so as to obtain the first sample identity subset corresponding to the text identity information.
In some embodiments, the image in the sample image identity information may be user information corresponding to the sample identity. For example, if the sample identity is a user account, the sample image identity information may be the user avatar currently used by the user. Therefore, all sample identities (i.e., user accounts) whose user avatars match the image are obtained from the sample identity set, so as to obtain the second sample identity subset corresponding to the image identity information.
And finally, screening the same sample identity from the first sample identity subset and the second sample identity subset as the target sample identity.
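A minimal sketch of the two subset selections and the final screening, assuming the pre-established mappings are stored as dictionaries from identity information to sets of account identifiers; all names below are hypothetical:

```python
# pre-built mappings from identity information to sample identities (account IDs)
nickname_to_accounts = {"little Y anchor": {"acct_7", "acct_9"}}
avatar_to_accounts = {"avatar_hash_x": {"acct_9"}}

first_subset = nickname_to_accounts.get("little Y anchor", set())   # from the text match
second_subset = avatar_to_accounts.get("avatar_hash_x", set())      # from the image match
target = first_subset & second_subset  # identities present in both subsets
print(target)  # {'acct_9'}
```

Only the account appearing in both subsets survives the screening, which is what makes the composite image-and-text retrieval more discriminative than either signal alone.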
204. The target sample identity is taken as the identity used by the target user in the second application.
In this embodiment of the application, the target sample identity obtained from the sample identity set corresponding to the second application based on the text identity information and the image identity information used by the target user in the first application may be directly used as the identity used by the target user in the second application, thereby realizing identity linking across application platforms.
In some embodiments, after linking to the identity used by the target user in the second application, the user information interface of that identity may pop up over the image recognition interface currently displayed by the second application, and may be highlighted (e.g., its display brightness is increased). The user information interface may be presented according to the actual design of the user information interface of the second application; for example, it may display tags of the user avatar, user nickname, account level, and other related information. In practical application, each piece of information or each tag is correspondingly provided with a link control; triggering the link control jumps to a related interface of the account used by the target user in the second application and displays the related information to the user of the current terminal.
As can be seen from the above, in the embodiment of the application, the text identity information and the image identity information used by the target user in the first application are obtained from the image associated with the first application; the text identity information is matched with a plurality of sample text identity information associated with the second application to obtain a text matching result, and the image identity information is matched with a plurality of sample image identity information associated with the second application to obtain an image matching result; the identity of the target sample is determined from the sample identity set corresponding to the second application based on the text matching result and the image matching result; and the target sample identity is taken as the identity used by the target user in the second application. According to the scheme, the identity used by the user in the second application is linked, by means of composite image-and-text retrieval, according to the identity information used by the user in the first application, so that identity linking across application platforms is realized and the identity recognition speed is improved.
In an embodiment, there is provided a user identification method, which may be executed by a processor of a terminal, and referring to fig. 3 to 6, the user identification method includes:
301. and the home terminal user logs in the registered account in the second application through the first terminal.
The second application may be an application that supports user registration of an account and provides social functionality, such as a live application, a game community application, and the like. In addition, the second application can also be a web-based live application, a web-based game community application, and the like. In this embodiment, the second application will be described by taking a game community application as an example.
302. The first terminal starts a camera to obtain an interface image of the first application displayed on the second terminal, wherein the interface image comprises identity information used by a target user in the first application.
Specifically, a photographing control can be set in the second application, and the system camera is triggered and called to acquire the interface image of the first application displayed on the second terminal by clicking the photographing control.
The first application may be an application that supports user registration of an account and provides a social function, such as a live application, an instant messaging application, and the like. In addition, the first application can also be a web-based live application, a web-based instant messaging application, and the like. In this embodiment, the first application will be described by taking a live application as an example.
For example, if the first application is a live application, the interface image may be a live interface image of a main broadcast (i.e., a target user) live in a live room. Referring to fig. 4, the interface image may include: the name of the anchor (namely 'little Y anchor'), the label (namely 'hot game', 'xx game', 'single-row upper score' and the like), the grade (namely 'grade: 13'), the head portrait, the game picture which is currently live and the like.
303. The first terminal determines a text content area and an image content area from the interface image based on the content type.
Specifically, the image is subjected to page parsing, and a text content region and an image content region are extracted from the image based on the text content type and the image content type.
304. And the first terminal performs text recognition on the text content area to obtain a text recognition result.
Specifically, referring to fig. 5, in text recognition, the text content area may be divided into a plurality of text content sub-areas (i.e., a1, a2, a3 in fig. 5) according to the arrangement information of the text in the image. For example, text in different rows or columns may be divided into different text content sub-regions. Among the multiple text content sub-regions, in order to avoid the resource waste caused by recognizing useless text, a target text content sub-region that needs to be recognized can be selected for recognition, and the recognition result is used as the text recognition result.
For example, a selection control may be set for each text content sub-region obtained by division, and the text content sub-regions correspond to the selection controls one to one. The user can trigger a selection instruction aiming at the text content subregion through the selection control, the terminal responds to the selection instruction and adopts the optical character recognition technology to recognize the target text content subregion so as to obtain a text recognition result. When a target text content sub-region is selected, the text content sub-region may be highlighted or outlined.
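One plausible way to divide the text content area into row-wise sub-regions, assuming text bounding boxes of the form (x, y, w, h) have already been obtained from the layout analysis; the gap threshold and box values are assumptions:

```python
def split_into_rows(boxes, gap=10):
    """Group text bounding boxes (x, y, w, h) into rows; boxes whose vertical
    offsets differ by at most `gap` pixels are treated as the same row."""
    rows = []
    for box in sorted(boxes, key=lambda b: b[1]):  # sort top-to-bottom by y
        if rows and box[1] - rows[-1][-1][1] <= gap:
            rows[-1].append(box)   # close enough vertically: same row
        else:
            rows.append([box])     # start a new row (a new text content sub-region)
    return rows

# two boxes on one line, one box on a second line
boxes = [(0, 0, 50, 12), (60, 2, 40, 12), (0, 40, 90, 12)]
print(len(split_into_rows(boxes)))  # 2
```

Each resulting row corresponds to one selectable text content sub-region (a1, a2, a3 in fig. 5), to which a selection control could then be attached.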
305. The first terminal carries out image recognition on the image content area to obtain an image recognition result.
With continued reference to fig. 5, likewise, the image content region may be divided into a plurality of image content sub-regions (i.e., b1, b2, b3 in fig. 5) according to the distribution information of the image. For example, unconnected images may be divided into different image content sub-regions. In order to avoid resource waste caused by useless image recognition, the target image content subarea needing to be recognized can be selected from the image content subareas for recognition, and the recognized result is used as the image recognition result.
For example, a selection control may be set for each of the image content sub-regions obtained by division, and the image content sub-regions correspond to the selection controls one to one. The selection control may specifically be a selection control for triggering acquisition of the image content sub-region. The user can trigger a selection instruction aiming at the image content subregion through the selection control, and the terminal responds to the selection instruction to identify the target image content subregion so as to obtain an image identification result. When a target image content sub-region is selected, the image content sub-region may be highlighted or outlined.
It should be noted that, in the embodiment of the present application, the operation of text recognition and the operation of image recognition may be performed simultaneously, so as to improve the recognition speed and the recognition efficiency. For example, two threads may be invoked simultaneously to perform an operation of identifying a text content area and an operation of identifying an image content area, respectively.
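The two recognition operations can be run concurrently, for example with a thread pool; the recognition functions below are stand-ins that return fixed placeholder values rather than real OCR or feature extraction:

```python
from concurrent.futures import ThreadPoolExecutor

def recognize_text(region):
    # stand-in for the OCR call on the selected text sub-region
    return "little Y anchor"

def recognize_image(region):
    # stand-in for feature extraction on the selected image sub-region
    return [0.9, 0.1, 0.4]

# run both recognitions in parallel on two worker threads
with ThreadPoolExecutor(max_workers=2) as pool:
    text_future = pool.submit(recognize_text, "a1")
    image_future = pool.submit(recognize_image, "b1")
    text_result, image_result = text_future.result(), image_future.result()
```

In real OCR or image-processing workloads the work is typically done in native code that releases the interpreter lock, so threads can genuinely overlap; otherwise a process pool would serve the same purpose.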
306. The first terminal takes the text recognition result as text identity information used by the target user in the first application and takes the image recognition result as image identity information used in the first application.
In some embodiments, the text recognition result may be user nickname information and the image recognition result may be user avatar information. That is, the nickname information of the user is used as the text identity information, and the head portrait of the user is used as the image identity information.
307. Judging whether sample text identity information matched with the text identity information exists in a plurality of sample text identity information associated with the second application; if yes, go to step 309, otherwise go to step 304.
In the embodiment of the application, the user identity information (such as the user avatar, user nickname, user name, mailbox, and other information) in the second application needs to be collected and stored in advance. Specifically, the second application is associated with a plurality of sample text identity information, that is, the set of text identity information of all users registered in the second application. The sample text identity information may be a combination of one or more characters.
308. Judging whether sample image identity information matched with the image identity information exists in a plurality of sample image identity information associated with the second application; if yes, go to step 309, otherwise go to step 305.
Similarly, the plurality of sample image identity information associated with the second application is the set of image identity information of all accounts registered in the second application. The sample image identity information may be an image.
309. And the first terminal acquires the text matching result and the image matching result, determines the target sample identity from the sample identity set corresponding to the second application, and takes the target sample identity as an account used by the target user in the second application.
Specifically, the text matching result may be matched sample text identity information; the image matching result may be matching sample image identity information. In specific implementation, the sample identity associated with the matched sample text identity information can be selected from the sample identity set corresponding to the second application to obtain a first sample identity subset; selecting a sample identity associated with the matched sample image identity information from the sample user identity set to obtain a second sample identity subset; and screening the same sample identity from the first sample identity subset and the second sample identity subset to serve as the account used by the target user in the second application.
310. And the first terminal displays a user information interface corresponding to the target identity on an application interface of the current second application.
Specifically, after linking to the account used by the target user in the second application, the user information interface of that account may pop up over the image recognition interface currently displayed by the second application, and may be highlighted (for example, referring to fig. 6, the display brightness of the user information interface is higher than that of the imported image). The user information interface may be presented according to the actual design of the user information interface of the second application; for example, referring to fig. 6, the user information interface may display tags of the user avatar, user nickname, account level, and other related information. In practical application, each piece of information or each tag is correspondingly provided with a link control; triggering the link control jumps to a related interface of the account used by the target user in the second application and displays the related information to the user of the current terminal (namely, the home terminal user).
Therefore, according to the scheme provided by the embodiment of the application, the identity used by the user in the second application is linked, by means of composite image-and-text retrieval, according to the identity information used by the user in the first application, so that identity linking across application platforms is realized and the identity recognition speed is improved.
With continuing reference to fig. 4 to fig. 6, the cross-application identity linking scheme in the embodiment of the present application will be described below by taking a live APP1 and a game community APP2 as examples.
For example, a live APP1 and a game community APP2 are installed in a cell phone at the same time, and both of the applications are currently in an online state (i.e., a user has logged in an account in the live APP1 and an account in the game community APP 2). If the current user is watching a "small Y anchor" game live broadcast in a live broadcast room using the live broadcast application APP1, the watched picture can refer to the image shown in fig. 4, which includes anchor information and the current game picture.
At this time, suppose the user is interested in the game that "little Y anchor" is live-streaming, wants to know whether "little Y anchor" has created a game team in the game community application APP2, and wants to join that game team. The user can use the screen capture function of the mobile phone to capture the current live interface image and store it in the local album. Then, the game community APP2 is opened; the game community APP2 is provided with an image import interface, through which the live interface image stored in the local album is imported into the game community APP2. Next, the user is guided to click the anchor avatar area and the anchor nickname area in the live interface image, and the mobile phone automatically frames the area where the anchor avatar is located and the area where the anchor nickname is located according to the user's click operations. The mobile phone background then extracts image information from the framed avatar area to obtain the image content, and performs text recognition on the framed nickname area to obtain the text content. Then, the mobile phone background matches the obtained image content with the user avatars in the user information base of the game community application APP2, and matches the obtained text content with the user nicknames in that user information base. When the same user avatar (i.e., the avatar shown in fig. 4) and the same user nickname (i.e., "little Y anchor") are matched, a target account whose avatar is as shown in fig. 4 and whose nickname is "little Y anchor" is acquired from among the accounts registered in the game community application APP2, and the target account is used as the account registered and used by "little Y anchor" in the game community application APP2.
Next, based on the acquired account, the related interface component may be called to generate and display, on the current interface of the game community application APP2, the user information of the user whose nickname is "little Y anchor". For example, referring to fig. 6, the user information may include basic user information of "little Y anchor" in the game community application APP2 (e.g., avatar, nickname, account level), created game team information (e.g., team number "666666", team member information "Members: 100, online: 66"), recently played game information (e.g., a displayed game icon), and a team join control (i.e., "join fleet" in fig. 6).
Finally, the user can trigger, through the team join control, sending a team join request to the server, where the team join request includes the team number and the account identifier of the home terminal user, so that the server adds the account identifier of the home terminal user to the team according to the request. In this way, according to the identity information of the anchor in the live application APP1, the identity of the anchor in the game community application APP2 can be quickly linked to join the game team created by the anchor.
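A team join request of the kind described above might be serialized as follows. The field names and the account identifier are illustrative assumptions; only the team number "666666" comes from the example in the text:

```python
import json

# hypothetical request payload carrying the team number and the home user's account ID
join_request = json.dumps({"team_number": "666666", "account_id": "home_user_001"})
print(join_request)
```

On receipt, the server would parse the payload and add the given account identifier to the roster of the identified team.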
In order to better implement the user identity recognition method provided by the embodiment of the application, an embodiment further provides a user identity recognition apparatus. The meanings of the terms are the same as those in the foregoing user identity recognition method, and for specific implementation details, reference may be made to the description in the method embodiments.
In an embodiment, there is also provided a user identification apparatus, as shown in fig. 7, the user identification apparatus may include: the acquisition unit 401, the matching unit 402, the determination unit 403, and the processing unit 404 are as follows:
an obtaining unit 401, configured to obtain, from an image associated with a first application, text identity information and image identity information used by a target user in the first application;
a matching unit 402, configured to match the text identity information with multiple sample text identity information associated with a second application to obtain a text matching result, and match the image identity information with multiple sample image identity information associated with the second application to obtain an image matching result;
a determining unit 403, configured to determine, based on the text matching result and the image matching result, a target sample identity from a sample identity set corresponding to the second application;
a processing unit 404, configured to use the target sample identity as an identity used by the target user in a second application.
In some embodiments, referring to fig. 8, the acquisition unit 401 includes a region determination sub-unit 4011, a type determination sub-unit 4012, a text recognition sub-unit 4013, and an image processing sub-unit 4014 as follows:
a region determining sub-unit 4011 configured to determine a content region to be identified from the image;
a type determining subunit 4012, configured to determine a content type of the content area, where the content type includes a text class and an image class;
the text recognition sub-unit 4013 is configured to, when the content type of the content area is a text type, recognize the content area based on a preset optical character recognition algorithm to obtain a text recognition result, and use the text recognition result as the text identity information;
and the image processing sub-unit 4014 is configured to, when the content type of the content area is an image class, extract image content in the content area, and use the image content as the image identity information.
In some embodiments, the region determination sub-unit 4011 is further operable to: acquiring content layout information, and dividing the image into a plurality of content areas according to the content layout information; a target content area is determined from the plurality of content areas as a content area to be identified.
In some embodiments, referring to fig. 9, the obtaining unit 401 may include an obtaining sub-unit 4015, a selecting sub-unit 4016, a position determining sub-unit 4017, a mapping sub-unit 4018, and an information processing sub-unit 4019, as follows:
an acquisition sub-unit 4015 configured to acquire content layout information and content style information of the image;
the selecting sub-unit 4016 is configured to select a preset image with a matched content layout and/or a matched content style from a preset image library according to the content layout information and the content style information;
the position determining subunit 4017 is configured to determine position information of the preset text identity area and the preset image identity area in the preset image;
a mapping sub-unit 4018, configured to map the preset text identity region and the preset image identity region into the image based on the location information, so as to determine a text identity region and an image identity region to be recognized in the image;
the information processing sub-unit 4019 is configured to perform text recognition on the text identity area, use a recognition result as the text identity information, extract image content from the image identity area, and use the image content as the image identity information.
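The mapping performed by the mapping sub-unit 4018 could, for instance, scale region coordinates from the preset image into the captured image. This is a sketch under the assumption that regions are axis-aligned rectangles (x, y, w, h) and that the two images differ only in scale:

```python
def map_region(preset_region, preset_size, image_size):
    """Scale an (x, y, w, h) region from the preset image's coordinate
    system into the captured image's coordinate system."""
    x, y, w, h = preset_region
    pw, ph = preset_size
    iw, ih = image_size
    sx, sy = iw / pw, ih / ph  # per-axis scale factors
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# preset nickname region in a 100x200 template, captured image is 200x400
print(map_region((10, 20, 30, 40), (100, 200), (200, 400)))  # (20, 40, 60, 80)
```

The mapped rectangles then delimit the text identity region and the image identity region to be recognized in the captured image.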
In some embodiments, when matching the textual identity information with a plurality of sample textual identity information associated with a second application, the matching unit 402 may be further configured to:
matching characters included in the text identity information with sample characters included in each sample text identity information;
determining the character ordering difference, and calculating an ordering similarity according to the character ordering difference;
and generating the text matching result according to the ordering similarity.
In some embodiments, in matching the image identity information with a plurality of sample image identity information associated with a second application, the matching unit 402 may be further configured to:
extracting image features of images in the image identity information and extracting sample image features of sample images in each sample image identity information;
calculating the feature matching degree of the image features and the sample image features based on a preset image matching algorithm;
and generating the image matching result according to the feature matching degree.
In some embodiments, the text matching results include: whether matched sample text identity information exists; the image matching result includes: whether matched sample image identity information exists or not; the determination unit 403 may further be configured to:
when the matched sample text identity information and the matched sample image identity information exist at the same time, determining the identity of the target sample from the sample identity set corresponding to the second application based on the matched sample text identity information and the matched sample image identity information;
when the matched sample text identity information exists and the matched sample image identity information does not exist, determining the identity of the target sample from the sample identity set corresponding to the second application based on the matched sample text identity information;
and when the matched sample image identity information exists and the matched sample text identity information does not exist, determining the identity of the target sample from the sample identity set corresponding to the second application based on the matched sample image identity information.
As can be seen from the above, in the user identity recognition apparatus in the embodiment of the present application, the obtaining unit 401 obtains the text identity information and the image identity information used by the target user in the first application from the image associated with the first application; the matching unit 402 matches the text identity information with a plurality of sample text identity information associated with the second application to obtain a text matching result, and matches the image identity information with a plurality of sample image identity information associated with the second application to obtain an image matching result; the determining unit 403 determines the target sample identity from the sample identity set corresponding to the second application based on the text matching result and the image matching result; and the processing unit 404 takes the target sample identity as the identity used by the target user in the second application. According to the scheme, the identity used by the user in the second application is linked, by means of composite image-and-text retrieval, according to the identity information used by the user in the first application, so that identity linking across application platforms is realized and the identity recognition speed is improved.
Referring to fig. 10, the present embodiment provides a terminal 500, which may include one or more processors 501 of a processing core, one or more memories 502 of a computer-readable storage medium, a Radio Frequency (RF) circuit 503, a power supply 504, an input unit 505, and a display unit 506. Those skilled in the art will appreciate that the terminal structure shown in fig. 10 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 501 is the control center of the terminal; it connects the various parts of the entire terminal using various interfaces and lines, and performs the terminal's functions and processes data by running or executing software programs and/or modules stored in the memory 502 and calling data stored in the memory 502, thereby monitoring the terminal as a whole. Optionally, the processor 501 may include one or more processing cores; preferably, the processor 501 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 501.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and data processing by operating the software programs and modules stored in the memory 502.
The RF circuit 503 may be used for receiving and transmitting signals during the process of transmitting and receiving information.
The terminal also includes a power supply 504 (e.g., a battery) for powering the various components. Preferably, the power supply is logically coupled to the processor 501 through a power management system, so that charging, discharging, and power-consumption management are handled by the power management system.
The terminal may further include an input unit 505, and the input unit 505 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The terminal may further include a display unit 506, which may be used to display information input by the user or information provided to the user, as well as the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. Specifically, in this embodiment, the processor 501 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 502 according to the following instructions, and the processor 501 runs the application programs stored in the memory 502 so as to implement the following functions:
acquiring text identity information and image identity information used by a target user in a first application from an image associated with the first application;
matching the text identity information with a plurality of sample text identity information associated with a second application to obtain a text matching result, and matching the image identity information with a plurality of sample image identity information associated with the second application to obtain an image matching result;
determining a target sample identity from a sample identity set corresponding to a second application based on the text matching result and the image matching result;
and taking the target sample identity as the identity used by the target user in the second application.
Therefore, the terminal provided by the embodiment of the present application can link the identity information the user uses in the first application to the identity the user uses in the second application through combined image-and-text retrieval, thereby realizing cross-application identity linking and improving the speed of identity recognition.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be completed by instructions, or by related hardware controlled by instructions, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the present invention provides a storage medium in which a plurality of instructions are stored, the instructions being loadable by a processor to execute the steps in any one of the user identity recognition methods provided by the embodiments of the present invention. For example, the instructions may perform the following steps:
acquiring text identity information and image identity information used by a target user in a first application from an image associated with the first application;
matching the text identity information with a plurality of sample text identity information associated with a second application to obtain a text matching result, and matching the image identity information with a plurality of sample image identity information associated with the second application to obtain an image matching result;
determining a target sample identity from a sample identity set corresponding to a second application based on the text matching result and the image matching result;
and taking the target sample identity as the identity used by the target user in the second application.
The specific implementation of the above operations can be found in the foregoing embodiments, and details are not repeated here.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any user identity recognition method provided in the embodiments of the present invention, they can achieve the beneficial effects achievable by any such method, which are detailed in the foregoing embodiments and are not described again here.
The user identity recognition method, apparatus, and storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core ideas of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the scope of application according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (12)

1. A user identity recognition method is characterized by comprising the following steps:
generating selection controls on a plurality of content areas of the image associated with the first application, wherein the content areas correspond to the selection controls one to one;
triggering a selection instruction aiming at the content area through the selection control;
determining a content area to be identified from a plurality of content areas in response to the selection instruction, and highlighting the content area to be identified;
acquiring text identity information and image identity information used by a target user in a first application from the content area to be identified;
matching the text identity information with a plurality of sample text identity information associated with a second application to obtain a text matching result, and matching the image identity information with a plurality of sample image identity information associated with the second application to obtain an image matching result;
determining a target sample identity from a sample identity set corresponding to a second application based on the text matching result and the image matching result;
and taking the target sample identity as the identity used by the target user in the second application.
2. The method for identifying the user identity according to claim 1, wherein the obtaining of the text identity information and the image identity information used by the target user in the first application from the image associated with the first application comprises:
determining a content type of the content area, wherein the content type comprises a text class and an image class;
when the content type of the content area is a text type, identifying the content area based on a preset optical character recognition algorithm to obtain a text identification result, and taking the text identification result as the text identity information;
and when the content type of the content area is an image type, extracting the image content in the content area, and taking the image content as the image identity information.
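The branching on content type in claim 2 can be sketched as follows. The patent does not name its "preset optical character recognition algorithm", so the OCR step is modeled as an injected callable, and all names below are illustrative:

```python
# Illustrative sketch of claim 2: a content area is handled according to
# its content type (text class vs. image class).

def extract_identity_info(region, ocr=None):
    """region: dict with 'type' ('text' or 'image') and 'content'.
    ocr: callable standing in for the unspecified OCR algorithm."""
    if region["type"] == "text":
        recognize = ocr or (lambda content: content)  # pass-through stub
        return {"text_identity": recognize(region["content"])}
    # image-class region: the image content itself is the identity info
    return {"image_identity": region["content"]}
```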
3. The method according to claim 1, wherein the determining the content area to be identified from the image comprises:
acquiring content layout information, and dividing the image into a plurality of content areas according to the content layout information;
a target content area is determined from the plurality of content areas as a content area to be identified.
4. The method for identifying the user identity according to claim 1, wherein the acquiring the text identity information and the image identity information used by the target user in the first application comprises:
acquiring content layout information and content style information of the image;
selecting a preset image matched with the content layout and/or the content style from a preset image library according to the content layout information and the content style information;
determining position information of a preset text identity area and a preset image identity area in the preset image;
mapping the preset text identity region and the preset image identity region into the image based on the position information so as to determine a text identity region and an image identity region to be recognized in the image;
and performing text recognition on the text identity area, taking a recognition result as the text identity information, extracting image content from the image identity area, and taking the image content as the image identity information.
5. The method of claim 1, wherein the matching the textual identity information with a plurality of sample textual identity information associated with a second application to obtain a textual matching result comprises:
matching characters included in the text identity information with sample characters included in each sample text identity information;
determining a character ordering difference, and calculating an ordering similarity according to the character ordering difference;
and generating the text matching result according to the ordering similarity.
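Claim 5 leaves the ordering-similarity computation open. One concrete stand-in, not the patent's own algorithm, is `difflib.SequenceMatcher` from the Python standard library, whose ratio rewards characters that appear in the same relative order in both strings:

```python
from difflib import SequenceMatcher

def text_match_score(text_id, sample_text):
    # ratio() is in [0, 1]: 1.0 for identical strings, and lower as the
    # shared characters and their ordering diverge.
    return SequenceMatcher(None, text_id, sample_text).ratio()
```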
6. The method of claim 1, wherein the matching the image identity information with a plurality of sample image identity information associated with a second application to obtain an image matching result comprises:
extracting image features of images in the image identity information and extracting sample image features of sample images in each sample image identity information;
calculating the feature matching degree of the image features and the sample image features based on a preset image matching algorithm;
and generating the image matching result according to the feature matching degree.
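The "preset image matching algorithm" of claim 6 is likewise unspecified; cosine similarity between feature vectors is one common choice and serves here purely as an illustration:

```python
import math

def image_match_degree(features, sample_features):
    """Cosine similarity of two equal-length feature vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(features, sample_features))
    norm = (math.sqrt(sum(a * a for a in features))
            * math.sqrt(sum(b * b for b in sample_features)))
    return dot / norm if norm else 0.0
```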
7. The method according to claim 1, wherein the text matching result comprises: whether matched sample text identity information exists; the image matching result comprises: whether matched sample image identity information exists or not;
the determining a target sample identity from a sample identity set corresponding to a second application based on the text matching result and the image matching result includes:
when the matched sample text identity information and the matched sample image identity information exist at the same time, determining the identity of the target sample from the sample identity set corresponding to the second application based on the matched sample text identity information and the matched sample image identity information;
when the matched sample text identity information exists and the matched sample image identity information does not exist, determining the identity of the target sample from the sample identity set corresponding to the second application based on the matched sample text identity information;
and when the matched sample image identity information exists and the matched sample text identity information does not exist, determining the identity of the target sample from the sample identity set corresponding to the second application based on the matched sample image identity information.
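The three-way branching of claim 7 can be sketched as follows; the index structure and all names are hypothetical, introduced only to make the decision logic concrete:

```python
# Illustrative sketch of claim 7's branching: narrow the sample identity
# set depending on which kinds of matched sample identity information exist.

def pick_identity(text_hit, image_hit, index):
    """text_hit / image_hit: keys of matched sample text/image identity
    information, or None when no match exists. index maps each key to the
    set of sample identities associated with it."""
    if text_hit is not None and image_hit is not None:
        # both modalities matched: intersect the two identity subsets
        return index["text"][text_hit] & index["image"][image_hit]
    if text_hit is not None:
        return index["text"][text_hit]    # text-only match
    if image_hit is not None:
        return index["image"][image_hit]  # image-only match
    return set()                          # no match at all
```

The intersection in the first branch corresponds to the subset screening of claim 8.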
8. The method of claim 7, wherein the determining the target sample identity from the sample identity set corresponding to the second application based on the matched sample text identity information and the matched sample image identity information comprises:
selecting a sample identity associated with the matched sample text identity information from a sample identity set corresponding to the second application to obtain a first sample identity subset;
selecting a sample identity associated with the matched sample image identity information from the sample identity set to obtain a second sample identity subset;
and screening the same sample identity from the first sample identity subset and the second sample identity subset as the target sample identity.
9. A user identification apparatus, comprising:
the device comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is used for generating selection controls on a plurality of content areas of an image associated with a first application, and the content areas correspond to the selection controls one to one; triggering a selection instruction aiming at the content area through the selection control; determining a content area to be identified from a plurality of content areas in response to the selection instruction, and highlighting the content area to be identified; acquiring text identity information and image identity information used by a target user in a first application from the content area to be identified;
the matching unit is used for matching the text identity information with a plurality of sample text identity information associated with the second application to obtain a text matching result, and matching the image identity information with a plurality of sample image identity information associated with the second application to obtain an image matching result;
the determining unit is used for determining the identity of a target sample from a sample identity set corresponding to a second application based on the text matching result and the image matching result;
a processing unit for taking the target sample identity as an identity used by the target user in a second application.
10. The apparatus of claim 9, wherein the obtaining unit comprises:
a region determining subunit, configured to determine a content region to be identified from the image;
the type determining subunit is used for determining the content type of the content area, wherein the content type comprises a text type and an image type;
the text recognition subunit is used for recognizing the content area based on a preset optical character recognition algorithm to obtain a text recognition result when the content type of the content area is a text type, and taking the text recognition result as the text identity information;
and the image processing subunit is used for extracting the image content in the content area and using the image content as the image identity information when the content type of the content area is an image type.
11. The apparatus of claim 9, wherein the obtaining unit comprises:
an acquisition subunit configured to acquire content layout information and content style information of the image;
the selecting subunit is used for selecting a preset image with matched content layout and/or matched content style from a preset image library according to the content layout information and the content style information;
the position determining subunit is used for determining the position information of the preset text identity area and the preset image identity area in the preset image;
the mapping subunit is configured to map the preset text identity region and the preset image identity region into the image based on the location information, so as to determine a text identity region and an image identity region to be identified in the image;
and the information processing subunit is used for performing text recognition on the text identity area, taking a recognition result as the text identity information, extracting image content from the image identity area, and taking the image content as the image identity information.
12. A storage medium storing instructions which, when executed by a processor, implement the method of user identification according to any one of claims 1 to 8.
CN201810542908.3A 2018-05-30 2018-05-30 User identity recognition method, device and storage medium Active CN108777806B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810542908.3A CN108777806B (en) 2018-05-30 2018-05-30 User identity recognition method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810542908.3A CN108777806B (en) 2018-05-30 2018-05-30 User identity recognition method, device and storage medium

Publications (2)

Publication Number Publication Date
CN108777806A CN108777806A (en) 2018-11-09
CN108777806B true CN108777806B (en) 2021-11-02

Family

ID=64028125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810542908.3A Active CN108777806B (en) 2018-05-30 2018-05-30 User identity recognition method, device and storage medium

Country Status (1)

Country Link
CN (1) CN108777806B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753961A (en) * 2018-12-26 2019-05-14 国网新疆电力有限公司乌鲁木齐供电公司 A kind of substation's spacer units unlocking method and system based on image recognition
CN110896490B (en) * 2019-12-06 2023-03-07 网易(杭州)网络有限公司 Identity display method, device and equipment and readable storage medium
CN111586427B (en) * 2020-04-30 2022-04-12 广州方硅信息技术有限公司 Anchor identification method and device for live broadcast platform, electronic equipment and storage medium
CN111767438A (en) * 2020-06-16 2020-10-13 上海同犀智能科技有限公司 Identity recognition method based on Hash combined integral
CN115587262B (en) * 2022-12-12 2023-03-21 中国人民解放军国防科技大学 User identity correlation method based on semantic enhancement

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102270296A (en) * 2011-07-05 2011-12-07 上海合合信息科技发展有限公司 Business card information exchanging method based on character recognition and image matching
CN103646123A (en) * 2013-12-27 2014-03-19 珠海市魅族科技有限公司 Data matching method and terminal
WO2017077854A1 (en) * 2015-11-06 2017-05-11 Canon Kabushiki Kaisha Information processing apparatus, method, and medium
CN107465718A (en) * 2017-06-20 2017-12-12 晶赞广告(上海)有限公司 Across the ID recognition methods of application and device, storage medium, terminal
CN107608583A (en) * 2017-09-01 2018-01-19 广东欧珀移动通信有限公司 Using an exchange method, device, mobile terminal and computer-readable recording medium
CN107920138A (en) * 2016-10-08 2018-04-17 腾讯科技(深圳)有限公司 A kind of user's unifying identifier generation method, apparatus and system
CN107959757A (en) * 2017-12-11 2018-04-24 北京小米移动软件有限公司 User information processing method, device, APP servers and terminal device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7787158B2 (en) * 2005-02-01 2010-08-31 Canon Kabushiki Kaisha Data processing apparatus, image processing apparatus, data processing method, image processing method, and programs for implementing the methods
EP2320390A1 (en) * 2009-11-10 2011-05-11 Icar Vision Systems, SL Method and system for reading and validation of identity documents
US8515185B2 (en) * 2009-11-25 2013-08-20 Google Inc. On-screen guideline-based selective text recognition
CN104156694B (en) * 2014-07-18 2019-03-19 百度在线网络技术(北京)有限公司 A kind of method and apparatus of target object in image for identification

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102270296A (en) * 2011-07-05 2011-12-07 上海合合信息科技发展有限公司 Business card information exchanging method based on character recognition and image matching
CN103646123A (en) * 2013-12-27 2014-03-19 珠海市魅族科技有限公司 Data matching method and terminal
WO2017077854A1 (en) * 2015-11-06 2017-05-11 Canon Kabushiki Kaisha Information processing apparatus, method, and medium
CN107920138A (en) * 2016-10-08 2018-04-17 腾讯科技(深圳)有限公司 A kind of user's unifying identifier generation method, apparatus and system
CN107465718A (en) * 2017-06-20 2017-12-12 晶赞广告(上海)有限公司 Across the ID recognition methods of application and device, storage medium, terminal
CN107608583A (en) * 2017-09-01 2018-01-19 广东欧珀移动通信有限公司 Using an exchange method, device, mobile terminal and computer-readable recording medium
CN107959757A (en) * 2017-12-11 2018-04-24 北京小米移动软件有限公司 User information processing method, device, APP servers and terminal device

Also Published As

Publication number Publication date
CN108777806A (en) 2018-11-09

Similar Documents

Publication Publication Date Title
CN108777806B (en) User identity recognition method, device and storage medium
CN110418151B (en) Bullet screen information sending and processing method, device, equipment and medium in live game
CN109525851B (en) Live broadcast method, device and storage medium
US10834479B2 (en) Interaction method based on multimedia programs and terminal device
US20190121537A1 (en) Information displaying method and device, and electronic device
CN109803152B (en) Violation auditing method and device, electronic equipment and storage medium
JP2017509938A (en) INTERACTION METHOD BASED ON MULTIMEDIA PROGRAM AND TERMINAL DEVICE
CN109754329B (en) Electronic resource processing method, terminal, server and storage medium
CN113485617B (en) Animation display method and device, electronic equipment and storage medium
CN113041611B (en) Virtual prop display method and device, electronic equipment and readable storage medium
US20170171621A1 (en) Method and Electronic Device for Information Processing
CN113824983B (en) Data matching method, device, equipment and computer readable storage medium
CN111298434B (en) Service processing method, device, equipment and storage medium
CN112215651A (en) Information prompting method and device, storage medium and electronic equipment
CN114095742A (en) Video recommendation method and device, computer equipment and storage medium
CN113573090A (en) Content display method, device and system in game live broadcast and storage medium
CN114283349A (en) Data processing method and device, computer equipment and storage medium
CN113438492A (en) Topic generation method and system in live broadcast, computer equipment and storage medium
CN111309210B (en) Method, device, terminal and storage medium for executing system functions
CN112055164A (en) Information interaction method, device, terminal and storage medium
WO2023109831A1 (en) Message processing method and apparatus and electronic device
CN109529321B (en) Information processing method, device and storage medium
CN110704656A (en) Picture processing method and device, storage medium and terminal
CN115225930B (en) Live interaction application processing method and device, electronic equipment and storage medium
CN115002496B (en) Information processing method and device of live broadcast platform, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant