CN111988215B - Method, equipment and computer readable medium for pushing user - Google Patents

Method, equipment and computer readable medium for pushing user

Info

Publication number
CN111988215B
CN111988215B (application CN202010802948.4A)
Authority
CN
China
Prior art keywords
user
target
lens
image information
contamination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010802948.4A
Other languages
Chinese (zh)
Other versions
CN111988215A (en)
Inventor
陈文涛 (Chen Wentao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaoxing Jilian Technology Co ltd
Original Assignee
Shanghai Lianshang Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Lianshang Network Technology Co Ltd
Priority to CN202010802948.4A
Publication of CN111988215A
Application granted
Publication of CN111988215B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21: Monitoring or handling of messages
    • H04L51/214: Monitoring or handling of messages using selective forwarding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9536: Search customisation based on social or collaborative filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/52: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Studio Devices (AREA)

Abstract

The application aims to provide a method and equipment for pushing a user, comprising the following steps: receiving target image information sent by first user equipment of a first user, wherein the target image information is sent in response to a sharing event of a social application in the first user equipment; determining a target contamination feature corresponding to the target image information; matching the target contamination feature against one or more lens contamination models; if one of the one or more lens contamination models is successfully matched with the target contamination feature, determining a second user corresponding to that lens contamination model; and pushing the second user to the first user equipment, or sending the first user and the target image information to second user equipment corresponding to the second user. The method and equipment can improve the success rate of establishing friendships between users.

Description

Method, equipment and computer readable medium for pushing user
Technical Field
The present application relates to the field of communications, and in particular, to a technique for pushing users.
Background
With the rapid development of the mobile internet, people's social life is changing day by day. Thanks to the popularization of smartphones, social apps have met new social needs and quickly become an indispensable part of daily life. Users are no longer satisfied with adding only acquaintances or nearby people as friends and are exploring new ways to make friends with strangers. At present, some social apps recommend similar friends to a user according to the user's personal information in the app (such as hobbies and location information), which is also one of the most widespread ways of recommending friends in social networks.
Disclosure of Invention
An object of the present application is to provide a method and an apparatus for pushing a user.
According to an aspect of the present application, there is provided a method for pushing a user, applied to a network device, the method including:
receiving target image information sent by first user equipment of a first user, wherein the target image information is sent in response to a sharing event of a social application in the first user equipment;
determining a target contamination feature corresponding to the target image information;
matching the target contamination feature against one or more lens contamination models;
if one of the one or more lens contamination models is successfully matched with the target contamination feature, determining a second user corresponding to that lens contamination model;
pushing the second user to the first user device; or sending the first user and the target image information to second user equipment corresponding to the second user.
According to another aspect of the present application, there is provided a method for pushing a user, applied to a first user equipment, the method including:
responding to a sharing event of a social application in first user equipment, and sending target image information to network equipment corresponding to the social application, wherein the first user equipment belongs to a first user;
and receiving a second user returned by the network equipment based on the target image information, wherein the lens contamination model corresponding to the second user is successfully matched with the target contamination feature corresponding to the target image information.
According to yet another aspect of the present application, there is provided a method for pushing a user, the method comprising:
in response to a sharing event of a social application in first user equipment, the first user equipment sends target image information to network equipment corresponding to the social application, wherein the first user equipment belongs to a first user;
the network equipment receives the target image information, determines a target contamination feature corresponding to the target image information, and matches the target contamination feature against one or more lens contamination models;
if one of the one or more lens contamination models is successfully matched with the target contamination feature, the network device determines a second user corresponding to that lens contamination model and pushes the second user to the first user device; or sends the first user and the target image information to second user equipment corresponding to the second user;
The first user equipment receives the second user.
According to an aspect of the present application, there is provided a network device for pushing a user, the device comprising:
a first module, configured to receive target image information sent by a first user device of a first user, where the target image information is sent in response to a sharing event of a social application in the first user device;
a second module, configured to determine a target contamination feature corresponding to the target image information;
a third module, configured to match the target contamination feature against one or more lens contamination models;
a fourth module, configured to determine a second user corresponding to a lens contamination model if that model, among the one or more lens contamination models, is successfully matched with the target contamination feature;
a fifth module, configured to push the second user to the first user equipment; or sending the first user and the target image information to second user equipment corresponding to the second user.
According to another aspect of the present application, there is provided a first user equipment for pushing a user, the first user equipment comprising:
a first module, configured to, in response to a sharing event of a social application in the first user equipment, send target image information to network equipment corresponding to the social application, wherein the first user equipment belongs to a first user;
and a second module, configured to receive a second user returned by the network device based on the target image information, where a lens contamination model corresponding to the second user is successfully matched with a target contamination feature corresponding to the target image information.
According to another aspect of the present application, there is provided an apparatus for pushing a user, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of any of the methods described above.
According to another aspect of the application, there is provided a computer readable medium storing instructions that, when executed, cause a system to perform the operations of any of the methods described above.
Compared with the prior art, the network device receives target image information sent by first user equipment, wherein the target image information is sent in response to a sharing event of a social application in the first user equipment; matches the target contamination feature corresponding to the target image information against one or more lens contamination models to determine a second user corresponding to a successfully matched lens contamination model; and pushes the second user to the first user equipment, or sends the first user and the target image information to second user equipment corresponding to the second user. In this way, the user equipment that actually shot the target image shared by the first user, and the user to whom that equipment belongs, are accurately matched. To a certain extent this indicates that the first user and that user share tastes and points of interest, which raises the probability of the two establishing a friend relationship and thus lays a foundation for subsequently doing so. On the other hand, when the network device determines that the first user may have misappropriated another user's original picture, it sends the first user and the target image information to the second user equipment corresponding to the second user, so that the second user can confirm the misappropriation and report it.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a system topology according to the present application;
FIG. 2 illustrates a system method diagram for pushing users according to one embodiment of the present application;
FIG. 3 shows a flowchart of a method for pushing a user, applied to a network device, according to another embodiment of the present application;
FIG. 4 shows a flowchart of a method for pushing a user, applied to a first user equipment, according to yet another embodiment of the present application;
FIG. 5 shows an apparatus diagram of a network device for pushing a user according to one embodiment of the present application;
FIG. 6 shows an apparatus diagram of a first user equipment for pushing a user according to one embodiment of the present application;
FIG. 7 shows a schematic diagram of yet another apparatus for pushing users, according to an embodiment of the present application;
FIG. 8 illustrates an exemplary system that can be used to implement the various embodiments described in this disclosure.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in this application includes, but is not limited to, user equipment, network equipment, or equipment formed by integrating user equipment and network equipment through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smart phone or a tablet computer; the mobile electronic product may employ any operating system, such as the Android or iOS operating system. The network equipment includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, whose hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network equipment includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which a collection of loosely coupled computers forms one virtual supercomputer. The network includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user equipment, the network equipment, or equipment formed by integrating the user equipment and the network equipment, the touch terminal, or the network equipment and a touch terminal through a network.
Of course, those skilled in the art will understand that the above-described devices are merely exemplary, and that other existing or future devices, as applicable to the present application, are intended to be encompassed within its scope and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 illustrates an exemplary scenario of the present application. A first user holds first user equipment in which a social application is installed; the first user equipment establishes a communication connection, wired or wireless, with network equipment corresponding to the social application. Over this connection, the network equipment receives target image information sent by the first user equipment, where the target image information is sent in response to a sharing event of the social application in the first user equipment. The network equipment matches the target contamination feature corresponding to the target image information against one or more lens contamination models to obtain a lens contamination model matching that feature and the second user corresponding to that model, and sends the second user to the first user equipment. The first user equipment includes, but is not limited to, a mobile phone, a tablet, a notebook computer, and the like (having a touch screen).
Referring to the system shown in fig. 1, fig. 2 shows a system method for pushing users according to one embodiment of the present application, the method comprising:
in response to a sharing event of a social application in first user equipment, the first user equipment sends target image information to network equipment corresponding to the social application, wherein the first user equipment belongs to a first user;
the network equipment receives the target image information, determines a target contamination feature corresponding to the target image information, and matches the target contamination feature against one or more lens contamination models;
if one of the one or more lens contamination models is successfully matched with the target contamination feature, the network equipment determines a second user corresponding to that lens contamination model and pushes the second user to the first user equipment; or sends the first user and the target image information to second user equipment corresponding to the second user;
the first user equipment receives the second user.
Fig. 3 shows a method for pushing a user according to an embodiment of the present application, applied to a network device, where the method includes step S101, step S102, step S103, step S104, and step S105.
Specifically, in step S101, a network device receives target image information sent by first user equipment of a first user, where the target image information is sent in response to a sharing event of a social application in the first user equipment. For example, the first user holds first user equipment in which a social application is installed; in response to a sharing event in the social application, the first user equipment sends target image information corresponding to the sharing event to network equipment corresponding to the social application. The sharing event includes, but is not limited to, an operation of sharing a picture, a video, or the like performed by the first user in the social space of the social application, or in a social window of the social application (for example, a single-person or multi-person conversation window). The target image information includes, but is not limited to, a picture or a video containing image information sent by the user equipment.
In step S102, the network device determines a target contamination feature corresponding to the target image information. For example, the network device acquires the target image information and analyzes it to determine the corresponding target contamination feature, which it treats as the contamination feature of the user equipment that captured the target image information (or of the lens used by that user equipment). Means of identifying the target contamination feature include, but are not limited to, an image contamination detection model (e.g., blob detection from OpenCV's image feature detection) and detection methods based on image statistics (e.g., first detecting a contaminated area, then detecting the specific stain). The target contamination feature reflects contamination phenomena in the imaging (e.g., irregular marks, stains, and imaging distortions) produced by mark characteristics specific to the capturing user equipment (e.g., irregular scratches, lens stains, or perspective characteristics unique to that equipment's lens). It will be understood by those skilled in the art that the above methods of identifying the target contamination feature are exemplary only, and that other existing or future identification methods, as applicable to the present application, are intended to be within its scope and are hereby incorporated by reference.
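As a minimal illustration of the stain-detection step described above (the application itself points to OpenCV-style blob detection; the darkness threshold, 4-connectivity rule, and the (centroid, area) feature tuple here are illustrative assumptions, not the patented method):

```python
import numpy as np

def detect_stain_features(image, dark_thresh=60, min_area=20):
    """Find candidate stains as connected dark regions in a grayscale image.

    A simplified stand-in for the blob detection the text mentions:
    returns one (centroid_row, centroid_col, area) tuple per region.
    """
    mask = image < dark_thresh          # candidate stain pixels
    labels = np.zeros(mask.shape, dtype=int)
    features = []
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue                    # already assigned to a region
        current += 1
        labels[sy, sx] = current
        stack, pixels = [(sy, sx)], []
        while stack:                    # flood-fill one connected region
            y, x = stack.pop()
            pixels.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    stack.append((ny, nx))
        if len(pixels) >= min_area:     # ignore sensor noise specks
            ys, xs = zip(*pixels)
            features.append((sum(ys) / len(ys), sum(xs) / len(xs), len(pixels)))
    return features

# Example: a white image with one 6x6 dark smudge
img = np.full((40, 40), 255, dtype=np.uint8)
img[10:16, 20:26] = 10
print(detect_stain_features(img))  # [(12.5, 22.5, 36)]
```

Real deployments would extract richer descriptors (shape, intensity profile, position relative to the optical center), but the (centroid, area) summary is enough to show the pipeline's shape.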
In step S103, the network device matches the target contamination feature against one or more lens contamination models. For example, the one or more lens contamination models correspond respectively to the second user equipment of one or more second users, where each lens contamination model is trained on image information samples shot directly by the camera of the corresponding user equipment. In some embodiments, the matching includes: inputting the target contamination feature into a target lens contamination model corresponding to the first user equipment to determine a matching success probability between the target contamination feature and that model; and, if the matching success probability is smaller than or equal to a first probability threshold, matching the target contamination feature against the one or more lens contamination models. The target lens contamination model characterizes the contamination of the image pickup device in the first user equipment (for example, a camera or camera lens). In other words, the network device first determines whether the target contamination feature was formed by an image shot by the first user equipment itself, and only performs matching against the other lens contamination models once it has excluded the possibility that the target image information was directly shot by the first user equipment's camera, which improves the efficiency of confirming the user equipment from which the target image information originated.
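The two-stage matching order described above can be sketched as follows; the callable model interface and the threshold value are assumptions for illustration, not the application's actual model API:

```python
def match_contamination(target_feature, own_model, candidate_models,
                        p_threshold=0.5):
    """Two-stage matching: first ask the sharer's own lens contamination
    model whether the image came from that device's camera; only when that
    probability is at or below the threshold are other users' models searched.
    Each model is a callable returning a match probability in [0, 1].
    """
    if own_model(target_feature) > p_threshold:
        return None  # shot by the first user's own device; nothing to push
    scores = {user: model(target_feature)
              for user, model in candidate_models.items()}
    user, best = max(scores.items(), key=lambda kv: kv[1])
    return user if best > p_threshold else None

candidates = {"alice": lambda f: 0.9, "bob": lambda f: 0.3}
print(match_contamination("feat", lambda f: 0.2, candidates))  # alice
print(match_contamination("feat", lambda f: 0.8, candidates))  # None
```

Checking the sharer's own model first is what lets the system skip the full model search in the common case that users share their own photos.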
For example, the network device generates the target lens contamination model corresponding to the first user equipment in advance so that target image information can be matched against it first. In some embodiments, the target lens contamination model corresponding to the first user equipment is generated by machine learning training on a plurality of contamination feature samples, where those samples comprise a plurality of sample image information shot directly by the first user equipment. That is, to train the target lens contamination model the network device first needs to acquire training data, which includes the plurality of contamination feature samples. Generating the target lens contamination model in advance provides a basis for subsequent sample matching, and matching after the model is generated allows the origin of the target image information to be determined accurately. In some embodiments, the network device obtains the plurality of sample image information, which provides the sample basis for training the target lens contamination model. In some embodiments, obtaining the plurality of sample image information includes: acquiring source information of one or more local image information items in the first user equipment; and, if the source information of at least one of those items indicates that it was shot by the camera of the first user equipment, determining that at least one item to be the plurality of sample image information.
For example, the network device collects all image information stored in the album of the first user equipment and identifies its source information. The source information includes, but is not limited to: stored after being shot directly by the first user equipment's camera; or obtained by the first user equipment from elsewhere (for example, downloaded from the internet, or transmitted by other users and then stored). Ways of identifying the source information include reading it with an EXIF information viewer, or identifying the format of each image: the format stored after direct capture by the user equipment is generally JPG, while downloaded and cached files are generally PNG and the like. Alternatively, the first user equipment itself identifies the source of all image information stored in the album, for example according to the folder in which each item is stored (image information shot directly through the camera is saved to the system-defined album, distinct from image information stored in other ways). Having obtained the sources, the network device determines the local image information shot directly by a camera of the first user equipment to be the plurality of sample image information. Image information acquired through the camera of the first user equipment matches the training data characteristics required for the target lens contamination model corresponding to that equipment.
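A toy version of the source-classification heuristic described above might look like this; the album paths, folder names, and format rule are illustrative assumptions (a production system would also consult EXIF metadata, as the text notes):

```python
from pathlib import Path

# Assumption: the system camera writes JPEGs into a DCIM/Camera album,
# while downloads and chat transfers land elsewhere (often as PNG).
CAMERA_DIRS = {"DCIM", "Camera"}

def is_camera_capture(path):
    """Heuristically decide whether an album file was shot by the device
    camera, using its folder and file format only."""
    p = Path(path)
    in_camera_album = any(part in CAMERA_DIRS for part in p.parts)
    return in_camera_album and p.suffix.lower() in {".jpg", ".jpeg"}

album = ["/sdcard/DCIM/Camera/IMG_001.jpg",     # shot by this device
         "/sdcard/Download/meme.png",           # downloaded
         "/sdcard/Pictures/Chat/recv_77.jpg"]   # received from another user
samples = [p for p in album if is_camera_capture(p)]
print(samples)  # ['/sdcard/DCIM/Camera/IMG_001.jpg']
```

Only files passing this filter would be used as training samples, since those are the ones whose contamination marks actually come from this device's lens.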
Then, the network device uses the plurality of sample image information as contamination feature samples for training the target lens contamination model; for example, it inputs the contamination feature samples into a neural network model for training, obtaining a trained target lens contamination model. The target lens contamination model corresponding to the first user equipment is used to determine whether the contamination feature of input image information indicates that the image was directly captured by a camera of the first user equipment. For example, the network device inputs the target contamination feature into the target lens contamination model corresponding to the first user equipment to determine a matching success probability between the two; if that probability is smaller than or equal to a first probability threshold, the network device determines that the probability that the target image corresponding to the target contamination feature was directly captured by the first user equipment is below the preset threshold (for example, the target image information was not captured by the current first user's equipment), which provides the basis for the subsequent matching against other lens contamination models. In some embodiments, matching the target contamination feature against one or more lens contamination models includes: determining contamination type information of the target contamination feature; and matching against the one or more lens contamination models according to that contamination type information. Determining the contamination type information of the target contamination feature first improves the efficiency of model matching.
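The following is a deliberately simplified stand-in for the training step: instead of the neural network the text describes, a per-device "model" here is just the centroid of that device's contamination feature vectors, and the matching success probability is a decreasing function of distance to it. The names and the distance-to-probability mapping are assumptions for illustration:

```python
import numpy as np

def train_lens_model(feature_samples):
    """Train a minimal per-device lens contamination model: the centroid of
    the device's contamination feature vectors (a stand-in for the neural
    network mentioned in the text)."""
    X = np.asarray(feature_samples, dtype=float)
    return X.mean(axis=0)

def match_probability(model, feature, scale=10.0):
    """Map distance to the centroid into a (0, 1] pseudo-probability;
    1.0 means the feature sits exactly on the model's centroid."""
    d = float(np.linalg.norm(np.asarray(feature, dtype=float) - model))
    return 1.0 / (1.0 + d / scale)

model = train_lens_model([[1.0, 2.0], [3.0, 2.0]])  # centroid (2, 2)
print(match_probability(model, [2.0, 2.0]))          # 1.0 (exact match)
```

The first-probability-threshold check from the text then becomes a simple comparison, e.g. `match_probability(model, feat) <= 0.5` meaning "probably not shot by this device".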
For example, the target contamination feature includes one or more pieces of contamination type information (e.g., S1, S2, S3). The network device may determine the lens contamination models matching each of S1, S2, and S3 and take their intersection: after matching the feature information of S1, S2, and S3 (the feature information characterizes the contamination of the corresponding contamination type, such as its shape and size) against the one or more lens contamination models, suppose the models matching S1 are M1 and M2, those matching S2 are M2 and M3, and those matching S3 are M2 and M4; the network device then determines M2 to be the model matching the target contamination feature. Alternatively, suppose the target contamination feature again includes contamination type information S1, S2, and S3, with 100, 200, and 300 feature instances respectively; the network device determines that the main contamination type of the target contamination feature is S3, matches the feature information of S3 against the one or more lens contamination models, and determines any model whose matching degree exceeds a preset matching degree threshold to be a model matching the target contamination feature.
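Both selection strategies in the paragraph above, intersecting the per-type matches and falling back to the dominant type, reduce to a few lines; the S1/S2/S3 and M1-M4 labels follow the text's example:

```python
def models_matching_all_types(matches_by_type):
    """Strategy 1: keep only models matched by every contamination type
    (the intersection of the per-type candidate sets)."""
    sets = [set(models) for models in matches_by_type.values()]
    return set.intersection(*sets) if sets else set()

def dominant_type(type_counts):
    """Strategy 2: pick the contamination type with the most feature
    instances, then match only that type's feature information."""
    return max(type_counts, key=type_counts.get)

print(models_matching_all_types({"S1": ["M1", "M2"],
                                 "S2": ["M2", "M3"],
                                 "S3": ["M2", "M4"]}))   # {'M2'}
print(dominant_type({"S1": 100, "S2": 200, "S3": 300}))  # S3
```

The intersection variant is stricter (a model must explain every observed contamination type), while the dominant-type variant trades some precision for fewer comparisons.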
In some embodiments, the contamination type information comprises at least one of:
1) presence of stains;
2) pincushion distortion;
3) barrel distortion;
4) linear distortion;
5) other types of distortion;
6) presence of malformation.
For example, different user devices may differ in the contamination feature types of the image information their cameras capture; on that basis, the network device can determine the user device to which a piece of image information belongs according to its contamination feature type, and in turn determine who shot it. Since the contamination type information covers several types, almost all common contamination can be represented, making lens contamination model matching more widely applicable. In some embodiments, matching against one or more lens contamination models according to the contamination type information includes: determining at least one candidate lens contamination model from the one or more lens contamination models, where, among the plurality of training samples used to train a candidate lens contamination model, there exist target training samples containing the contamination type information and their number is greater than a first number threshold; and matching the feature information corresponding to the contamination type information in the target contamination feature against the at least one candidate lens contamination model.
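The candidate-model filtering just described (keeping only models whose training data contained more than a first number threshold of samples of the target contamination type) can be sketched as follows; the per-model count dictionaries and the threshold value 250 follow the text's M1 walk-through and are otherwise illustrative:

```python
FIRST_NUMBER_THRESHOLD = 250  # the "first number threshold" from the text

def candidate_lens_models(training_type_counts, target_type,
                          threshold=FIRST_NUMBER_THRESHOLD):
    """Keep only models whose training samples contained more than
    `threshold` instances of the target contamination type."""
    return [name for name, counts in training_type_counts.items()
            if counts.get(target_type, 0) > threshold]

# Per-model counts of contamination types seen during training (M1 as in
# the text: 100 stain, 200 malformation, 300 linear-distortion samples).
counts = {"M1": {"stain": 100, "malformation": 200, "linear_distortion": 300},
          "M2": {"linear_distortion": 120, "barrel_distortion": 400}}
print(candidate_lens_models(counts, "linear_distortion"))  # ['M1']
```

Only the surviving candidates then undergo the more expensive feature-level matching, which is where the efficiency gain the text claims comes from.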
For example, the network device obtains statistics on the contamination characteristic samples used to train each of the one or more lens contamination models. Taking one lens contamination model (labeled M1) as an example, the network device determines the contamination type information present in its training samples and counts how often each contamination type occurs, thereby accumulating a count for each contamination type. Suppose that among the training samples of M1, 100 samples contain stains, 200 samples contain deformation, and 300 samples contain linear distortion. If the network device determines that the contamination type of the target contamination characteristic includes linear distortion, then the count of matching training samples for M1 is 300; since this exceeds the first number threshold (e.g., 250), the network device takes M1 as a candidate lens contamination model. Proceeding in the same way, the network device determines at least one candidate lens contamination model, extracts the feature information corresponding to the contamination type information in the target contamination characteristic (the feature information characterizes the contamination of the corresponding contamination type, such as slight differences in shape, size, etc.), and matches this feature information in the at least one candidate lens contamination model. Matching in the candidate lens contamination models according to the feature information of the contamination type allows accurate matching while reducing the number of models to be matched, thereby improving matching efficiency. In some embodiments, the matching, in the at least one candidate lens contamination model, of the feature information corresponding to the contamination type information in the target contamination characteristic includes: taking out a candidate lens contamination model to be matched from the at least one candidate lens contamination model, matching the feature information corresponding to the contamination type information in the target contamination characteristic against the candidate lens contamination model to be matched, and, if the matching is unsuccessful, traversing the at least one candidate lens contamination model until one lens contamination model is successfully matched with the target contamination characteristic.
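The candidate-selection rule described above can be illustrated with a short sketch (the threshold of 250 and the per-model sample counts mirror the M1 example; the data layout is an assumption for illustration only):

```python
def select_candidates(type_counts_per_model, query_type, threshold=250):
    """type_counts_per_model: maps a model id to a dict of
    {contamination type: number of training samples containing it}.
    A model qualifies as a candidate only if its count of samples
    containing the queried type exceeds the first number threshold."""
    return [
        model_id
        for model_id, counts in type_counts_per_model.items()
        if counts.get(query_type, 0) > threshold
    ]

stats = {
    "M1": {"stain": 100, "deformation": 200, "linear_distortion": 300},
    "M2": {"stain": 500, "linear_distortion": 120},
}
print(select_candidates(stats, "linear_distortion"))  # ['M1']
```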
For example, the network device takes out a candidate lens contamination model to be matched from the at least one candidate lens contamination model and matches the feature information of the contamination type information of the target contamination characteristic in that candidate model. If the candidate lens contamination model outputs a matching probability for the feature information that is smaller than a preset probability threshold, the network device determines that the user device corresponding to that candidate model did not capture the target image information, and proceeds to match the subsequent candidate lens contamination models in sequence. When the output obtained after inputting the feature information corresponding to the contamination type information in the target contamination characteristic into a candidate lens contamination model indicates a successful match, the network device stops the subsequent model matching. In this way, the matching accuracy and the probability of a successful match can be improved, providing a basis for subsequently determining the second user corresponding to the successfully matched lens contamination model.
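The sequential traversal with early stopping can be sketched as below (the `match_probability` interface and the threshold value are assumptions for illustration; `FakeModel` stands in for a trained lens contamination model):

```python
def match_in_sequence(candidates, features, prob_threshold=0.8):
    """Traverse candidate models in order; stop at the first one whose
    matching probability clears the preset threshold."""
    for model_id, model in candidates:
        if model.match_probability(features) >= prob_threshold:
            return model_id  # this model's device is deemed to have shot the image
    return None              # no candidate model explains the contamination

class FakeModel:
    """Stand-in for a trained lens contamination model."""
    def __init__(self, p):
        self.p = p
    def match_probability(self, features):
        return self.p

cands = [("M1", FakeModel(0.3)), ("M2", FakeModel(0.9)), ("M3", FakeModel(0.95))]
print(match_in_sequence(cands, features=None))  # M2 (M3 is never evaluated)
```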
In step S104, if one lens contamination model among the one or more lens contamination models is successfully matched with the target contamination characteristic, the network device determines a second user corresponding to that lens contamination model. For example, if there is a lens contamination model successfully matched with the target contamination characteristic, the network device determines the user device corresponding to that model and takes the user to whom the user device belongs as the second user. Here, the network device determines that the target image information was captured by a user device owned by the second user.
In step S105, the network device pushes the second user to the first user device, or sends the first user and the target image information to a second user device corresponding to the second user. For example, after the network device determines the second user, if the second user and the first user are not friends in the social application, the network device sends the profile of the second user (for example, the second user's homepage, electronic business card, or account information in the social application) to the first user device so that the first user can add the second user as a friend; the first user can then actively initiate a friend establishment request to the second user, improving the social experience. As another example, when the network device confirms that target image information published by the first user to a social space of the social application was actually shot by the second user, the network device determines that the first user may have misappropriated an original picture, and sends the first user and the target image information to the second user device corresponding to the second user for the second user to confirm and, if necessary, report. In some embodiments, the target image information includes first user identification information of the first user; the pushing the second user to the first user device includes: determining whether the first user and the second user correspond to different users according to the first user identification information; and, if the first user and the second user correspond to different users, pushing the second user to the first user device.
For example, after the network device determines the second user, it compares the user identification of the second user with the first user identification information carried by the target image information; if the two correspond to different users, the network device pushes the second user to the first user device, thereby achieving an accurate push.
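A minimal sketch of this identity check (the identifier values and return labels are illustrative, not part of the patent): the second user is pushed only when the matched identifier differs from the one carried by the target image information.

```python
def push_decision(first_user_id, second_user_id):
    """Decide the push action after model matching, per the identity check:
    push the second user only if they differ from the publisher."""
    if second_user_id is None:
        return "no_match"
    if first_user_id == second_user_id:
        return "skip"             # the publisher shot it with their own device
    return "push_second_user"     # e.g. send the second user's profile card

print(push_decision("u1", "u2"))  # push_second_user
print(push_decision("u1", "u1"))  # skip
```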
For example, a first user holds a first user device on which a social application is installed. The first user publishes a status (for example, a landscape picture) in a social space of the social application. The network device acquires the picture and identifies the target contamination characteristic corresponding to it (for example, the presence of pincushion distortion), and then matches the target contamination characteristic in a plurality of lens contamination models, where each of the plurality of lens contamination models corresponds to one user device and is trained from image information captured by that user device. If the characteristic is input into one of the lens contamination models and that model's output indicates a successful match, the network device takes the user to whom the corresponding user device belongs as the second user and pushes the second user's homepage to the first user device.
In some embodiments, the method further includes step S106 (not shown). In step S106, the network device acquires second image information sent by a second user device of the second user, where the second image information is sent in response to a sharing event of a social application in the second user device; inputs a second contamination characteristic corresponding to the second image information into a target lens contamination model corresponding to the first user device to determine the matching success probability of the second contamination characteristic and the target lens contamination model; and, if the matching success probability is greater than a second probability threshold, pushes the first user to the second user device and sends a friend-adding success notification to the first user and the second user. For example, the second user holds a second user device on which a social application is installed, and publishes second image information in a social space of the social application; in some embodiments, the publishing time of the second image information is later than that of the target image information. The network device inputs the second contamination characteristic corresponding to the second image information into the target lens contamination model corresponding to the first user device to determine whether the second image information was captured by a camera of the first user device. If the matching success probability of the second contamination characteristic and the target lens contamination model is greater than the second probability threshold, the network device determines that the second image information was captured by the camera of the first user device, infers a component of mutual appreciation between the first user and the second user, and pushes the first user to the second user device. Given that the second user has already been pushed to the first user, the network device can establish friend contact between the first user and the second user, either based on a manual friend-adding operation by the first user or the second user or automatically, thereby improving the efficiency of friend establishment.
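The mutual-match rule of step S106 can be sketched as follows (thresholds and probabilities are illustrative placeholders; the patent only requires each probability to exceed its respective threshold):

```python
def mutual_match(prob_first_model_vs_target, prob_first_model_vs_second,
                 first_threshold=0.8, second_threshold=0.8):
    """True when both directions match: the target image matched some second
    user's model (step S104), and the second user's newly shared image also
    matches the FIRST user's lens contamination model (step S106)."""
    return (prob_first_model_vs_target > first_threshold
            and prob_first_model_vs_second > second_threshold)

# First push already happened (0.9 > 0.8); the second image also matches
# the first user's model (0.85 > 0.8), so a friend link can be established.
print(mutual_match(0.9, 0.85))  # True -> push first user, notify both sides
print(mutual_match(0.9, 0.40))  # False -> no reciprocal push
```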
Fig. 4 shows a method for pushing a user, applied to a first user device, according to an embodiment of the present application; the method includes step S201 and step S202.
Specifically, in step S201, a first user device sends target image information to a network device corresponding to a social application in response to a sharing event of the social application in the first user device, where the first user device belongs to a first user. The sharing event includes, but is not limited to, a sharing operation of a picture, video, or the like performed by the first user in a social network space of the social application, or such a sharing operation performed in a social window of the social application (for example, a single-person or multi-person conversation window); the target image information includes, but is not limited to, a picture or a video containing image information sent by the user device. For example, in response to the sharing event in the social application, the first user device sends the target image information corresponding to the sharing event to the network device corresponding to the social application. In some embodiments, the method further includes, before step S201, step S203 (not shown): in step S203, the first user device enables the matching setting in the social application. For example, on the premise that the matching setting is enabled in the social application, the network device, after receiving the target image information, performs friend matching and pushing for the first user based on the target image information, rather than merely presenting the target image information in the social space of the social application. This provides a basis for the network device to subsequently match based on the target image information.
In step S202, the first user device receives a second user returned by the network device based on the target image information, where a lens contamination model corresponding to the second user has been successfully matched with the target contamination characteristic corresponding to the target image information. For example, after receiving the target image information, the network device performs matching in one or more lens contamination models according to the target contamination characteristic corresponding to the target image information; if one of the one or more lens contamination models is successfully matched with the target contamination characteristic, the network device determines the second user corresponding to that model and pushes the second user to the first user device, so that the second user can subsequently initiate a friend establishment request to the first user.
For example, a first user holds a first user device on which a social application is installed. The first user publishes a status (for example, a landscape picture) in a social space of the social application. The network device acquires the picture and identifies the target contamination characteristic corresponding to it (for example, the presence of pincushion distortion), and then matches the target contamination characteristic in a plurality of lens contamination models, where each of the plurality of lens contamination models corresponds to one user device and is trained from image information captured by that user device. If the characteristic is input into one of the lens contamination models and that model's output indicates a successful match, the network device takes the user to whom the corresponding user device belongs as the second user and pushes the second user's homepage to the first user device.
Fig. 5 shows a network device for pushing a user according to an embodiment of the present application, which includes a one-one module 101, a one-two module 102, a one-three module 103, a one-four module 104, and a one-five module 105.
Specifically, the one-one module 101 is configured to receive target image information sent by a first user device of a first user, where the target image information is sent in response to a sharing event of a social application in the first user device. For example, a first user holds a first user device on which a social application is installed; in response to a sharing event in the social application, the first user device sends target image information corresponding to the sharing event to the network device corresponding to the social application. The sharing event includes, but is not limited to, a sharing operation of a picture, video, or the like performed by the first user in a social network space of the social application, or such a sharing operation performed in a social window of the social application (for example, a single-person or multi-person conversation window); the target image information includes, but is not limited to, a picture or a video containing image information sent by the user device.
The one-two module 102 is configured to determine a target contamination characteristic corresponding to the target image information. For example, the network device acquires the target image information and identifies it to determine the target contamination characteristic, which is taken as the contamination characteristic of the user device that captured the target image information (or of the lens used by that user device). The means for identifying the target image information to obtain the target contamination characteristic include, but are not limited to, an image contamination detection model (e.g., blob detection from OpenCV image feature detection) and detection methods based on image statistics (e.g., first detecting a contaminated area and then detecting the specific stain). The target contamination characteristic reflects contamination phenomena (e.g., irregular marks, stains, and imaging distortions appearing in the imaged picture) corresponding to marking features unique to the user device that captured the target image information (e.g., irregular scratches, lens stains, or a perspective characteristic unique to the device's lens). It will be understood by those skilled in the art that the above methods of identifying and obtaining a target contamination characteristic are merely exemplary, and that other existing or future identification methods, if applicable to the present application, are also intended to be within the scope of the present application and are incorporated herein by reference.
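As a toy stand-in for the statistics-based detection mentioned above (the text references OpenCV blob detection; this pure-Python sketch merely flags pixels much darker than the image average, a crude proxy for a fixed lens stain, and is not the patent's actual detector):

```python
def find_stain_pixels(gray, darkness=0.5):
    """gray: 2-D list of 0-255 intensities. Returns (row, col) positions of
    pixels darker than `darkness` * mean intensity, a naive stain indicator
    for a persistent dark spot contributed by the lens rather than the scene."""
    flat = [v for row in gray for v in row]
    mean = sum(flat) / len(flat)
    return [(r, c)
            for r, row in enumerate(gray)
            for c, v in enumerate(row)
            if v < darkness * mean]

img = [
    [200, 200, 200],
    [200,  10, 200],  # a single dark "stain" pixel in the centre
    [200, 200, 200],
]
print(find_stain_pixels(img))  # [(1, 1)]
```

A real pipeline would of course aggregate such detections across many images from the same device, since only contamination that recurs at a fixed position characterizes the lens.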
The one-three module 103 is configured to match the target contamination characteristic in one or more lens contamination models. For example, the one or more lens contamination models respectively correspond to the second user devices of one or more second users, where each lens contamination model is generated by training directly on image information samples captured by the camera of the corresponding user device. In some embodiments, the matching the target contamination characteristic in one or more lens contamination models includes: inputting the target contamination characteristic into a target lens contamination model corresponding to the first user device to determine the matching success probability of the target contamination characteristic and the target lens contamination model; and, if the matching success probability is smaller than or equal to a first probability threshold, matching the target contamination characteristic in the one or more lens contamination models. This operation is the same as or similar to the embodiment shown in fig. 3, is therefore not repeated here, and is incorporated herein by reference. In some embodiments, the target lens contamination model corresponding to the first user device is generated by machine-learning training on a plurality of contamination feature samples, where the plurality of contamination feature samples include a plurality of pieces of sample image information directly captured by the first user device. This operation is likewise the same as or similar to the embodiment shown in fig. 3 and is incorporated herein by reference. In some embodiments, the network device obtains the plurality of pieces of sample image information.
The operation of obtaining the plurality of pieces of sample image information is the same as or similar to the embodiment shown in fig. 3, is therefore not repeated here, and is incorporated herein by reference. In some embodiments, the obtaining the plurality of pieces of sample image information includes: acquiring source information of one or more pieces of local image information in the first user device; and, if the source information of at least one piece of the local image information indicates that it was captured by the camera of the first user device, determining the at least one piece of local image information as the plurality of pieces of sample image information. This operation is the same as or similar to the embodiment shown in fig. 3 and is incorporated herein by reference. In some embodiments, the matching the target contamination characteristic in one or more lens contamination models includes: determining contamination type information of the target contamination characteristic; and matching in the one or more lens contamination models according to the contamination type information. This operation is the same as or similar to the embodiment shown in fig. 3 and is incorporated herein by reference. In some embodiments, the contamination type information comprises at least one of:
1) stains are present;
2) pincushion distortion;
3) barrel distortion;
4) linear distortion;
5) other types of distortion;
6) presence of deformity; the operation of the related insult type information is the same as or similar to the embodiment shown in FIG. 3, and thus is not described again herein and is incorporated by reference. In some embodiments, the matching in one or more lens blur models according to the blur type information includes: determining at least one candidate lens blur model from the one or more lens blur models, wherein a target training sample containing the blur type information exists in a plurality of training samples for training the candidate lens blur model, and the number of the target training samples is greater than a first number threshold; and matching feature information corresponding to the fouling type information in the target fouling features in the at least one candidate lens fouling model. The operation of matching in one or more lens blur models according to the blur type information is the same as or similar to that of the embodiment shown in fig. 3, and therefore, the detailed description thereof is omitted, and the description is incorporated herein by reference. In some embodiments, the matching, in the at least one candidate lens contamination model, feature information corresponding to the contamination type information in the target contamination feature includes: and taking a candidate lens fouling model to be matched from the at least one candidate lens fouling model, matching the feature information corresponding to the fouling type information in the target fouling feature with the candidate lens fouling model to be matched, and traversing the at least one candidate lens fouling model until one lens fouling model and the target fouling feature are successfully matched if the matching is unsuccessful. The operation of matching the blur type information in the at least one candidate lens blur model is the same as or similar to that of the embodiment shown in fig. 
3, and therefore, the detailed description thereof is omitted, and the disclosure is incorporated herein by reference. A fourth module 104, configured to determine a second user corresponding to one lens blur model if there is one lens blur model in the one or more lens blur models that is successfully matched with the target blur characteristic. For example, if there is a lens blur model successfully matched with the target blur characteristic, the network device determines the user device corresponding to the lens blur model, and takes the user to which the user device belongs as the second user corresponding to the lens blur model. Here, the network device determines that the target image information is captured by the user device owned by the second user.
The one-five module 105 is configured to push the second user to the first user device, or to send the first user and the target image information to a second user device corresponding to the second user. For example, after determining the second user, the network device sends the profile of the second user (e.g., the second user's homepage, electronic business card, or account information in the social application) to the first user device, so that the second user can subsequently be sent a friend establishment request. In some embodiments, the target image information includes first user identification information of the first user; the pushing the second user to the first user device includes: determining whether the first user and the second user correspond to different users according to the first user identification information; and, if the first user and the second user correspond to different users, pushing the second user to the first user device. The operation concerning the target image information including the first user identification information is the same as or similar to the embodiment shown in fig. 3, is therefore not repeated here, and is incorporated herein by reference.
Here, the specific implementation of the one-one module 101, the one-two module 102, the one-three module 103, the one-four module 104, and the one-five module 105 is the same as or similar to the embodiments of step S101, step S102, step S103, step S104, and step S105 in fig. 3, and is therefore not repeated here but is incorporated herein by reference.
In some embodiments, the network device further includes a one-six module 106 (not shown). The one-six module 106 is configured to obtain second image information sent by a second user device of the second user, where the second image information is sent in response to a sharing event of a social application in the second user device; to input a second contamination characteristic corresponding to the second image information into a target lens contamination model corresponding to the first user device so as to determine the matching success probability of the second contamination characteristic and the target lens contamination model; and, if the matching success probability is greater than a second probability threshold, to push the first user to the second user device and send a friend-adding success notification to the first user and the second user.
The specific implementation of the one-six module 106 is the same as or similar to the embodiment of step S106, is therefore not described again here, and is incorporated herein by reference.
Fig. 6 shows a first user device for pushing a user according to an embodiment of the present application, which includes a two-one module 201 and a two-two module 202.
Specifically, the two-one module 201 is configured to send, in response to a sharing event of a social application in a first user device, target image information to a network device corresponding to the social application, where the first user device belongs to a first user. The sharing event includes, but is not limited to, a sharing operation of a picture, video, or the like performed by the first user in a social network space of the social application, or such a sharing operation performed in a social window of the social application (for example, a single-person or multi-person conversation window); the target image information includes, but is not limited to, a picture or a video containing image information sent by the user device. For example, in response to the sharing event in the social application, the first user device sends the target image information corresponding to the sharing event to the network device corresponding to the social application. In some embodiments, the device further includes a two-three module 203 (not shown), operating before the two-one module 201, where the two-three module 203 is configured to enable the matching setting in the social application. The specific implementation of the two-three module 203 is the same as or similar to the embodiment of step S203, is therefore not repeated here, and is incorporated herein by reference.
The two-two module 202 is configured to receive a second user returned by the network device based on the target image information, where a lens contamination model corresponding to the second user has been successfully matched with the target contamination characteristic corresponding to the target image information. For example, after receiving the target image information, the network device performs matching in one or more lens contamination models according to the target contamination characteristic corresponding to the target image information; if one of the one or more lens contamination models is successfully matched with the target contamination characteristic, the network device determines the second user corresponding to that model and pushes the second user to the first user device, so that the second user can subsequently initiate a friend establishment request to the first user.
Here, the specific implementation of the two-one module 201 and the two-two module 202 is the same as or similar to the embodiments of step S201 and step S202 in fig. 4, and is therefore not repeated here but is incorporated herein by reference.
Fig. 7 shows a system for pushing a user according to an embodiment of the present application, wherein:
in response to a sharing event of a social application in first user equipment, the first user equipment sends target image information to network equipment corresponding to the social application, wherein the first user equipment belongs to a first user;
the network device receives the target image information, determines a target contamination characteristic corresponding to the target image information, and matches the target contamination characteristic in one or more lens contamination models;
if one lens contamination model among the one or more lens contamination models is successfully matched with the target contamination characteristic, the network device determines a second user corresponding to that lens contamination model and pushes the second user to the first user device, or sends the first user and the target image information to a second user device corresponding to the second user;
and the first user equipment receives the second user.
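The end-to-end flow of Fig. 7 can be condensed into a short sketch under assumed interfaces (the feature extractor and per-user model callables are placeholders, not the patent's concrete components):

```python
def handle_share(image, models, extract_feature, prob_threshold=0.8):
    """Network-side handling of a share event: extract the contamination
    feature, match it across all registered lens contamination models, and
    return the matched second user (or None) to push back to the sharer."""
    feature = extract_feature(image)
    for user_id, model in models.items():
        if model(feature) >= prob_threshold:
            return user_id       # pushed back to the first user's device
    return None

# Each registered user's model maps a feature to a match probability.
models = {
    "second_user": lambda f: 0.9 if f == "pincushion" else 0.1,
    "other_user":  lambda f: 0.2,
}
result = handle_share("pic.jpg", models,
                      extract_feature=lambda img: "pincushion")
print(result)  # second_user
```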
In addition to the methods and apparatus described in the embodiments above, the present application also provides a computer readable storage medium storing computer code that, when executed, performs the method as described in any of the preceding claims.
The present application also provides a computer program product, which when executed by a computer device performs the method of any of the preceding claims.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
FIG. 8 illustrates an exemplary system that can be used to implement the various embodiments described herein;
in some embodiments, as shown in FIG. 8, the system 300 can be implemented as any of the devices in the various embodiments described. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controllers of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Additionally, some portions of the present application may be applied as a computer program product, such as computer program instructions, which, when executed by a computer, may invoke or provide the method and/or solution according to the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disk, magnetic tape, CD, DVD); or other now known media or later developed that are capable of storing computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (16)

1. A method for pushing a user, applied to a network device, wherein the method comprises:
receiving target image information sent by a first user device of a first user, wherein the target image information is sent in response to a sharing event of a social application in the first user device;
determining a target contamination feature corresponding to the target image information;
matching the target contamination feature against one or more lens contamination models;
if one lens contamination model among the one or more lens contamination models is successfully matched with the target contamination feature, determining a second user corresponding to that lens contamination model; and
pushing the second user to the first user device, or sending the first user and the target image information to a second user device corresponding to the second user.
2. The method of claim 1, wherein the matching the target contamination feature against one or more lens contamination models comprises:
inputting the target contamination feature into a target lens contamination model corresponding to the first user device, to determine a matching success probability of the target contamination feature with the target lens contamination model;
and if the matching success probability is less than or equal to a first probability threshold, matching the target contamination feature against the one or more lens contamination models.
3. The method of claim 2, wherein the method further comprises:
acquiring second image information sent by the second user device of the second user, wherein the second image information is sent in response to a sharing event of a social application in the second user device;
inputting a second contamination feature corresponding to the second image information into the target lens contamination model corresponding to the first user device, to determine a matching success probability of the second contamination feature with the target lens contamination model;
and if the matching success probability is greater than a second probability threshold, pushing the first user to the second user device, and sending a friend-addition success notification to the first user and the second user.
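A minimal sketch of the two-threshold logic in claims 2 and 3; the threshold values and the dictionary standing in for the trained target lens contamination model are assumptions for illustration only.

```python
# Hypothetical sketch of the threshold logic in claims 2-3; the threshold
# values and the dict-based "model" are illustrative assumptions.
FIRST_PROB_THRESHOLD = 0.5   # assumed value
SECOND_PROB_THRESHOLD = 0.8  # assumed value

def match_probability(feature, model: dict) -> float:
    # Stand-in for running inference on the target lens contamination model.
    return model.get(tuple(feature), 0.0)

def should_search_other_models(feature, own_model: dict) -> bool:
    # Claim 2: only search other users' lens contamination models when the
    # shared image does not look like it came from the sender's own lens.
    return match_probability(feature, own_model) <= FIRST_PROB_THRESHOLD

def is_mutual_match(second_feature, first_users_model: dict) -> bool:
    # Claim 3: an image later shared by the second user that matches the
    # first user's model with high probability confirms the pairing.
    return match_probability(second_feature, first_users_model) > SECOND_PROB_THRESHOLD

own_model = {(1, 2): 0.9}
print(should_search_other_models([1, 2], own_model))  # False
print(should_search_other_models([7, 7], own_model))  # True
```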
4. The method of claim 2, wherein the target lens contamination model corresponding to the first user device is generated by machine-learning training on a plurality of contamination feature samples, wherein the plurality of contamination feature samples comprise a plurality of sample image information, and the sample image information is directly captured by the first user device.
5. The method of claim 4, wherein the method further comprises:
acquiring the plurality of sample image information.
6. The method of claim 5, wherein the acquiring the plurality of sample image information comprises:
acquiring source information of one or more pieces of local image information in the first user device;
and if the source information of at least one piece of the local image information indicates that it was captured by a camera of the first user device, determining the at least one piece of local image information as the plurality of sample image information.
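The sample-selection rule of claim 6 can be sketched as a simple filter over local image metadata; the metadata key names (`"source"`, `"device_camera"`) are assumptions, since the patent does not specify a concrete format.

```python
# Sketch of the sample selection rule in claim 6: keep only local images
# whose source metadata says they were captured by the device's own camera.
# The metadata key names are illustrative assumptions.
def select_samples(local_images: list[dict]) -> list[dict]:
    return [
        img for img in local_images
        if img.get("source") == "device_camera"
    ]

photos = [
    {"id": 1, "source": "device_camera"},
    {"id": 2, "source": "downloaded"},
]
print([p["id"] for p in select_samples(photos)])  # [1]
```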
7. The method of any of claims 1 to 6, wherein the matching the target contamination feature against one or more lens contamination models comprises:
determining contamination type information of the target contamination feature;
and matching against the one or more lens contamination models according to the contamination type information.
8. The method of claim 7, wherein the matching against one or more lens contamination models according to the contamination type information comprises:
determining at least one candidate lens contamination model from the one or more lens contamination models, wherein among the plurality of training samples used to train each candidate lens contamination model there exist target training samples containing the contamination type information, and the number of those target training samples is greater than a first number threshold;
and matching, in the at least one candidate lens contamination model, the feature information in the target contamination feature that corresponds to the contamination type information.
9. The method of claim 8, wherein the matching, in the at least one candidate lens contamination model, the feature information in the target contamination feature that corresponds to the contamination type information comprises:
taking a candidate lens contamination model to be matched from the at least one candidate lens contamination model, and matching the feature information in the target contamination feature that corresponds to the contamination type information against the candidate lens contamination model to be matched; if the matching is unsuccessful, traversing the at least one candidate lens contamination model until one lens contamination model is successfully matched with the target contamination feature.
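The candidate filtering and traversal of claims 7 to 9 can be sketched as follows; the dictionary layout, the count threshold, and the per-type signature comparison are illustrative assumptions, not the patent's actual model representation.

```python
# Illustrative sketch of claims 7-9: filter candidate lens contamination
# models by how many of their training samples carry the detected
# contamination type, then traverse the candidates until one matches.
FIRST_NUMBER_THRESHOLD = 10  # assumed minimum count of target training samples

def select_candidates(models: list[dict], contamination_type: str) -> list[dict]:
    # Claim 8: a model qualifies only if enough of its training samples
    # contain the detected contamination type.
    return [
        m for m in models
        if m["type_sample_counts"].get(contamination_type, 0) > FIRST_NUMBER_THRESHOLD
    ]

def traverse_and_match(candidates: list[dict], feature, contamination_type: str):
    # Claim 9: try candidates one by one until a match succeeds.
    for model in candidates:
        # Compare only the part of the feature tied to this contamination type.
        if model["signatures"].get(contamination_type) == feature:
            return model["user_id"]
    return None

models = [
    {"user_id": "u1", "type_sample_counts": {"stain": 3},
     "signatures": {"stain": (0.1,)}},
    {"user_id": "u2", "type_sample_counts": {"stain": 42},
     "signatures": {"stain": (0.7,)}},
]
candidates = select_candidates(models, "stain")
print(traverse_and_match(candidates, (0.7,), "stain"))  # u2
```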
10. The method of claim 7, wherein the contamination type information comprises at least one of:
presence of stains;
pincushion distortion;
barrel distortion;
linear distortion;
other types of distortion;
presence of deformation.
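For clarity only, the contamination types enumerated in claim 10 can be expressed as an enum; the member names are illustrative, not identifiers from the patent.

```python
# The six contamination types of claim 10 as a Python enum (names assumed).
from enum import Enum, auto

class ContaminationType(Enum):
    STAIN = auto()                  # presence of stains
    PINCUSHION_DISTORTION = auto()  # pincushion distortion
    BARREL_DISTORTION = auto()      # barrel distortion
    LINEAR_DISTORTION = auto()      # linear distortion
    OTHER_DISTORTION = auto()       # other types of distortion
    DEFORMATION = auto()            # presence of deformation

print(len(ContaminationType))  # 6
```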
11. The method of claim 1, wherein the target image information includes first user identification information of the first user;
the pushing the second user to the first user device comprises:
determining, according to the first user identification information, whether the first user and the second user correspond to different users; and if the first user and the second user correspond to different users, pushing the second user to the first user device.
12. A method for pushing a user, applied to a first user device, wherein the method comprises:
in response to a sharing event of a social application in the first user device, sending target image information to a network device corresponding to the social application, wherein the first user device belongs to a first user;
and receiving a second user returned by the network device based on the target image information, wherein the lens contamination model corresponding to the second user is successfully matched with the target contamination feature corresponding to the target image information.
13. The method of claim 12, wherein before the sending, in response to a sharing event of a social application in the first user device, target image information to the network device corresponding to the social application, the method further comprises:
enabling a matching setting in the social application.
14. A method for pushing a user, wherein the method comprises:
in response to a sharing event of a social application in a first user device, the first user device sends target image information to a network device corresponding to the social application, wherein the first user device belongs to a first user;
the network device receives the target image information, determines a target contamination feature corresponding to the target image information, and matches the target contamination feature against one or more lens contamination models;
if one lens contamination model among the one or more lens contamination models is successfully matched with the target contamination feature, the network device determines a second user corresponding to that lens contamination model and pushes the second user to the first user device, or sends the first user and the target image information to a second user device corresponding to the second user;
and the first user device receives the second user.
15. An apparatus for pushing a user, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method of any one of claims 1 to 13.
16. A computer-readable medium storing instructions that, when executed by a computer, cause the computer to perform operations of any of the methods of claims 1-13.
CN202010802948.4A 2020-08-11 2020-08-11 Method, equipment and computer readable medium for pushing user Active CN111988215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010802948.4A CN111988215B (en) 2020-08-11 2020-08-11 Method, equipment and computer readable medium for pushing user


Publications (2)

Publication Number Publication Date
CN111988215A CN111988215A (en) 2020-11-24
CN111988215B true CN111988215B (en) 2022-07-12

Family

ID=73434340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010802948.4A Active CN111988215B (en) 2020-08-11 2020-08-11 Method, equipment and computer readable medium for pushing user

Country Status (1)

Country Link
CN (1) CN111988215B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102693534A (en) * 2012-05-25 2012-09-26 北京航空航天大学 Quick image stain removing method based on image inpainting technology
CN109724993A (en) * 2018-12-27 2019-05-07 北京明略软件系统有限公司 Detection method, device and the storage medium of the degree of image recognition apparatus

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN104281650A (en) * 2014-09-15 2015-01-14 南京锐角信息科技有限公司 Friend search recommendation method and friend search recommendation system based on interest analysis
CN107094166A (en) * 2016-12-12 2017-08-25 口碑控股有限公司 A kind of service information sending method and device
US10574881B2 (en) * 2018-02-15 2020-02-25 Adobe Inc. Smart guide to capture digital images that align with a target image model
CN111368219B (en) * 2020-02-27 2024-04-26 广州腾讯科技有限公司 Information recommendation method, device, computer equipment and storage medium
CN111369632A (en) * 2020-03-06 2020-07-03 北京百度网讯科技有限公司 Method and device for acquiring internal parameters in camera calibration



Similar Documents

Publication Publication Date Title
CN107750466B (en) Pairing nearby devices using synchronized alert signals
RU2648625C2 (en) Method and apparatus for determining spatial parameter by using image, and terminal device
CN110336735B (en) Method and equipment for sending reminding message
TW202209151A (en) Network training pedestrian re-identification method and storage medium
CN109388722B (en) Method and equipment for adding or searching social contact
JP2014112834A (en) Super-resolution image generation method, device, computer program product
US11238563B2 (en) Noise processing method and apparatus
CN111162990B (en) Method and equipment for presenting message notification
CN111222509A (en) Target detection method and device and electronic equipment
CN109710866B (en) Method and device for displaying pictures in online document
CN111272388A (en) Method and device for detecting camera flash lamp
CN111988215B (en) Method, equipment and computer readable medium for pushing user
CN112818719A (en) Method and device for identifying two-dimensional code
CN111177062B (en) Method and device for providing reading presentation information
CN105808677A (en) Picture deleting method and device and electronic equipment
US9710893B2 (en) Image resolution modification
CN110765390A (en) Method and equipment for publishing shared information in social space
CN110635995A (en) Method, device and system for realizing interaction between users
CN113657245B (en) Method, device, medium and program product for human face living body detection
US8811756B2 (en) Image compression
CN112702257B (en) Method and device for deleting friend application
CN115100492A (en) Yolov3 network training and PCB surface defect detection method and device
CN110751003B (en) Method and equipment for acquiring target data information of two-dimension code
CN109657514B (en) Method and equipment for generating and identifying two-dimensional code
CN107256151A (en) Processing method, device and the terminal of page rendering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231107

Address after: 312500 Wangjiangshan Village, Nanming Street, Xinchang County, Shaoxing City, Zhejiang Province

Patentee after: Shaoxing Jilian Technology Co.,Ltd.

Address before: 200120 2, building 979, Yun Han Road, mud town, Pudong New Area, Shanghai

Patentee before: SHANGHAI LIANSHANG NETWORK TECHNOLOGY Co.,Ltd.
