CN111291829A - Automatic determination method and system for selected pictures - Google Patents

Automatic determination method and system for selected pictures

Info

Publication number
CN111291829A
Authority
CN
China
Prior art keywords
picture
score
determining
content
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010138602.9A
Other languages
Chinese (zh)
Inventor
谢杨易
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010138602.9A
Publication of CN111291829A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The embodiments of this specification provide a method and system for automatically determining selected pictures. The method comprises: acquiring pictures, and determining at least one picture category based on the pictures by using a clustering algorithm; for each of the at least one picture category, determining a composite score for each picture, the composite score comprising at least a picture quality score, a picture content richness score, and/or a picture freshness score, the picture quality score representing a clarity dimension and/or a content integrity dimension of the picture, the picture content richness score representing the number of categories of content contained in the picture, and the picture freshness score representing a time dimension of the picture; and determining a selected picture based on the composite scores of the pictures.

Description

Automatic determination method and system for selected pictures
Technical Field
The present disclosure relates to the field of data processing, and more particularly, to a method and system for automatically determining selected pictures.
Background
In recent years, with the rapid development of artificial intelligence and big data technology, more and more products collect user suggestions through pictures and text uploaded by users. Service personnel need to extract information from the pictures uploaded by a large number of users and improve products or services based on the information in those pictures. User-uploaded pictures are large in volume and highly repetitive, and few of them are usable, so high-quality pictures need to be screened out of a large number of pictures.
Accordingly, it is desirable to provide a method and system for automatically determining selected pictures.
Disclosure of Invention
One aspect of the present description provides a method for automatically determining selected pictures. The method comprises: acquiring pictures, and determining at least one picture category based on the pictures by using a clustering algorithm; for each of the at least one picture category, determining a composite score for each picture, the composite score comprising at least a picture quality score, a picture content richness score, and/or a picture freshness score, the picture quality score representing a clarity dimension and/or a content integrity dimension of the picture, the picture content richness score representing the number of categories of content contained in the picture, and the picture freshness score representing a time dimension of the picture; and determining a selected picture based on the composite scores of the pictures.
In some embodiments, determining at least one picture category based on the pictures by using a clustering algorithm comprises: encoding the pictures by using a preset algorithm; calculating picture distances based on the encoded pictures, where the picture distances reflect the degree of similarity between pictures; and determining at least one picture category by using a clustering algorithm based on the picture distances.
In some embodiments, the picture distance comprises a cosine distance.
In some embodiments, determining a composite score for each picture comprises, for each of the at least one picture category: determining a picture quality score for each picture by using a picture quality evaluation model; determining a picture content richness score for each picture by using a picture content richness evaluation model; determining a picture freshness score for each picture based on the picture upload time; and determining the composite score of the picture based on its picture quality score, picture content richness score, and picture freshness score.
In some embodiments, determining the composite score of the picture based on its picture quality score, picture content richness score, and picture freshness score comprises: combining the picture quality score, picture content richness score, and picture freshness score of the picture according to preset weight values to obtain the composite score.
In some embodiments, the picture quality evaluation model is obtained by: obtaining sample pictures; labeling each sample picture with a quality score based on its clarity and/or content integrity; and inputting the labeled sample pictures into a first initial model for training to determine the picture quality evaluation model.
In some embodiments, the picture content richness evaluation model is obtained by: obtaining sample pictures; labeling each sample picture with a content richness score based on the number of categories of content contained in it; and inputting the labeled sample pictures into a second initial model for training to determine the picture content richness evaluation model.
In some embodiments, the method further comprises: preprocessing the pictures to clean out pictures with low clarity or incomplete content.
Another aspect of the present description provides a system for automatically determining selected pictures. The system comprises: a clustering module configured to obtain pictures and determine at least one picture category based on the pictures by using a clustering algorithm; a picture evaluation module configured to determine, for each of the at least one picture category, a composite score for each picture, the composite score including at least a picture quality score, a picture content richness score, and/or a picture freshness score, the picture quality score representing a clarity dimension and/or a content integrity dimension of the picture, the picture content richness score representing the number of categories of content contained in the picture, and the picture freshness score representing a time dimension of the picture; and a selection module configured to determine a selected picture based on the composite scores of the pictures.
Another aspect of the present description provides a system for automatically determining selected pictures, wherein the system comprises a processor and a memory; the memory is configured to store instructions that, when executed by the processor, cause the system to implement the method as described above.
Another aspect of the present specification provides a computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the method as described above.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a selected picture automatic determination system, shown in accordance with some embodiments of the present description;
FIG. 2 is a block diagram of a selected picture automatic determination system, shown in accordance with some embodiments of the present description;
FIG. 3 is an exemplary flowchart of a method for automatically determining selected pictures, shown in accordance with some embodiments of the present description;
FIG. 4 is an exemplary flowchart of a picture quality evaluation model determination method, according to some embodiments of the present description; and
FIG. 5 is an exemplary flowchart of a picture content richness evaluation model determination method, according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Although various references are made herein to certain modules or units in a system according to embodiments of the present description, any number of different modules or units may be used and run on the client and/or server. The modules are merely illustrative and different aspects of the systems and methods may use different modules.
Flowcharts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the preceding or following operations are not necessarily performed in the exact order shown. Rather, the various steps may be processed in reverse order or simultaneously. Moreover, other operations may be added to these processes, or one or more steps may be removed from them.
FIG. 1 is a schematic diagram of an application scenario of a selected picture automatic determination system, shown in accordance with some embodiments of the present description.
The selected picture automatic determination system 100 can screen out pictures with high clarity, complete content, and rich content from a large number of pictures. The selected picture automatic determination system 100 may be an online platform including a server 110, a network 120, a user terminal 130, and a database 140. The server 110 may include a processor 112.
In some embodiments, the server 110 may be a single server or a server farm. The server farm can be centralized or distributed (e.g., server 110 can be a distributed system). In some embodiments, the server 110 may be local or remote. For example, server 110 may access information and/or data stored in user terminal 130 and/or database 140 via network 120. As another example, server 110 may be directly connected to user terminal 130 and/or database 140 to access stored information and/or data. In some embodiments, the server 110 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, the like, or any combination of the above. In some embodiments, server 110 may be implemented on a computing device, which may include one or more components.
In some embodiments, the server 110 may include a processor 112. Processor 112 may process information and/or data related to selected picture determination to perform one or more functions described herein. For example, processor 112 may perform calculations based on picture data obtained from user terminal 130 and/or database 140. In some embodiments, processor 112 may include one or more processors (e.g., a single-chip processor or a multi-chip processor). Merely by way of example, the processor 112 may include one or more hardware processors such as a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination of the above.
Network 120 may facilitate the exchange of information and/or data. In some embodiments, one or more components of the selected picture automatic determination system 100 (e.g., server 110, user terminal 130, and database 140) may send information and/or data to other components of the system via network 120. For example, server 110 may obtain picture data from database 140 via network 120. In some embodiments, the network 120 may be a wired network, a wireless network, or any combination of the two. Merely by way of example, network 120 may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near field communication (NFC) network, the like, or any combination of the above. In some embodiments, network 120 may include one or more network switching points. For example, network 120 may include wired or wireless network switching points, such as base stations and/or Internet switching points 120-1, 120-2, ..., through which one or more components of the selected picture automatic determination system 100 may connect to network 120 to exchange data and/or information.
In some embodiments, the user may be a user of the user terminal 130. In some embodiments, the user may obtain a system-selected picture via user terminal 130. For example, the user may operate an application (e.g., click, search, etc.) through the user terminal 130 to obtain a picture selected by the system. As another example, the system may obtain search keywords frequently entered by the user to determine the categories of pictures of interest to the user, and then screen relevant selected pictures from the picture data to recommend to the user. In some embodiments, the system may obtain user-uploaded pictures through user terminal 130 to determine the categories of pictures from which selected pictures are needed. For example, the system may screen out a selected picture from the pictures obtained via the user terminal 130. In some embodiments, the user of the user terminal 130 may be someone other than the terminal's owner. For example, user A of the user terminal 130 may use the user terminal 130 to perform a picture search for user B. In some embodiments, user terminal 130 may receive information and/or instructions from server 110, such as a selected picture recommended by server 110.
In some embodiments, the user terminal 130 may include a mobile device 130-1, a tablet 130-2, a laptop 130-3, an in-vehicle device 130-4, the like, or any combination of the above. In some embodiments, mobile device 130-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, the like, or any combination of the above. In some embodiments, the smart home devices may include smart lighting devices, control devices for smart appliances, smart monitoring devices, smart televisions, smart cameras, intercoms, and the like, or any combination thereof. In some embodiments, the wearable device may include a bracelet, footwear, glasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination of the above. In some embodiments, the smart mobile device may include a mobile phone, a personal digital assistant, a gaming device, a navigation device, a POS machine, a laptop computer, a desktop computer, the like, or any combination of the above. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, virtual reality eyewear, an augmented reality helmet, augmented reality glasses, augmented reality eyewear, and the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include Google Glass, Oculus Rift, HoloLens, Gear VR, and the like. In some embodiments, the in-vehicle device 130-4 may include an in-vehicle computer, an in-vehicle television, or the like.
Database 140 may store data and/or instructions. In some embodiments, database 140 may store data obtained from user terminal 130, such as text, pictures, and the like. In some embodiments, database 140 may store data and/or instructions for execution or use by server 110, which server 110 may execute or use to implement the example methods described herein. In some embodiments, database 140 may include mass storage, removable memory, volatile read-write memory, read-only memory (ROM), the like, or any combination of the above. Exemplary mass storage devices may include magnetic disks, optical disks, solid state disks, and the like. Exemplary removable memories may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and the like. Exemplary volatile read-write memory may include random access memory (RAM). Exemplary random access memories may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitance random access memory (Z-RAM), and the like. Exemplary read-only memories may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM), digital versatile disk read-only memory (DVD-ROM), and the like. In some embodiments, database 140 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, the like, or any combination of the above.
In some embodiments, database 140 may be connected to network 120 to communicate with one or more components (e.g., server 110, user terminal 130, etc.) of the selected picture automatic determination system 100. One or more components of the selected picture automatic determination system 100 may access data or instructions stored in the database 140 via the network 120. In some embodiments, database 140 may be directly connected to or in communication with one or more components (e.g., server 110, user terminal 130, etc.) of the selected picture automatic determination system 100. In some embodiments, database 140 may be part of server 110.
In some embodiments, one or more components of the selected picture automatic determination system 100 (e.g., server 110, user terminal 130, etc.) may have permission to access the database 140. In some embodiments, one or more components of the selected picture automatic determination system 100 may obtain or determine information related to the selected pictures when one or more conditions are satisfied. For example, after acquiring the picture data, the server 110 may encode the pictures acquired from the user terminal 130.
In some embodiments, information interaction by one or more components of the selected picture automatic determination system 100 may be accomplished by requesting a service. The object of the service request may be any product. In some embodiments, the product may be a tangible product or an intangible product. Tangible products may include food, medicine, merchandise, chemical products, appliances, clothing, cars, houses, luxury goods, and the like, or any combination of the above. Intangible products may include service products, financial products, knowledge products, Internet products, and the like, or any combination of the above. The Internet products may include personal host products, website products, mobile Internet products, commercial host products, embedded products, and the like, or any combination of the above. The mobile Internet products may include software, programs, or systems for mobile terminals, or any combination of the above. The mobile terminal may include a tablet, a laptop, a mobile phone, a personal digital assistant (PDA), a smart watch, a POS machine, a vehicle computer, a vehicle television, a wearable device, and the like, or any combination thereof. The product may be, for example, any software and/or application used in a computer or mobile phone. The software and/or application may be related to social interaction, shopping, transportation, entertainment, learning, investment, etc., or any combination of the above.
FIG. 2 is a block diagram of a selected picture automatic determination system, shown in accordance with some embodiments of the present description.
As shown in fig. 2, the processor 112 may include a clustering module 210, a picture evaluation module 220, a selection module 230, and a training module 240. The modules may be all or part of the hardware circuitry of the processor 112. A module may also be an application or a set of instructions that are read and executed by a processor. Further, a module may be a combination of hardware circuitry and applications/instructions. For example, a module may be part of the processor 112 when the processor executes an application/set of instructions.
The clustering module 210 may be used to classify pictures. In some embodiments, the clustering module 210 may determine at least one picture category using a clustering algorithm. In some embodiments, the clustering module 210 may include an obtaining unit 203, a calculation unit 205, and a clustering unit 207. The obtaining unit 203 may be used to obtain pictures. In some embodiments, the obtaining unit 203 may obtain the pictures from the user terminal 130 and/or the database 140. In some embodiments, the obtaining unit 203 may obtain the pictures from a storage device (e.g., the database 140) through the network 120. The calculation unit 205 may be configured to calculate picture distances. In some embodiments, the calculation unit 205 may encode the pictures into feature vectors using an encoding algorithm and calculate the inter-picture distances from the encoded pictures. The clustering unit 207 may be configured to classify the pictures into one or more picture categories. In some embodiments, the clustering unit 207 may divide the pictures into at least one picture category using a clustering algorithm based on the picture distances. For more on clustering pictures, reference may be made to other parts of this specification (such as steps 310-330 in FIG. 3 and their related descriptions), which are not repeated here.
The picture evaluation module 220 may be used to score pictures. In some embodiments, for each of the at least one picture category, the picture evaluation module 220 may determine a composite score for each picture. In some embodiments, the composite score may include a picture quality score, a picture content richness score, and/or a picture freshness score, among others. For more details on the scoring of the pictures, reference may be made to other parts of the description (such as step 340 and the related description thereof), and the details are not repeated here.
The selection module 230 may be used to determine a selected picture based on the composite scores of the pictures. For more details on determining selected pictures, reference may be made to step 350 in FIG. 3 and its associated description, which are not repeated here.
The training module 240 may be used to determine a picture quality evaluation model and/or a picture content richness evaluation model. In some embodiments, the training module 240 may label the obtained sample pictures and determine a picture quality evaluation model and/or a picture content richness evaluation model based on the labeled sample pictures. For more on the picture quality evaluation model and the picture content richness evaluation model, reference may be made to other parts of this specification (such as FIG. 4, FIG. 5 and their related descriptions), which are not repeated here.
It should be understood that the system and its modules shown in FIG. 2 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules in this specification may be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, for example, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the processor 112 and its modules is merely for convenience of description and does not limit the present disclosure to the illustrated embodiments. It will be appreciated by those skilled in the art that, given the teachings of the present system, any combination of modules or sub-system configurations may be used to connect to other modules without departing from such teachings. For example, in some embodiments, the clustering module 210, the picture evaluation module 220, the selection module 230, and the training module 240 disclosed in FIG. 2 may be different modules in a system, or may be a single module that implements the functions of two or more of the above modules. As another example, the processor 112 may also include a communication module to communicate with other components, such as to transmit the selected picture generated by the selected picture determination system to a server or user terminal. The modules in the processor 112 may share one memory module, or each module may have its own memory module. Such variations are within the scope of the present disclosure.
FIG. 3 is an exemplary flowchart of a method for automatically determining selected pictures, shown in accordance with some embodiments of the present description.
Step 310, obtaining pictures. Specifically, this step may be implemented by the clustering module 210 (e.g., the obtaining unit 203).
In some embodiments, the pictures may include pictures uploaded by users. In some embodiments, the uploaded pictures may include one or any combination of personal opinions/suggestions, search content, ratings (e.g., negative reviews, positive reviews, etc.), consultations, and the like. For example, a user may upload a page screenshot taken while using an application through the user terminal 130 to feed back problems of the application and/or service platform, such as "slow page jumps" or "lack of interactive functions", to the service backend. As another example, a user may upload a picture through the user terminal 130 to search for content similar or related to that picture. In some alternative embodiments, the pictures may also include other publicly available pictures, for example, publicly published pictures of individuals or organizations obtained from an open-source database, which this specification does not limit. In some embodiments, the pictures may include one or any combination of people, society, sports, animals, plants, and the like. In some embodiments, information recommendations of interest may be provided to the user based on the obtained user-uploaded pictures. For example, relevant news, articles, videos, and other content can be recommended to the user according to the pictures the user inputs. In some embodiments, better services may be provided to the user based on the obtained user-uploaded pictures. For example, the interactive functions of an application can be improved according to a user-submitted feedback picture about a "lack of interactive functions". It is to be understood that the examples presented here are exemplary only and are not intended to limit the scope of the embodiments. In some embodiments, the clustering module 210 (e.g., the obtaining unit 203) may obtain the pictures from the user terminal 130 and/or the database 140. In some embodiments, the obtaining unit 203 may obtain the pictures from a storage device (such as the database 140) through the network 120. In some alternative embodiments, the obtaining unit 203 may obtain the pictures from an open-source database. In some embodiments, multiple pictures may be acquired. In some embodiments, the acquired pictures may be of the same category or of different categories.
Step 320, calculating the picture distances. Specifically, this step may be implemented by the clustering module 210 (e.g., the calculation unit 205).
The picture distance may be used to reflect the closeness of content between pictures. In some embodiments, the smaller the picture distance, the closer the content of the two pictures. In some embodiments, the calculation unit 205 may encode the pictures using a preset algorithm and calculate the picture distances based on the encoded pictures. Encoding is the process of converting information from one form or format to another, for example, converting text into a text vector or a picture into a picture vector. In some embodiments, the preset algorithm may include, but is not limited to, a combination of one or more of a BERT model (Bidirectional Encoder Representations from Transformers), an ImageNet model, a long short-term memory network (LSTM), word vector encoding, an ELMo model, a GPT model, and the like. By encoding a picture, the picture can be converted into a vector to facilitate its processing by the system (e.g., distance calculation, clustering, etc.). In some embodiments, the calculation unit 205 may encode a picture by extracting its features. For example, the calculation unit 205 may extract features of the picture and encode it using an ImageNet-based model. The ImageNet dataset contains 1.2 million labeled images from 1,000 classes, and features extracted by a model trained on ImageNet not only have scale invariance and representativeness but are also more comprehensive and richer. For example, the calculation unit 205 may directly input the picture into the ImageNet-based model and obtain an output high-dimensional picture feature vector (i.e., the encoded picture).
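To make the encoding step concrete, the following is a minimal sketch of ImageNet-based picture encoding, assuming PyTorch and torchvision are available; the specification does not name a particular network, so a ResNet-50 pretrained on ImageNet stands in here as the feature extractor.

```python
# A minimal sketch of ImageNet-based picture encoding (assumed tooling:
# PyTorch + torchvision; the backbone choice is illustrative).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Drop the final classification layer so the network outputs a feature vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
encoder = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def encode_picture(path: str) -> torch.Tensor:
    """Encode one picture into a high-dimensional feature vector."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        feat = encoder(preprocess(img).unsqueeze(0))
    return feat.flatten()  # shape: (2048,) — the "encoded picture"
```

The vector returned for each picture is the encoded picture used in the distance calculation below.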
In some embodiments, the calculation unit 205 may calculate the picture distance as the cosine distance between encoded pictures. The cosine distance measures the difference between two individuals using the cosine of the angle between their vectors in a vector space; it mainly reflects whether the two vectors point in the same direction and is insensitive to absolute magnitudes. For example, for two pictures X and Y, when the angle between their vectors tends to 0, the cosine value approaches 1, indicating that the two pictures are close; when the angle tends to 90 degrees, the cosine value approaches 0, indicating that the two pictures are not close. For the acquired pictures, the selection of selected pictures is more concerned with the relative difference of different pictures in content (e.g., whether the contents contained in two pictures are similar), so using the cosine distance can improve the accuracy of selected picture determination. In some embodiments, the picture distance may take any reasonable range of values; for example, the cosine distance may lie in the range [0, 1]. In some embodiments, the sum of the picture distance and the picture similarity is 1: for example, if the picture similarity is 0, the picture distance is 1; if the picture similarity is 0.6, the picture distance is 0.4. Accordingly, a picture distance near 0 indicates that the two vectors are close (i.e., the pictures contain similar content), and a distance near 1 indicates that they are far apart (i.e., the contents differ greatly). In some embodiments, the picture distance may also be determined by other distance algorithms, which this specification does not limit; for example, one or a combination of the Euclidean distance, Jaccard distance, Manhattan distance, edit distance, and the like may be used. In some embodiments, the picture distances may include the distance between every two of the obtained pictures, the distance between adjacent pictures, and/or the distance between related pictures (e.g., the distance between an unclustered picture and a cluster-center picture). For example, for pictures A, B, and C, the calculation unit 205 may calculate the distances between A and B, A and C, and B and C; or between A and B, and B and C; or only between A and B (or A and C, or B and C).
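A minimal NumPy sketch of this distance, defined as 1 minus the cosine similarity so that a value near 0 means similar content and a value near 1 means unrelated content (consistent with distance + similarity = 1):

```python
# Cosine distance between encoded pictures; a minimal NumPy sketch.
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between two feature vectors: 1 - cosine similarity."""
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - float(sim)

def pairwise_distances(features: np.ndarray) -> np.ndarray:
    """Full pairwise cosine-distance matrix for an (n, d) feature array."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    return 1.0 - normed @ normed.T
```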
In some embodiments, the clustering module 210 may preprocess the pictures. Preprocessing can clean out pictures with low clarity or incomplete content and standardize the pictures (e.g., scale them to a uniform size) to facilitate subsequent processing. In some embodiments, the picture preprocessing may be manual and/or automated. For example, the clustering module 210 may filter out lower-clarity pictures by setting a picture clarity threshold based on an edge analysis method, a transform-domain method, and/or a pixel statistics method.
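As an illustration of the clarity pre-filter, the sketch below uses the variance of the Laplacian, one common edge-analysis sharpness measure, with OpenCV; the threshold value is an assumption chosen for illustration, since the text only says a clarity threshold is set.

```python
# A sketch of an edge-analysis clarity filter (assumed tooling: OpenCV;
# the threshold of 100.0 is illustrative and would be tuned in practice).
import cv2

def is_sharp_enough(path: str, threshold: float = 100.0) -> bool:
    """Keep a picture only if its Laplacian variance meets the threshold."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False  # unreadable pictures are cleaned out as well
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold
```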
Step 330, determining at least one picture category by using a clustering algorithm based on the picture distance. Specifically, this step may be implemented by the clustering module 210 (e.g., clustering unit 207).
In some embodiments, the clustering unit 207 may classify the pictures into at least one picture category using a clustering algorithm based on the picture distances. Pictures within each picture category have higher similarity (their contents are closer), while pictures across different picture categories have lower similarity (their contents differ more). In some embodiments, the number of pictures per picture category may be the same or different. For example, one picture category may contain 5 pictures and another 7 pictures. As another example, two picture categories may each contain 6 pictures. In some embodiments, the clustering algorithm may include one or any combination of DBSCAN (Density-Based Spatial Clustering of Applications with Noise), K-means clustering, OPTICS (Ordering Points To Identify the Clustering Structure), HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise), and the like.
Preferably, the clustering unit 207 may implement picture clustering using the HDBSCAN clustering algorithm. HDBSCAN is an optimization of the DBSCAN algorithm; its biggest difference from DBSCAN is that neither the radius nor the number of clusters needs to be known or specified in advance, so it can handle clusters of different densities. When the selected picture determination method provided by the embodiments of this specification clusters pictures, the number and radii of the clusters to be formed (i.e., how many picture categories there will be and how large each is) cannot be predicted in advance; HDBSCAN clustering increases the robustness of the clustering algorithm to noise points and removes unwanted pictures, so the result better matches the requirements and the accuracy and efficiency of selected picture determination are improved. It should be noted that different embodiments may produce different advantages; in different embodiments, any one or a combination of the above advantages, or any other advantage, may be obtained.
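A minimal clustering sketch using the open-source hdbscan package (a tooling assumption; the text names the algorithm, not a library) over the precomputed cosine-distance matrix. Neither the number of clusters nor their radii is specified in advance, and pictures labeled -1 are the noise points HDBSCAN removes:

```python
# HDBSCAN over a precomputed cosine-distance matrix (assumed tooling:
# the hdbscan package; min_cluster_size is an illustrative parameter).
import hdbscan
import numpy as np

def cluster_pictures(dist_matrix: np.ndarray, min_cluster_size: int = 5):
    """Assign each picture a category label; -1 marks noise pictures."""
    clusterer = hdbscan.HDBSCAN(metric="precomputed",
                                min_cluster_size=min_cluster_size)
    # The precomputed metric expects double precision.
    labels = clusterer.fit_predict(dist_matrix.astype(np.float64))
    return labels  # labels[i] is the picture category of picture i
```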
Step 340, for each of the at least one picture category, determining a composite score for each picture. Specifically, this step may be implemented by the picture evaluation module 220.
In some embodiments, the composite score of a picture may include one of a picture quality score, a picture content richness score, a picture freshness score, or any combination thereof. In some embodiments, the picture evaluation module 220 may determine a quality score for each picture using a picture quality evaluation model. In some embodiments, the picture quality evaluation model may include a deep learning model. For example, the deep learning model may include, but is not limited to, a recurrent neural network model, a convolutional neural network model, a long short-term memory network model, and the like. More details about the picture quality evaluation model can be found elsewhere in this specification (e.g., FIG. 4 and its related description) and are not repeated here. In some embodiments, the picture quality score may represent the clarity dimension and/or content integrity dimension of the picture. In some embodiments, the picture quality score may be in the range of 0-10: the higher the picture quality score, the higher the picture quality. For example, a clear and content-complete picture may have a quality score of 10. In some embodiments, the clustered pictures are input into the picture quality evaluation model, which outputs the picture quality score corresponding to each picture.
In some embodiments, the picture evaluation module 220 may determine a picture content richness score for each picture using a picture content richness evaluation model. In some embodiments, the picture content richness score may represent the number of categories of content contained in the picture. The more kinds of content a picture contains, the higher its content richness score. For example, a picture that contains animals, plants, and buildings at the same time has a higher content richness score than a picture that contains only animals, plants, or buildings. In some embodiments, the picture content richness evaluation model may include a deep learning model. For example, the deep learning model may include, but is not limited to, a recurrent neural network model, a convolutional neural network model, a long short-term memory network model, and the like. More details about the picture content richness evaluation model can be found elsewhere in this specification (e.g., FIG. 5 and its related description) and are not repeated here.
In some embodiments, the picture evaluation module 220 may determine a picture freshness score for each picture based on the picture upload time. The closer the upload time of a picture is to the current time, the higher its freshness score. For example, a picture uploaded on April 15 has a higher freshness score than a picture uploaded on April 5. In some embodiments, the picture evaluation module 220 may determine the composite score of a picture based on its picture quality score, picture content richness score, and picture freshness score. In some embodiments, the picture evaluation module 220 may combine these scores according to preset weight values to determine the composite score. For example, the weight of the picture quality score may be set to 50%, the weight of the picture content richness score to 30%, and the weight of the picture freshness score to 20%. In some embodiments, the preset weight values may be any reasonable values, which this specification does not limit.
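The weighted combination can be sketched as follows, using the example weights above (50%/30%/20%); the exponential freshness decay is an illustrative assumption, since the text only requires that newer uploads score higher:

```python
# Composite scoring with the example preset weights; the half-life decay
# for freshness is an assumption made for illustration.
from datetime import datetime

W_QUALITY, W_RICHNESS, W_FRESHNESS = 0.5, 0.3, 0.2

def freshness_score(upload_time: datetime, now: datetime,
                    half_life_days: float = 30.0) -> float:
    """Map upload age onto 0-10, the newest pictures scoring highest."""
    age_days = max((now - upload_time).total_seconds() / 86400.0, 0.0)
    return 10.0 * 0.5 ** (age_days / half_life_days)

def composite_score(quality: float, richness: float, freshness: float) -> float:
    """Weighted sum of the three per-picture scores."""
    return W_QUALITY * quality + W_RICHNESS * richness + W_FRESHNESS * freshness
```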
Step 350, determining the selected picture based on the composite scores of the pictures. Specifically, this step may be implemented by the selection module 230.
In some embodiments, for each of the at least one picture category, the selection module 230 may determine the picture with the highest composite score in the category as a selected picture. In some embodiments, for each of the at least one picture category, the selection module 230 may determine the selected pictures based on the ranking of the composite scores. For example, the selection module 230 may sort the composite scores of all pictures in descending order and select the top L pictures as the selected pictures of that group, where L may be any integer, such as 3, 5, or 7. In some embodiments, the ranking may be produced by a ranking model, manual ranking, rule-based ranking, and the like, which this specification does not limit. In some embodiments, the selection module 230 may determine the selected pictures based on a preset score threshold. For example, the composite score threshold may be set to 8 points, and pictures with composite scores above 8 points in each category are determined as selected pictures.
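A sketch of this per-category selection, supporting both the top-L ranking and the optional score threshold (the function name and signature are illustrative):

```python
# Per-category selection: top-L by composite score, optionally after a
# threshold cut; a minimal sketch of the selection step described above.
from collections import defaultdict

def select_pictures(labels, scores, top_l=3, threshold=None):
    """Return {category: [picture indices]} of the selected pictures."""
    by_category = defaultdict(list)
    for idx, (label, score) in enumerate(zip(labels, scores)):
        if label == -1:  # skip noise pictures removed during clustering
            continue
        if threshold is not None and score <= threshold:
            continue
        by_category[label].append((score, idx))
    return {cat: [i for _, i in sorted(items, reverse=True)[:top_l]]
            for cat, items in by_category.items()}
```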
By classifying the pictures first and then scoring the pictures within each category, the highest-scoring pictures in each category are chosen as that category's selected pictures, which reduces the repetitiveness of the selected pictures. For example, for a mobile phone public transit service (i.e., a user can download an electronic bus card to the phone to swipe onto buses and subways), the system may group screenshots uploaded by users in different areas about failed payments or card swipes into category A, group screenshots uploaded by different users about positioning failures into category B, and select selected pictures within category A and category B separately. In this way, the determined selected pictures can cover both category A and category B. In some embodiments, the determined selected pictures may be used to improve the service performance of the application and/or service platform. For example, from the selected pictures chosen from category A and/or category B above, the problems "payment anomaly" and/or "positioning anomaly" fed back by users can be extracted, and on that basis the payment and/or positioning of the mobile phone transit service can be improved to provide better service to users. It will be appreciated that the above application scenarios of selected pictures are merely exemplary and do not limit this specification. In some alternative embodiments, the selected pictures may be used in any other reasonable scenario, for example, recommending relevant selected pictures to a user based on the user's personal characteristics.
It should be noted that the above description of the process 300 is for illustration and description only and does not limit the scope of the present disclosure. Various modifications and changes to flow 300 will be apparent to those skilled in the art in light of this description; such modifications and variations remain within its scope. For example, in step 310, the pictures may be initially clustered, then the distances between the cluster-center picture and the other pictures may be calculated, and accurate clustering may be performed based on those distances.
FIG. 4 is an exemplary flowchart of a picture quality evaluation model determination method according to some embodiments of the present description. Specifically, this process may be implemented by the training module 240.
Step 410, obtaining sample pictures.
In some embodiments, the sample pictures may include selected pictures and/or non-selected pictures. In some embodiments, the sample pictures may include pictures in a publicly accessible data set. In some embodiments, the sample pictures may include pictures of one or more types such as scenery, people, animals, plants, ball games, chess, and the like. In some embodiments, training module 240 may obtain sample pictures from user terminal 130 and/or database 140. In some embodiments, training module 240 may retrieve sample pictures from a storage device (e.g., database 140) via network 120. In some embodiments, training module 240 may obtain sample pictures from an open-source database.
Step 420, labeling the sample pictures.
In some embodiments, the training module 240 may label each sample picture based on a combination of one or more of its clarity, content integrity, and the like. In some embodiments, the training module 240 may label the quality score of a sample picture based on its clarity and content integrity. In some embodiments, the quality score of a sample picture may be in the range of 0-10. For example, a picture that is clear and complete in content may be labeled 10 points, a picture that is clear but incomplete in content may be labeled 7 points, and a picture that is unclear and incomplete in content may be labeled 3 points. In some alternative embodiments, the quality score of the sample picture may take any other reasonable range, such as 0-5, which this specification does not limit. In some embodiments, the labeling of the sample pictures may be manual and/or computerized. For example, the clarity and content integrity of a picture may be judged manually; alternatively, they may be judged by an image processing algorithm; or the clarity may be judged manually while the content integrity is determined by an image processing algorithm. In some embodiments, the system may randomly divide the labeled sample data (i.e., the sample pictures) into a training set and a test set according to a certain proportion. In some embodiments, the division may be 80% training set and 20% test set, or any other proportion. The training set can be used to train and determine the picture quality evaluation model; the test set can be used to test the picture quality evaluation model obtained by training.
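The random 80/20 split can be sketched with scikit-learn (an assumed tooling choice; any random partition serves):

```python
# A minimal sketch of the random train/test split described above,
# assuming scikit-learn is available.
from sklearn.model_selection import train_test_split

def split_dataset(sample_paths, quality_scores, test_ratio=0.2):
    """Randomly partition labeled sample pictures into training and test sets."""
    return train_test_split(sample_paths, quality_scores,
                            test_size=test_ratio, random_state=42)

# Usage: train_x, test_x, train_y, test_y = split_dataset(paths, scores)
```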
Step 430, inputting the labeled sample pictures into a first initial model for training, and determining the picture quality evaluation model.
In some embodiments, the first initial model may comprise a machine learning model. For example, the machine learning model may include a combination of one or more of an RNN (recurrent neural network) model, a CNN (convolutional neural network) model, and the like. In some embodiments, the training module 240 may train the first initial model using the labeled sample pictures as its input and the labeled scores as reference labels. In some embodiments, the training module 240 may preprocess the training set using a pre-trained model. In some embodiments, the training module 240 may directly input the training set into the machine learning model for training. In some embodiments, the output of the picture quality evaluation model may be the quality evaluation value of a picture.
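A minimal sketch of what the first initial model and its training step might look like: a pretrained CNN backbone with a regression head that predicts the 0-10 quality score. The backbone choice (ResNet-18), loss, and optimizer handling are assumptions for illustration; the specification only requires a machine learning model trained on the labeled scores.

```python
# An illustrative first initial model: pretrained CNN + regression head
# (assumed tooling: PyTorch + torchvision; architecture is a sketch).
import torch
import torch.nn as nn
import torchvision.models as models

class QualityModel(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # regression head
        self.net = backbone

    def forward(self, x):               # x: (batch, 3, 224, 224)
        return self.net(x).squeeze(-1)  # predicted 0-10 quality scores

def train_step(model, images, scores, optimizer, loss_fn=nn.MSELoss()):
    """One training step; the labeled scores serve as reference labels."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```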
It should be noted that the above description related to the flow 400 is only for illustration and description, and does not limit the applicable scope of the present specification. Various modifications and changes to flow 400 will be apparent to those skilled in the art in light of this description. However, such modifications and variations are intended to be within the scope of the present description.
Fig. 5 is an exemplary flowchart of a picture content richness evaluation model determination method shown in some embodiments according to the present description.
Step 510, obtaining sample pictures.
In some embodiments, training module 240 may obtain sample pictures from user terminal 130 and/or database 140. In some embodiments, training module 240 may retrieve sample pictures from a storage device (e.g., database 140) via network 120. In some embodiments, training module 240 may obtain sample pictures from an open-source database. For further details of sample picture acquisition, reference may be made to the rest of this specification (see FIG. 4 and its related description), which are not repeated here.
Step 520, labeling the sample pictures.
In some embodiments, the training module 240 may label a sample picture based on the content it contains. In some embodiments, the training module 240 may label a sample picture based on the number of categories of content it contains. In some embodiments, training module 240 may label the sample pictures with different scores based on the number of categories of content contained in them. In some embodiments, the annotation score of a sample picture may be in the range of 0-10. In some embodiments, the greater the number of categories of content in a sample picture, the higher its annotation score. For example, a picture containing characters, icons, and symbols may be labeled 7 points, a picture containing only icons may be labeled 3 points, and a picture containing icons and characters may be labeled 5 points. As another example, a picture containing people, animals, plants, and buildings may be labeled 5 points, a picture containing people, plants, and animals 3 points, a picture containing people and plants 2 points, a picture containing only people 1 point, and so on. In some alternative embodiments, the annotation score of the sample picture may take any reasonable range, such as 0-5 points or 0-3 points, which this specification does not limit. In some embodiments, the annotation may be manual and/or computerized. The labeling of the sample pictures in this step is similar to that in step 420 and is not repeated here.
Step 530, inputting the labeled sample pictures into a second initial model for training, and determining the picture content richness evaluation model.
In some embodiments, the second initial model may comprise a machine learning model. For example, the machine learning model may include a combination of one or more of an RNN (recurrent neural network) model, a CNN (convolutional neural network) model, and the like. In some embodiments, the training module 240 may train the second initial model using the labeled sample pictures as its input and the labeled scores as reference labels. The training of the picture content richness evaluation model is similar to that of the picture quality evaluation model; for more details, refer to step 430, which is not repeated here.
It should be noted that the above description related to the flow 500 is only for illustration and description, and does not limit the applicable scope of the present specification. Various modifications and changes to flow 500 may occur to those skilled in the art, given the benefit of this description. However, such modifications and variations are intended to be within the scope of the present description.
Possible beneficial effects of the embodiments of this specification include: (1) determining selected pictures algorithmically allows them to be generated automatically and quickly, improving the efficiency and accuracy of selection; (2) determining selected pictures by clustering and then scoring groups pictures with similar themes into one category, reducing the repetition rate of the selected pictures. It should be noted that different embodiments may produce different advantages; in different embodiments, any one or a combination of the above advantages, or any other advantage, may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, this specification uses specific words to describe its embodiments. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of this specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, particular features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present description may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, aspects of this description may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block", "module", "engine", "unit", "component", or "system". Furthermore, aspects of the present description may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP, dynamic programming languages such as Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed subject matter may lie in less than all features of a single disclosed embodiment.
Some embodiments use numbers to describe quantities of components and attributes; it should be understood that such numbers used in the description of the embodiments are qualified in some instances by the modifier "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ an ordinary rounding approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope in some embodiments of this specification are approximations, in specific examples such numerical values are set as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this specification is hereby incorporated herein by reference in its entirety. Application history documents that are inconsistent with or in conflict with the contents of this specification are excluded, as are documents that limit the broadest scope of the claims now or later associated with this specification. It is to be understood that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the materials accompanying this specification and those set forth herein, the descriptions, definitions, and/or use of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (18)

1. A method for automatically determining a selected picture, comprising:
acquiring pictures, and determining at least one picture category based on the pictures by using a clustering algorithm;
for each of the at least one picture category, determining a composite score for each picture, the composite score comprising at least a picture quality score, a picture content richness score and/or a picture freshness score, the picture quality score representing a clarity dimension and/or a content integrity dimension of the picture, the picture content richness score representing the number of categories of content contained in the picture, and the picture freshness score representing a time dimension of the picture; and
determining a selected picture based on the composite score of each picture.
2. The method for automatically determining a selected picture of claim 1, wherein the determining at least one picture category based on the pictures by using a clustering algorithm comprises:
encoding the pictures by using a preset algorithm;
calculating picture distances based on the encoded pictures, wherein the picture distances represent the degree of similarity between the pictures; and
determining at least one picture category by using a clustering algorithm based on the picture distances.
3. The method for automatically determining a selected picture of claim 2, wherein the picture distance comprises a cosine distance.
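For illustration only, the clustering of claims 2-3 may be sketched as follows: the pictures are encoded into vectors by a preset algorithm (a CNN encoder is assumed here), cosine distances between the encodings measure the degree of similarity, and a clustering algorithm groups similar pictures into one category. The choice of encoder and the DBSCAN parameters are illustrative assumptions, not limitations of the claims.

import torch
import torchvision.models as models
from sklearn.cluster import DBSCAN

encoder = models.resnet18(weights=None)  # stand-in preset encoding algorithm
encoder.fc = torch.nn.Identity()  # expose the 512-d pooled features as the code
encoder.eval()

pictures = torch.randn(32, 3, 224, 224)  # stand-in for the acquired pictures
with torch.no_grad():
    codes = encoder(pictures).numpy()

# DBSCAN with the cosine metric puts pictures whose encodings are close,
# i.e. pictures with a high degree of similarity, into the same category.
labels = DBSCAN(eps=0.3, min_samples=2, metric="cosine").fit_predict(codes)
print(labels)  # one picture category per distinct non-negative label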
4. The method for automatically determining a selected picture of claim 1, wherein the determining a composite score for each picture for each of the at least one picture category comprises:
for each of the at least one picture category, determining a picture quality score for each picture using a picture quality evaluation model;
determining the picture content richness score of each picture by using a picture content richness evaluation model;
determining a picture freshness score of each picture based on the upload time of the picture;
determining the composite score for the picture based on the picture quality score, picture content richness score, and picture freshness score for the picture.
5. The method for automatically determining a selected picture of claim 4, wherein the determining the composite score for the picture based on the picture quality score, picture content richness score, and picture freshness score of the picture comprises:
determining the composite score of the picture from the picture quality score, the picture content richness score, and the picture freshness score of the picture according to preset weight values.
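For illustration only, the composite scoring of claims 4-5 may be sketched as follows. The preset weight values, the exponential freshness decay, and the 0-10 score range are illustrative assumptions; the claims only require that the picture quality score, picture content richness score, and picture freshness score be combined according to preset weight values.

import math
import time

WEIGHTS = {"quality": 0.5, "richness": 0.3, "freshness": 0.2}  # preset weights

def freshness_score(upload_ts: float, now: float, half_life_days: float = 30.0) -> float:
    """Map the picture upload time to a 0-10 score that decays with age."""
    age_days = (now - upload_ts) / 86400.0
    return 10.0 * math.exp(-math.log(2) * age_days / half_life_days)

def composite_score(quality: float, richness: float, freshness: float) -> float:
    return (WEIGHTS["quality"] * quality
            + WEIGHTS["richness"] * richness
            + WEIGHTS["freshness"] * freshness)

# Selecting one picture per category: the highest composite score wins.
now = time.time()
category = [
    {"id": "a", "quality": 8.0, "richness": 5.0, "upload_ts": now - 2 * 86400},
    {"id": "b", "quality": 6.5, "richness": 9.0, "upload_ts": now - 40 * 86400},
]
best = max(category, key=lambda p: composite_score(
    p["quality"], p["richness"], freshness_score(p["upload_ts"], now)))
print(best["id"])  # the selected picture of this category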
6. The method for automatically determining a selected picture according to claim 4, wherein the picture quality evaluation model is obtained by:
obtaining a sample picture;
labeling the quality score of the sample picture based on the definition and/or content integrity of the sample picture;
and inputting the labeled sample picture into a first initial model for training, and determining the picture quality evaluation model.
7. The method for automatically determining a selected picture of claim 4, wherein the picture content richness evaluation model is obtained by:
obtaining a sample picture;
labeling the content richness score of the sample picture based on the number of categories of content contained in the sample picture;
and inputting the labeled sample picture into a second initial model for training, and determining the picture content richness evaluation model.
8. The method for automatically determining a selected picture of claim 1, further comprising:
preprocessing the pictures, and cleaning out pictures with low definition or incomplete content.
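For illustration only, the preprocessing of claim 8 may be sketched with the variance of the Laplacian as a stand-in definition (sharpness) measure. The OpenCV-based check, the threshold value, and treating unreadable files as incomplete content are illustrative assumptions.

import cv2

def is_clean(path: str, blur_threshold: float = 100.0) -> bool:
    img = cv2.imread(path)
    if img is None:  # unreadable or truncated file: treat as incomplete content
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= blur_threshold  # below the threshold: low definition

# Hypothetical paths; only pictures passing the check enter clustering.
pictures = [p for p in ["a.jpg", "b.jpg"] if is_clean(p)]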
9. A system for automatically determining a selected picture, comprising:
a clustering module configured to obtain pictures, determine at least one picture category based on the pictures using a clustering algorithm;
a picture evaluation module configured to determine, for each of the at least one picture category, a composite score for each picture, the composite score including at least a picture quality score, a picture content richness score, and/or a picture freshness score, the picture quality score representing a clarity dimension and/or a content integrity dimension of the picture, the picture content richness score representing a number of categories of content included in the picture, the picture freshness score representing a time dimension of the picture;
a selection module configured to determine a selected picture based on the composite score of each picture.
10. The system for automatically determining a selected picture of claim 9, wherein the clustering module is configured to:
encode the pictures by using a preset algorithm;
calculate picture distances based on the encoded pictures, wherein the picture distances represent the degree of similarity between the pictures; and
determine at least one picture category by using a clustering algorithm based on the picture distances.
11. The system for automatically determining a selected picture of claim 10, wherein the picture distance comprises a cosine distance.
12. The system for automatically determining a selected picture of claim 9, wherein the picture evaluation module is configured to:
for each of the at least one picture category, determine a picture quality score for each picture by using a picture quality evaluation model;
determine the picture content richness score of each picture by using a picture content richness evaluation model;
determine a picture freshness score of each picture based on the upload time of the picture; and
determine the composite score of the picture based on the picture quality score, picture content richness score, and picture freshness score of the picture.
13. The system for automatically determining a selected picture of claim 12, wherein the picture evaluation module is further configured to:
determine the composite score of the picture from the picture quality score, the picture content richness score, and the picture freshness score of the picture according to preset weight values.
14. The system for automatically determining a selected picture of claim 12, further comprising a training module configured to:
obtain a sample picture;
label the quality score of the sample picture based on the definition and/or content integrity of the sample picture; and
input the labeled sample picture into a first initial model for training, and determine the picture quality evaluation model.
15. The system for automatically determining a selected picture of claim 12, further comprising a training module configured to:
obtain a sample picture;
label the content richness score of the sample picture based on the number of categories of content contained in the sample picture; and
input the labeled sample picture into a second initial model for training, and determine the picture content richness evaluation model.
16. The system for automatically determining a selected picture of claim 9, wherein the clustering module is further configured to:
preprocess the pictures, and clean out pictures with low definition or incomplete content.
17. A system for automatically determining a selected picture, wherein the system comprises a processor and a memory; the memory is configured to store instructions that, when executed by the processor, cause the system to implement the method of any one of claims 1-8.
18. A computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the method of any one of claims 1 to 8.