CN111768412A - Intelligent image matching method and device - Google Patents

Intelligent image matching method and device

Info

Publication number
CN111768412A
CN111768412A
Authority
CN
China
Prior art keywords
image
article
information
category
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910968256.4A
Other languages
Chinese (zh)
Inventor
王曦晨
佘志东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910968256.4A priority Critical patent/CN111768412A/en
Publication of CN111768412A publication Critical patent/CN111768412A/en
Pending legal-status Critical Current

Classifications

    All within G (Physics), G06 (Computing; Calculating or Counting), under G06T (Image data processing or generation, in general) and G06F (Electric digital data processing):
    • G06T7/11 Region-based segmentation
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T3/04
    • G06T3/40 Scaling the whole image or part thereof
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30168 Image quality inspection

Abstract

The invention discloses an intelligent image matching method and device, and relates to the field of computer technology. One embodiment of the method comprises: acquiring a target image containing an article and scaling the target image, wherein the article is the article described by a piece of information; cropping the scaled target image multiple times to obtain multiple cropped images; and judging whether each cropped image contains the article and is clear, and if so, taking the clear cropped image containing the article as the matching image of the information. This embodiment reduces the time consumed in providing a matching image for information.

Description

Intelligent image matching method and device
Technical Field
The invention relates to the field of computer technology, and in particular to an intelligent image matching method and device.
Background
Currently, the existing process of providing a matching image for a piece of information is as follows: obtain a target image containing the article, crop the target image directly to obtain a huge number of cropped images, and select the matching image of the information from those cropped images.
In implementing the invention, the inventors found that the prior art has at least the following problem:
the target image must be cropped many times, the number of cropped images is huge, and obtaining the matching image of the information therefore takes a long time. In other words, the prior art consumes too much time in providing a matching image for information.
Disclosure of Invention
In view of this, embodiments of the present invention provide an intelligent image matching method and device, which can shorten the time consumed in providing a matching image for information.
To achieve the above object, according to one aspect of the embodiments of the present invention, an intelligent image matching method is provided.
The intelligent image matching method of the embodiment of the invention comprises the following steps:
acquiring a target image containing an article, and scaling the target image, wherein the article is the article described by the information;
cropping the scaled target image multiple times to obtain multiple cropped images;
and judging whether each cropped image contains the article and is clear, and if so, taking the clear cropped image containing the article as the matching image of the information.
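The three steps above can be sketched end to end; the four callables are hypothetical stand-ins for the scaling, cropping, article-detection, and sharpness components described later in this document.

```python
def match_images(target_image, scale, crop, contains_article, is_clear):
    """Sketch of the claimed method: scale the target image, crop it
    multiple times, then keep only crops that both contain the article
    and are clear. Each callable is a placeholder for a component
    described elsewhere in the document."""
    scaled = scale(target_image)
    crops = crop(scaled)
    return [c for c in crops if contains_article(c) and is_clear(c)]
```

Any crop failing either judgment is simply dropped; the survivors are the candidate matching images.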
In one embodiment, scaling the target image comprises:
scaling the target image in equal proportion to obtain a target scaled image, such that the first edge of the target scaled image equals the first edge of a preset window;
judging whether the second edge of the target scaled image is larger than the second edge of the preset window, and if so, taking the target scaled image as the scaled target image;
wherein the first edge is adjacent to the second edge, and the preset window is used for cropping the scaled target image multiple times.
In one embodiment, before scaling the target image, the method further comprises: setting a bounding box for the article on the target image such that the article lies within the bounding box, the bounding box having a regular shape;
and judging whether each cropped image contains the article and is clear, and if so, taking the clear cropped image containing the article as the matching image of the information, comprises:
for each cropped image, judging whether the ratio of the area of the bounding box in the cropped image to the area of the bounding box in the scaled target image is larger than a preset value; if so, judging whether the cropped image is clear according to a sharpness model; and if clear, deleting the bounding box and taking the cropped image with the bounding box deleted as the matching image of the information.
In one embodiment, acquiring a target image containing an article comprises:
acquiring multiple original images of the information according to the title of the information and the category to which the article described by the information belongs;
and screening out target images containing the article from the multiple original images according to the category to which the article belongs.
In one embodiment, acquiring multiple original images of the information according to the title of the information and the category to which the article described by the information belongs comprises:
obtaining the title of the information and the name of the category to which the article described by the information belongs, and segmenting the title of the information into a word set;
judging the similarity between the word set and the name of the category to which the article belongs; if any word in the word set is not similar to the name of the category, taking the name of the category as the search word; otherwise, obtaining keywords from the word set, and taking the keywords together with the name of the category as the search words;
and searching on a search engine based on the search words to obtain multiple article numbers for the information, and acquiring multiple original images of the information according to those article numbers.
In one embodiment, screening out a target image containing the article from the multiple original images according to the category to which the article belongs comprises:
for each category, obtaining an article recognition model of the category as follows:
taking the name of the category as the search word and searching on a search engine to obtain multiple article numbers of the category; acquiring multiple article detail images of the category according to those article numbers; labeling the article detail images, and training an image detection model with the labeled article detail images to obtain the article recognition model of the category;
selecting, from the article recognition models of all categories, the article recognition model of the category to which the article belongs;
and screening out target images containing the article from the multiple original images based on that article recognition model.
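A minimal sketch of the screening step, assuming the per-category recognition models have already been trained and are kept in a lookup table; the predicate-style models here are illustrative stand-ins for real trained detectors.

```python
def select_target_images(original_images, category, models):
    """Pick the article recognition model for the article's category and
    keep only the original images in which it detects the article.
    `models` maps category name -> predicate; a real system would hold
    trained image detection models instead."""
    detect = models[category]
    return [img for img in original_images if detect(img)]
```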
To achieve the above object, according to another aspect of the embodiments of the present invention, an intelligent image matching device is provided.
The intelligent image matching device of the embodiment of the invention comprises:
a processing unit for acquiring a target image containing an article and scaling the target image, wherein the article is the article described by the information;
a cropping unit for cropping the scaled target image multiple times to obtain multiple cropped images;
and a judging unit for judging whether each cropped image contains the article and is clear, and if so, taking the clear cropped image containing the article as the matching image of the information.
In one embodiment, the processing unit is configured to:
scale the target image in equal proportion to obtain a target scaled image, such that the first edge of the target scaled image equals the first edge of a preset window;
and judge whether the second edge of the target scaled image is larger than the second edge of the preset window, and if so, take the target scaled image as the scaled target image;
wherein the first edge is adjacent to the second edge, and the preset window is used for cropping the scaled target image multiple times.
In one embodiment, the processing unit is configured to:
before scaling the target image, set a bounding box for the article on the target image such that the article lies within the bounding box, the bounding box having a regular shape;
and the judging unit is configured to:
for each cropped image, judge whether the ratio of the area of the bounding box in the cropped image to the area of the bounding box in the scaled target image is larger than a preset value; if so, judge whether the cropped image is clear according to a sharpness model; and if clear, delete the bounding box and take the cropped image with the bounding box deleted as the matching image of the information.
In one embodiment, the processing unit is configured to:
acquire multiple original images of the information according to the title of the information and the category to which the article described by the information belongs;
and screen out target images containing the article from the multiple original images according to the category to which the article belongs.
In one embodiment, the processing unit is configured to:
obtain the title of the information and the name of the category to which the article described by the information belongs, and segment the title of the information into a word set;
judge the similarity between the word set and the name of the category to which the article belongs; if any word in the word set is not similar to the name of the category, take the name of the category as the search word; otherwise, obtain keywords from the word set, and take the keywords together with the name of the category as the search words;
and search on a search engine based on the search words to obtain multiple article numbers for the information, and acquire multiple original images of the information according to those article numbers.
In one embodiment, the processing unit is configured to:
for each category, obtain an article recognition model of the category as follows:
take the name of the category as the search word and search on a search engine to obtain multiple article numbers of the category; acquire multiple article detail images of the category according to those article numbers; label the article detail images, and train an image detection model with the labeled article detail images to obtain the article recognition model of the category;
select, from the article recognition models of all categories, the article recognition model of the category to which the article belongs;
and screen out target images containing the article from the multiple original images based on that article recognition model.
To achieve the above object, according to still another aspect of the embodiments of the present invention, an electronic device is provided.
The electronic device of the embodiment of the invention comprises: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the intelligent image matching method provided by the embodiments of the invention.
To achieve the above object, according to still another aspect of the embodiments of the present invention, a computer-readable medium is provided.
The computer-readable medium of the embodiment of the invention stores a computer program which, when executed by a processor, implements the intelligent image matching method provided by the embodiments of the invention.
One embodiment of the above invention has the following advantage or beneficial effect: a target image containing an article is acquired and scaled, wherein the article is the article described by the information; the scaled target image is cropped multiple times to obtain multiple cropped images; and each cropped image is judged for whether it contains the article and is clear, and if so, the clear cropped image containing the article is taken as the matching image of the information. Because cropping is performed after scaling, fewer crops are needed, fewer cropped images are produced, and clear cropped images containing the article are obtained more quickly, thereby shortening the time consumed in providing a matching image for information.
Further effects of the above optional implementations will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main flow of an intelligent image matching method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the main flow of an intelligent image matching method according to another embodiment of the invention;
FIG. 3 is a schematic diagram of the main units of an intelligent image matching device according to an embodiment of the present invention;
FIG. 4 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 5 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
A large number of articles require information (e.g., a piece of documentation) introducing their advantages, disadvantages, and so on for users' reference. However, editing a large amount of information costs considerable time and money, and providing matching images for the information is especially time-consuming.
Currently, there are three prior-art schemes for providing matching images for information, as follows:
The first: if the specification and style of the article (for example, the style of a skirt may be cake skirt, V-neck skirt, or floral skirt) are filled in completely, the target image containing the article can be found directly using the specification, style, and so on. However, when they are filled in incompletely, the target image containing the article cannot be found using the specification and style, or very few target images are found, and images not containing the article may even be taken as target images, so the accuracy of intelligent image matching is low.
The second: obtain a target image containing the article, crop the target image directly to obtain a huge number of cropped images, and select the matching image of the information from those cropped images. The target image must be cropped many times, the number of cropped images is huge, and obtaining the matching image of the information takes a long time. This scheme therefore consumes too much time in providing a matching image for information.
The third: manually determine search terms from the information, search, store the retrieved images, manually inspect each image, mark the article in each image, and manually crop according to the marks to obtain the matching image of the information. Because the matching image is provided manually, the workload is large, the process is slow, and errors occur easily.
To solve the problems in the prior art, an embodiment of the present invention provides an intelligent image matching method. As shown in Fig. 1, the method comprises:
Step S101: acquiring a target image containing an article, and scaling the target image, wherein the article is the article described by the information.
In this step, the article may be a mobile phone, a computer, a medical instrument, a bottle of mineral water, or the like. The information may be a popular-science article, an article introducing the mobile phone, or a short piece of publicity copy.
Scaling includes reduction and enlargement: if the target image is larger than the preset window, the target image is reduced; if the target image is smaller than the preset window, the target image is enlarged. The preset window is used for cropping the scaled target image multiple times. The process of scaling the target image is described in detail below and is not repeated here.
Step S102: cropping the scaled target image multiple times to obtain multiple cropped images.
In this step, in a specific implementation, the scaled target image is cropped multiple times with the preset window to obtain multiple cropped images. It should be understood that the preset window may be a rectangle, square, diamond, circle, or the like.
It should be noted that the longest edge of the preset window is parallel or perpendicular to the longest edge of the scaled target image. It should be understood that when the longest edge of the preset window is parallel to the longest edge of the scaled target image, the cropped images obtained from the multiple crops are more likely to be usable as matching images of the information, and the time consumed in obtaining the matching image is shorter.
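The multi-crop step can be illustrated with a rectangular preset window slid across the scaled image; the stride is not specified in the document, so it is left as a free parameter here.

```python
def crop_positions(img_w, img_h, win_w, win_h, stride):
    """Enumerate the (left, top, right, bottom) boxes of a rectangular
    preset window slid across the scaled target image. Assumes the
    window fits inside the image, as the scaling step guarantees."""
    xs = list(range(0, img_w - win_w + 1, stride)) or [0]
    ys = list(range(0, img_h - win_h + 1, stride)) or [0]
    return [(x, y, x + win_w, y + win_h) for y in ys for x in xs]
```

For example, a 300x100 image and a 100x100 window with stride 100 yield three non-overlapping crops along the long edge.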
Step S103: judging whether each cropped image contains the article and is clear, and if so, taking the clear cropped image containing the article as the matching image of the information.
In an embodiment of the present invention, scaling the target image comprises:
scaling the target image in equal proportion to obtain a target scaled image, such that the first edge of the target scaled image equals the first edge of the preset window;
judging whether the second edge of the target scaled image is larger than the second edge of the preset window, and if so, taking the target scaled image as the scaled target image;
wherein the first edge is adjacent to the second edge, and the preset window is used for cropping the scaled target image multiple times.
In this embodiment, if the first edge is a long edge, the second edge is a short edge, and vice versa. Preferably, when the first edge is the long edge and the second edge is the short edge, the time consumed in obtaining the matching image is shorter. In addition, if the second edge of the target scaled image is not larger than (i.e., does not exceed) the second edge of the preset window, the target scaled image is not taken as the scaled target image, and no matching image of the information can be obtained from it.
In this embodiment, scaling the target image in equal proportion prevents the article contained in the target image from being deformed, ensuring that the matching image is not distorted. Because the first edge of the target scaled image equals the first edge of the preset window, whether the target scaled image is suitable for cropping can be determined directly by comparing its second edge with the second edge of the preset window, further shortening the time consumed in providing a matching image for information.
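The scaling embodiment can be sketched as follows; treating the width as the first edge is an assumption for illustration (the document prefers the long edge as the first edge).

```python
def scale_to_window(img_w, img_h, win_w, win_h):
    """Scale in equal proportion so the first edge (width here) equals
    the preset window's first edge, then keep the result only if the
    second edge is larger than the window's second edge, as the
    embodiment requires."""
    factor = win_w / img_w            # single factor: no deformation
    scaled_h = round(img_h * factor)  # second edge after scaling
    if scaled_h > win_h:              # suitable for multiple crops
        return (win_w, scaled_h)
    return None                       # cannot yield a matching image
```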
In an embodiment of the present invention, before scaling the target image, the method further comprises: setting a bounding box for the article on the target image such that the article lies within the bounding box, the bounding box having a regular shape.
Step S103 may then comprise:
for each cropped image, judging whether the ratio of the area of the bounding box in the cropped image to the area of the bounding box in the scaled target image is larger than a preset value; if so, judging whether the cropped image is clear according to a sharpness model; and if clear, deleting the bounding box and taking the cropped image with the bounding box deleted as the matching image of the information.
In this embodiment, the bounding box may be a rectangle, diamond, circle, or the like. The preset value is set as required, for example, 2/3. The area of the bounding box in the cropped image and the area of the bounding box in the scaled target image are calculated in the same way, and an existing tool such as MATLAB may be used. Further, because the bounding box is set on the target image before scaling, the bounding box is scaled in equal proportion along with the target image, its scaling ratio being the same as that of the target image.
It should be noted that, because the target image has been scaled, each cropped image is checked for sharpness in order to guarantee the accuracy of the matching image in cases where the scaling ratio is too large or too small, or where the sharpness of the original target image is itself problematic.
In a specific implementation, as shown in Fig. 2, the sharpness model is obtained as follows: a large number of images are obtained from an existing Content Delivery Network (CDN) by crawling, the sharpness of each image is judged manually, and images with high sharpness are taken as positive samples while images with low sharpness are taken as negative samples. If the negative samples are insufficient, images with high sharpness can be enlarged by a large factor, processed with a common image-blurring algorithm (such as Gaussian blur), or processed with a mosaic algorithm to increase the number of negative samples. With blurred (label 0) and sharp (label 1) as the classification targets, a classification network (e.g., an 18-layer residual network (ResNet-18), a 50-layer residual network (ResNet-50), or VGG16) is trained on the positive and negative samples to obtain the sharpness model. Training may be performed on a graphics processing unit (GPU) or a CPU.
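The negative-sample synthesis can be sketched in pure Python; a simple box blur stands in for the Gaussian-blur or mosaic processing, and the grayscale list-of-lists images are stand-ins for real crawled images.

```python
def box_blur(img, k=1):
    """Blur a 2D grayscale image (list of lists) with a (2k+1)x(2k+1)
    box filter; a stand-in for the Gaussian blur / mosaic step used to
    synthesize extra negative samples."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - k), min(h, y + k + 1))
                    for nx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def make_training_pairs(sharp_images):
    """Label each sharp original 1 (positive) and a blurred copy 0
    (negative), mirroring the dataset construction before training a
    classifier such as ResNet-18."""
    pairs = []
    for img in sharp_images:
        pairs.append((img, 1))
        pairs.append((box_blur(img), 0))
    return pairs
```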
In this embodiment, whether a cropped image is suitable as a matching image is determined by two judgments: whether the ratio of the area of the bounding box in the cropped image to the area of the bounding box in the scaled target image is larger than the preset value, and whether the cropped image is clear. This further shortens the time consumed in providing a matching image for information. Because articles are generally irregular in shape, placing the article inside a regularly shaped bounding box and calculating the area of the bounding box requires less computation and less time than calculating the area of the article itself, which again shortens the time consumed in providing a matching image for information.
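The first of the two judgments can be sketched with axis-aligned rectangles (one of the regular shapes the document allows); boxes are hypothetical (left, top, right, bottom) tuples.

```python
def bbox_area_ratio(crop_box, bbox):
    """Ratio of the bounding-box area that falls inside the crop to the
    full bounding-box area in the scaled target image."""
    ix1 = max(crop_box[0], bbox[0])
    iy1 = max(crop_box[1], bbox[1])
    ix2 = min(crop_box[2], bbox[2])
    iy2 = min(crop_box[3], bbox[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    full = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
    return inter / full

def passes_area_check(crop_box, bbox, preset_value=2 / 3):
    """Only crops retaining more than the preset share of the bounding
    box go on to the sharpness check."""
    return bbox_area_ratio(crop_box, bbox) > preset_value
```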
In an embodiment of the present invention, acquiring a target image containing an article comprises:
acquiring multiple original images of the information according to the title of the information and the category to which the article described by the information belongs;
and screening out target images containing the article from the multiple original images according to the category to which the article belongs.
In this embodiment, it should be understood that there may be one or more screened-out target images, but each target image is processed in the same way: the target image is scaled; the scaled target image is cropped multiple times to obtain multiple cropped images; and each cropped image is judged for whether it contains the article and is clear, and if so, the clear cropped image containing the article is taken as the matching image of the information.
In this embodiment, multiple original images of the information are acquired automatically from the title of the information and the category of the article described by the information. Because the title of the information records considerable data, using the title to acquire the original images improves both the accuracy and the degree of automation of intelligent image matching. Screening out target images containing the article from the original images according to the category of the article obtains the target images automatically, reducing staff workload and errors.
In the embodiment of the present invention, acquiring multiple original images of the information according to the title of the information and the category to which the article described by the information belongs comprises:
obtaining the title of the information and the name of the category to which the article described by the information belongs, and segmenting the title of the information into a word set.
In this step, the process of obtaining the word set is illustrated with a specific example: for a piece of information, the title of the information is analyzed with a natural-language processing tool (e.g., a Chinese word segmenter such as jieba, or the Stanford toolkit) to obtain the style of the described article, the suitable user group, and so on. For example, for the title "fruity cleaning toothpaste opens the tooth-protection mode for teething babies", children are the suitable group, fruity is the style, and toothpaste is the described article; the word set thus includes "children", "fruity", and "toothpaste".
In addition, the title of the information and the name of the category to which the described article belongs may be obtained from the information itself.
Next, the similarity between the word set and the name of the category is judged: if any word in the word set is not similar to the name of the category, the name of the category is taken as the search word; otherwise, keywords are obtained from the word set, and the keywords together with the name of the category are taken as the search words.
In a specific implementation of this step, the similarity between each word in the word set and the name of the category is calculated with the existing edit-distance (Levenshtein) algorithm. If the similarity between any word in the word set and the name of the category is 0, the name of the category is taken as the search word; if the similarity between each word in the word set and the name of the category is not 0, each word in the word set is replaced by its corresponding preset word, stop words are removed to obtain the keywords, and the keywords together with the name of the category are taken as the search words.
The process of obtaining the search words when no similarity is 0 is illustrated as follows: suppose the word set comprises "Gemini", "fruity", "highly popular", and "toothpaste"; the preset word corresponding to "Gemini" is "children"; "fruity" and "toothpaste" have no preset words; and "highly popular" is a stop word. Then "children", "fruity", "toothpaste", and the name of the category of the article are used as the search words.
It should be understood that if the similarity between any word in the word set and the name of the category is 0, the style, specification, and so on of the described article cannot be obtained from the title of the information, so the name of the category alone is taken as the search word. In addition, searching with the keywords together with the name of the category makes the retrieved original images more likely to contain the article.
And searching on a search engine based on the search word to obtain a plurality of item numbers of the information, and acquiring a plurality of original images of the information according to the plurality of item numbers of the information.
In this embodiment, a word set is obtained from the title of the information, search terms are obtained according to the similarity between the word set and the name of the category to which the article belongs, and a search is performed based on those terms, so that a plurality of original images of the information are obtained automatically. Because the title of the information records rich data (such as specification or style), many original images can be obtained from it, which improves the accuracy and degree of automation of intelligent image matching. A plurality of item numbers of the information are obtained by searching the search terms on a search engine, and the plurality of original images are then obtained via the item numbers; images obtained through item numbers are more likely to contain the article, further improving the accuracy of intelligent image matching.
In an embodiment of the present invention, screening out the target image containing the article from the plurality of original images according to the category to which the article belongs includes:
for each category, obtaining an article identification model of the category according to the following method:
taking the name of the category as a search word, and searching on a search engine to obtain a plurality of article numbers of the category; acquiring a plurality of item detail images of the category according to the plurality of item numbers of the category; labeling the item detail images with labels, and training an image detection model by using the labeled item detail images to obtain the item identification models of the categories;
acquiring the article identification models of the categories to which the articles belong from the article identification models of all the categories according to the categories to which the articles belong;
and screening out a target image containing the article from the plurality of original images based on the article identification model of the category to which the article belongs.
In this embodiment, to ensure the accuracy of the article identification model, special styles (e.g., ripped jeans, casual shorts) may be added to the article styles so that the number of article numbers increases. The number of article numbers should be no fewer than 1000, although the minimum can be set as required. An article number may be a SKU, an article specification, or an article model. In addition, an article detail image may or may not be the same as an original image.
One SKU corresponds to one SKU id, and a plurality of SKU ids of the category are obtained through searching. For each SKU id of the category, the corresponding article detail images are acquired from a database (which stores each SKU id together with its matching article detail images) via a crawler or a calling interface. It should be understood that one SKU id may have one or more article detail images. Taking all the article detail images of the category as a set M, g article detail images are randomly extracted from M (g can be set as required; specifically, any positive integer between 5% and 10% of the total number of the category's article detail images) and displayed to an image annotator. The annotator labels the g article detail images according to the name and id of the category, marking in each image the articles corresponding to the category (to illustrate the correspondence between articles and category: if the category is sports shoes, the articles are lightweight sports shoes, high-elasticity sports shoes, lace-up sports shoes, and the like). An image detection model (e.g., Faster R-CNN, a baseline algorithm in object detection; SSD, a single-stage method improved on the basis of YOLO; YOLO, a one-shot object detection method; etc.) is then trained with the labeled article detail images to obtain the article identification model of the category.
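A small sketch of the annotation sampling just described — the 5%–10% fraction comes from the text, while the seedable RNG argument and the minimum of one image are illustrative assumptions:

```python
import random

def sample_for_labeling(detail_images, fraction=0.05, rng=None):
    """Randomly draw g detail images for human labeling.

    g is fraction * |set M|, clamped to at least one image so that a
    small category still yields something to annotate.
    """
    rng = rng or random.Random()
    g = max(1, int(len(detail_images) * fraction))
    return rng.sample(detail_images, g)
```

Passing a seeded `random.Random` makes the draw reproducible, which is convenient when re-running the labeling pipeline.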
The article identification model of the category is stored on a local disk or a remote cloud system, the storage path is recorded, and the category id and the storage path are stored as a key-value pair in a database or other storage medium. The article identification model of a category can then be found via the category id, so that target images containing the article can be screened out. It will be appreciated that randomly extracting g article detail images from the set M reduces the workload of the image annotators.
It should be noted that if the name of the category includes only a single-level category name, the image annotator cannot label the article detail images from that name alone and must label them using both the category name and the category id. In addition, in specific implementations, the name of the category to which an article belongs may include the names of the primary, secondary, and tertiary categories.
For each original image, the article identification model of the category to which the article belongs is used to judge whether the original image contains the article; if so, the original image is screened in as a target image. This is because, for various reasons, an original image may not contain the article, and an image that does not contain the article cannot serve as a matching picture for the information.
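A hypothetical sketch of this screening step: the detector is reduced to a predicate (any real model such as Faster R-CNN, SSD, or YOLO would play that role), and the key-value registry stands in for the category-id-to-storage-path lookup described above.

```python
from typing import Callable, Dict, Iterable, List

# A detector takes an image reference and reports whether the article appears.
Detector = Callable[[str], bool]

def screen_target_images(
    originals: Iterable[str],
    category_id: str,
    registry: Dict[str, Detector],
) -> List[str]:
    """Look up the category's identification model by id and keep only
    the original images in which it finds the article."""
    detector = registry[category_id]
    return [img for img in originals if detector(img)]
```

In practice the registry values would be deserialized model objects rather than lambdas, but the control flow is the same.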
In this embodiment, the image detection model is trained with the labeled article detail images to obtain an article identification model for each category. The article identification model of the category to which the article belongs is then obtained from the identification models of all categories according to that category, so that target images containing the article can be screened out of the original images automatically. This reduces staff workload and errors, further shortens the time consumed in providing matching pictures for the information, and automates the provision of matching pictures.
In order to solve the problems in the prior art, another embodiment of the present invention provides an intelligent image matching method, including:
In the first step, the title of the scientific paper is input into the jieba word segmenter to obtain the paper's word set. The item described in the scientific paper is a time relay, so the name of the category to which the item belongs is "time relay". The similarity between each word in the paper's word set and "time relay" is calculated; if the similarity between any word and "time relay" is 0, "time relay" is taken as the search term; otherwise, keywords are obtained from the paper's word set, and the keywords and "time relay" are taken as the search terms.
In the second step, a search is performed on a search engine based on the search terms to obtain a plurality of specification models of time relays, and the URLs corresponding to original images of time relays of those specification models are obtained according to the specification models.
Third, for each url, performing the following operations on the url:
1. Download the original image corresponding to the URL and use the item identification model for time relays to judge whether the original image contains a time relay. If not, end; if so, set a bounding box of regular shape around the time relay on the original image so that the time relay lies inside it, and execute 2.
2. Scale the original image proportionally to obtain a scaled image whose first edge equals the first edge of the preset window. Judge whether the second edge of the scaled image is larger than the second edge of the preset window; if not, end; if so, execute 3. The first edge is adjacent to the second edge.
3. Cut the scaled image multiple times with the preset window to obtain a plurality of cut images.
4. For each cut image, judge whether the ratio of the area of the bounding box inside the cut image to the area of the bounding box in the scaled image is larger than 0.9; if so, mark the cut image as successful.
5. Judge, according to the definition model, whether each cut image marked as successful is clear; if so, take the cut images marked as successful and clear as matching pictures of the scientific paper.
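The five numbered steps above can be sketched numerically. Images and boxes are plain tuples here; the choice of "first edge = width", the crop stride, and the exact threshold handling are illustrative assumptions not fixed by the text.

```python
def scale_to_window_width(img_w: float, img_h: float, win_w: float):
    """Equal-proportion scaling so the scaled width equals the window width."""
    s = win_w / img_w
    return s, win_w, img_h * s

def crops_along_height(scaled_h: float, win_h: float, stride: float):
    """Window top positions when the scaled height exceeds the window height."""
    ys, y = [], 0.0
    while y + win_h <= scaled_h:
        ys.append(y)
        y += stride
    return ys

def bbox_ratio_in_crop(bbox, crop_y: float, win_w: float, win_h: float) -> float:
    """Area of the bounding box inside the crop over its full area."""
    x0, y0, x1, y1 = bbox
    full = (x1 - x0) * (y1 - y0)
    ix0, ix1 = max(x0, 0.0), min(x1, win_w)
    iy0, iy1 = max(y0, crop_y), min(y1, crop_y + win_h)
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    return inter / full if full else 0.0
```

A crop is then marked successful when `bbox_ratio_in_crop(...)` exceeds 0.9, per step 4.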
In order to solve the problems in the prior art, another embodiment of the present invention provides an intelligent image matching method. The embodiment is applied to the e-commerce field, and as shown in fig. 2, the method comprises the following steps:
In the first step, for each category, deduplication is performed on the words representing specifications and styles in the category to obtain the category's word set. The word sets of all categories are added to the jieba dictionary to improve the segmenter's word-segmentation capability.
In the second step, word pairs are configured for words representing age and for words representing gender. For example, word pair 1: "lovely baby" → "child"; word pair 2: a colloquial word for girl → "girl"; word pair 3: "trendy guy" → "man". The former word of each pair is saved as the key. A stop-word dictionary is also configured; the stop words include "fashion", "high popularity", "good looks", and the like.
In the third step, the title of the article is input into the jieba segmenter to obtain the article's word set. The name of the category to which the item described by the article belongs is obtained from the e-commerce server according to the category id. The similarity between the article's word set and the name of the category is calculated; if any word in the word set is not similar to the category name, the category name is taken as the search term; otherwise, the words in the word set are replaced via the word pairs (so that wording is unified), stop words are removed with the stop-word dictionary to obtain keywords, and the keywords and the category name are used as the search terms.
In this step, the id of the category to which the item belongs can be obtained from the e-commerce server through its interface, or may be carried in the article itself. The e-commerce server classifies items, typically at three levels, e.g., women's wear, jackets, shirts. The e-commerce platform sets a tertiary-category name and a tertiary-category id for each item.
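The keyword normalization in the third step can be sketched as a dictionary replacement followed by stop-word removal. The example pairs and stop words mirror those configured above; their English renderings are assumptions about the original terms.

```python
# Word pairs unify synonyms: the key is the raw word, the value the preset word.
WORD_PAIRS = {"lovely baby": "child", "trendy guy": "man"}
# Stop words carry no searchable meaning and are dropped.
STOP_WORDS = {"fashion", "high popularity", "good looks"}

def build_keywords(word_set):
    """Replace each word by its preset word, then remove stop words."""
    unified = [WORD_PAIRS.get(w, w) for w in word_set]
    return [w for w in unified if w not in STOP_WORDS]
```

The resulting keywords, together with the category name, form the search terms.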
In the fourth step, a search is performed on the search engine based on the search terms, so that 200 SKU ids are obtained, through the search-engine interface of the e-commerce server, from items in the server that have matching pictures. According to the 200 SKU ids, a plurality of URLs corresponding to original images of the item are acquired through the e-commerce server's interface or a crawler; the URLs are deduplicated and stored at a preset position.
In the fifth step, if there is no URL at the preset position, or the number of URLs is smaller than K (K is preset and may, for example, be set to 300), a matching picture cannot be provided for the article, and the process ends. Otherwise, judge whether the category to which the item belongs has an identification model; if so, execute the sixth step; if not, the item cannot currently be processed, no matching picture can be provided for the article, and the process ends.
Sixthly, for each url of the preset position, performing the following operations on the url:
a. Download the original image corresponding to the URL and judge whether it contains the item using the identification model of the category to which the item belongs. If not, end; if so, set a rectangular bounding box around the item on the original image so that the item lies inside it, record the coordinates X of the rectangular bounding box relative to the original image using an existing recording technique, and then execute b.
b. Scale the original image proportionally to obtain a scaled image with width t_w and height t_h. Take the larger of t_w and t_h as t_max and the other as t_low. Make t_max equal to the length of the long side of the preset window, and judge whether t_low is larger than the length of the short side of the preset window; if not, end; if so, execute c. The long side is adjacent to the short side, and the long side is longer than the short side.
And c, cutting the zoomed image for multiple times by using a preset window according to the cutting method in the prior art to obtain multiple cut images.
d. For each cut image, judge whether the ratio of the area of the rectangular bounding box inside the cut image to the area of the rectangular bounding box in the scaled image (computed from the coordinates X and the scale factor of the scaled image) is larger than 0.8. If so, mark the cut image as successful (success); if the ratio is positive but not larger than 0.8, mark it as failed (negative); if the ratio is 0, meaning the rectangular bounding box does not appear in the cut image at all, mark it as ignored (ignore).
e. Judge, according to the definition model, whether each cut image marked as success is clear; if so, take the cut images marked as success and clear as matching pictures of the article.
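Step d's three-way marking can be sketched as follows. The 0.8 threshold comes from the text; the exact boundary handling (ties, a degenerate box) is an assumption.

```python
def mark_crop(area_in_crop: float, area_in_scaled: float) -> str:
    """Classify a crop by how much of the bounding box it retains."""
    if area_in_scaled <= 0:
        return "ignore"          # box absent from the scaled image entirely
    ratio = area_in_crop / area_in_scaled
    if ratio <= 0:
        return "ignore"          # box does not appear in this crop at all
    return "success" if ratio > 0.8 else "negative"
```

Only crops marked "success" move on to the definition-model clarity check in step e.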
In a specific implementation of this step, the cut images marked as success and clear are stored in a format such as JPEG or PNG and used as matching pictures of the article; the matching pictures are uploaded to an image storage server, e.g., a CDN. The URLs of the article's matching pictures are then obtained and stored.
It should be noted that an article id (denoted article_id) may be set for each article to distinguish different articles. It should be understood that, for each article, the process of providing matching pictures is the process described in the third through sixth steps. In addition, if the category id of the item described in one article is the same as that of the item described in another article, and the two articles have the same title, their matching pictures are the same, so the matching pictures need only be obtained once.
In order to solve the problems in the prior art, an embodiment of the present invention provides an intelligent image matching apparatus, as shown in fig. 3, the apparatus includes:
a processing unit 301, configured to obtain a target image containing an article, and zoom the target image; wherein the item is an item described by the information.
A cutting unit 302, configured to cut the zoomed target image multiple times to obtain multiple cut images.
A determining unit 303, configured to determine whether each cut image includes the article and is clear, and if so, use the clear cut image including the article as the matching image of the information.
In this embodiment of the present invention, the processing unit 301 is configured to:
scaling the target image in equal proportion to obtain a target scaled image, so that the first edge of the target scaled image is the same as the first edge of the preset window;
judging whether a second edge of the target zoomed image is larger than a second edge of the preset window or not, if so, taking the target zoomed image as the zoomed target image;
wherein the first edge is adjacent to the second edge; the preset window is used for cutting the zoomed target image for multiple times.
In this embodiment of the present invention, the processing unit 301 is configured to:
before zooming the target image, setting a bounding box for the article on the target image, so that the article is in the bounding box, wherein the bounding box is in a regular shape;
the determining unit 303 is configured to:
and for each cutting image, judging whether the ratio of the area of the surrounding frame in the cutting image to the area of the surrounding frame in the zoomed target image is larger than a preset value, if so, judging whether the cutting image is clear according to a definition model, if so, deleting the surrounding frame, and taking the cutting image with the surrounding frame deleted as a matching picture of the information.
In this embodiment of the present invention, the processing unit 301 is configured to:
acquiring a plurality of original images of the information according to the title of the information and the category to which the article described by the information belongs;
and screening out the target images containing the articles from the plurality of original images according to the categories of the articles.
In this embodiment of the present invention, the processing unit 301 is configured to:
obtaining the title of the information and the name of the category to which the article described by the information belongs, and performing word segmentation on the title of the information to obtain a word set;
judging the similarity between the word set and the name of the category to which the article belongs; if any word in the word set is not similar to the name of the category to which the article belongs, taking the name of the category to which the article belongs as a search word; otherwise, obtaining a keyword according to the word set, and taking the keyword and the name of the category to which the article belongs as the search word;
and searching on a search engine based on the search word to obtain a plurality of item numbers of the information, and acquiring a plurality of original images of the information according to the plurality of item numbers of the information.
In this embodiment of the present invention, the processing unit 301 is configured to:
for each category, obtaining an article identification model of the category according to the following method:
taking the name of the category as a search word, and searching on a search engine to obtain a plurality of article numbers of the category; acquiring a plurality of item detail images of the category according to the plurality of item numbers of the category; labeling the item detail images with labels, and training an image detection model by using the labeled item detail images to obtain the item identification models of the categories;
acquiring the article identification models of the categories to which the articles belong from the article identification models of all the categories according to the categories to which the articles belong;
and screening out a target image containing the article from the plurality of original images based on the article identification model of the category to which the article belongs.
It should be understood that the functions performed by the components of the intelligent image-matching apparatus provided in the embodiment of the present invention have been described in detail in the intelligent image-matching method of the above embodiments, and are not repeated here.
Fig. 4 illustrates an exemplary system architecture 400 to which the intelligent image-matching method or the intelligent image-matching apparatus of embodiments of the present invention may be applied.
As shown in fig. 4, the system architecture 400 may include terminal devices 401, 402, 403, a network 404, and a server 405. The network 404 serves as a medium for providing communication links between the terminal devices 401, 402, 403 and the server 405. Network 404 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal devices 401, 402, 403 to interact with a server 405 over a network 404 to receive or send messages or the like. The terminal devices 401, 402, 403 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 401, 402, 403 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 405 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 401, 402, 403. The backend management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (for example, target push information, product information — just an example) to the terminal device.
It should be noted that the intelligent image-matching method provided by the embodiment of the present invention is generally executed by the server 405; accordingly, the intelligent image-matching apparatus is generally disposed in the server 405.
It should be understood that the number of terminal devices, networks, and servers in fig. 4 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 5, shown is a block diagram of a computer system 500 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU)501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The driver 510 is also connected to the I/O interface 505 as necessary. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as necessary, so that a computer program read out therefrom is mounted into the storage section 508 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 501.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a unit, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including a processing unit, a cutting unit, and a judging unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the cutting unit may also be described as "a unit that cuts the zoomed target image multiple times to obtain a plurality of cut images".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: acquiring a target image containing an article, and zooming the target image; wherein the item is an item described by the information; cutting the zoomed target image for multiple times to obtain multiple cut images; and judging whether each cutting image contains the article and is clear, and if so, taking the clear cutting image containing the article as the matching image of the information.
According to the technical scheme of the embodiment of the invention, a target image containing an article is obtained, and the target image is zoomed; wherein the item is the item described by the information; cutting the zoomed target image for multiple times to obtain multiple cut images; and judging whether each cutting image contains articles and is clear, and if so, taking the cutting image containing the articles and being clear as a matching image of information. The cutting is carried out after zooming, the number of times of cutting is reduced, the number of cutting images is reduced, and clear cutting images containing articles can be obtained more quickly, so that the time consumed for providing information and matching images is shortened.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An intelligent image-matching method, characterized by comprising the following steps:
acquiring a target image containing an article, and zooming the target image; wherein the item is an item described by the information;
cutting the zoomed target image for multiple times to obtain multiple cut images;
and judging whether each cutting image contains the article and is clear, and if so, taking the clear cutting image containing the article as the matching image of the information.
2. The method of claim 1, wherein scaling the target image comprises:
scaling the target image proportionally to obtain a target scaled image, such that a first edge of the target scaled image equals a first edge of a preset window;
judging whether a second edge of the target scaled image is larger than a second edge of the preset window, and if so, taking the target scaled image as the scaled target image;
wherein the first edge is adjacent to the second edge, and the preset window is used for cropping the scaled target image multiple times.
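Claim 2's scaling condition reduces to simple arithmetic on the two edge lengths. A minimal sketch, assuming the first edge is the height (the claim leaves the edge assignment open) and using the strictly-greater comparison as recited:

```python
def scale_for_window(img_w, img_h, win_w, win_h):
    """Proportionally scale so the first edge (height, by assumption) equals
    the window's first edge; accept only if the second edge (width) then
    exceeds the window's second edge, as claim 2 recites."""
    ratio = win_h / img_h
    scaled_w = round(img_w * ratio)
    if scaled_w > win_w:   # second edge large enough to crop from
        return scaled_w, win_h
    return None            # condition fails; the claim handles this case elsewhere

print(scale_for_window(4000, 3000, 800, 800))  # (1067, 800)
print(scale_for_window(600, 3000, 800, 800))   # None: too narrow after scaling
```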
3. The method of claim 1, further comprising, before scaling the target image: setting a bounding box for the article on the target image such that the article is within the bounding box, the bounding box being of a regular shape;
wherein judging whether each cropped image contains the article and is clear, and if so, taking the clear cropped image containing the article as the matching image of the information, comprises:
for each cropped image, judging whether the ratio of the area of the bounding box within the cropped image to the area of the bounding box within the scaled target image is larger than a preset value; if so, judging whether the cropped image is clear according to a clarity model; and if clear, deleting the bounding box and taking the cropped image with the bounding box deleted as the matching image of the information.
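The area ratio in claim 3 is the fraction of the article's bounding box that survives the crop. A minimal sketch of that computation (the coordinates are illustrative; the preset threshold and the clarity model are not specified here):

```python
def bbox_visible_ratio(bbox, crop):
    """Fraction of the article's bounding-box area that falls inside a crop.
    Boxes are (left, top, right, bottom)."""
    l = max(bbox[0], crop[0]); t = max(bbox[1], crop[1])
    r = min(bbox[2], crop[2]); b = min(bbox[3], crop[3])
    inter = max(r - l, 0) * max(b - t, 0)
    full = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
    return inter / full

bbox = (100, 100, 300, 300)                       # article's enclosing box
assert bbox_visible_ratio(bbox, (0, 0, 400, 400)) == 1.0    # fully inside
assert bbox_visible_ratio(bbox, (200, 100, 400, 300)) == 0.5  # half cut off
```

Only crops whose ratio exceeds the preset value would then be passed to the clarity model.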
4. The method of claim 1, wherein acquiring the target image containing the article comprises:
acquiring a plurality of original images of the information according to the title of the information and the category to which the article described by the information belongs;
and screening out the target image containing the article from the plurality of original images according to the category to which the article belongs.
5. The method of claim 4, wherein acquiring a plurality of original images of the information according to the title of the information and the category to which the article described by the information belongs comprises:
obtaining the title of the information and the name of the category to which the article described by the information belongs, and performing word segmentation on the title of the information to obtain a word set;
judging the similarity between the word set and the name of the category to which the article belongs; if no word in the word set is similar to the name of the category to which the article belongs, taking the name of the category as the search word; otherwise, obtaining a keyword from the word set and taking the keyword together with the name of the category as the search word;
and searching on a search engine based on the search word to obtain a plurality of article numbers of the information, and acquiring the plurality of original images of the information according to the plurality of article numbers of the information.
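Claim 5's search-word rule can be sketched as a branch on whether any title word resembles the category name. The similarity measure and the way the keyword is derived from the word set are left open by the claim; the substring test and first-match keyword below are stand-ins:

```python
def build_search_term(title_words, category_name, similar):
    """If no title word is similar to the category name, search with the
    category name alone; otherwise combine a keyword drawn from the title
    with the category name. `similar` is a stand-in predicate."""
    hits = [w for w in title_words if similar(w, category_name)]
    if not hits:
        return category_name
    return f"{hits[0]} {category_name}"

similar = lambda w, c: w in c or c in w   # toy similarity: substring match
assert build_search_term(["red", "dress"], "summer dress", similar) == "dress summer dress"
assert build_search_term(["phone", "case"], "laptop", similar) == "laptop"
```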
6. The method of claim 5, wherein screening out the target image containing the article from the plurality of original images according to the category to which the article belongs comprises:
for each category, obtaining an article identification model of the category as follows:
taking the name of the category as a search word, and searching on a search engine to obtain a plurality of article numbers of the category; acquiring a plurality of article detail images of the category according to the plurality of article numbers of the category; labeling the article detail images, and training an image detection model with the labeled article detail images to obtain the article identification model of the category;
acquiring, from the article identification models of all the categories, the article identification model of the category to which the article belongs;
and screening out the target image containing the article from the plurality of original images based on the article identification model of the category to which the article belongs.
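The final steps of claim 6 amount to a per-category model lookup followed by a filter over the candidate images. In this sketch the detectors, image names, and the `contains_item` predicate are all illustrative stand-ins for a trained detection model:

```python
def filter_target_images(images, category, models, contains_item):
    """Look up the detector trained for the article's category, then keep
    only the original images in which that detector finds the article."""
    detector = models[category]
    return [img for img in images if contains_item(detector, img)]

models = {"shoes": "shoe-detector"}   # stand-in for trained per-category models
contains = lambda det, img: det == "shoe-detector" and "shoe" in img
imgs = ["shoe_front.jpg", "box_only.jpg", "shoe_side.jpg"]
print(filter_target_images(imgs, "shoes", models, contains))
# prints: ['shoe_front.jpg', 'shoe_side.jpg']
```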
7. An intelligent matching apparatus, characterized by comprising:
a processing unit, configured to acquire a target image containing an article and scale the target image; wherein the article is the article described by the information;
a cropping unit, configured to crop the scaled target image multiple times to obtain multiple cropped images;
and a judging unit, configured to judge whether each cropped image contains the article and is clear, and if so, take the clear cropped image containing the article as the matching image of the information.
8. The apparatus of claim 7, wherein the processing unit is configured to:
scale the target image proportionally to obtain a target scaled image, such that a first edge of the target scaled image equals a first edge of a preset window;
judge whether a second edge of the target scaled image is larger than a second edge of the preset window, and if so, take the target scaled image as the scaled target image;
wherein the first edge is adjacent to the second edge, and the preset window is used for cropping the scaled target image multiple times.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
10. A computer-readable medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method of any one of claims 1-6.
CN201910968256.4A 2019-10-12 2019-10-12 Intelligent map matching method and device Pending CN111768412A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910968256.4A CN111768412A (en) 2019-10-12 2019-10-12 Intelligent map matching method and device

Publications (1)

Publication Number Publication Date
CN111768412A 2020-10-13

Family

ID=72718410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910968256.4A Pending CN111768412A (en) 2019-10-12 2019-10-12 Intelligent map matching method and device

Country Status (1)

Country Link
CN (1) CN111768412A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102326383A (en) * 2009-02-20 2012-01-18 索尼爱立信移动通讯有限公司 Image capturing method, image capturing apparatus, and computer program
CN107958455A (en) * 2017-12-06 2018-04-24 百度在线网络技术(北京)有限公司 Image definition appraisal procedure, device, computer equipment and storage medium
CN108733779A (en) * 2018-05-04 2018-11-02 百度在线网络技术(北京)有限公司 The method and apparatus of text figure
CN109543058A (en) * 2018-11-23 2019-03-29 连尚(新昌)网络科技有限公司 For the method for detection image, electronic equipment and computer-readable medium


Similar Documents

Publication Publication Date Title
KR101511050B1 (en) Method, apparatus, system and computer program for offering and displaying a product information
CN108596940B (en) Video segmentation method and device
CN108416003A Picture classification method and device, terminal, and storage medium
TW201411381A (en) Labeling Product Identifiers and Navigating Products
CN110020312B (en) Method and device for extracting webpage text
CN107193932B (en) Information pushing method and device
CN110633594A (en) Target detection method and device
CN112580637B (en) Text information identification method, text information extraction method, text information identification device, text information extraction device and text information extraction system
CN111767420A (en) Method and device for generating clothing matching data
CN111931859B (en) Multi-label image recognition method and device
CN114154013A (en) Video recommendation method, device, equipment and storage medium
CN106681598A (en) Information input method and device
CN111782841A (en) Image searching method, device, equipment and computer readable medium
CN115293332A (en) Method, device and equipment for training graph neural network and storage medium
CN106899755B (en) Information sharing method, information sharing device and terminal
CN111160410A (en) Object detection method and device
CN110910178A (en) Method and device for generating advertisement
US10963690B2 (en) Method for identifying main picture in web page
CN110807097A (en) Method and device for analyzing data
CN111782850A (en) Object searching method and device based on hand drawing
CN111368693A (en) Identification method and device for identity card information
CN111768412A (en) Intelligent map matching method and device
CN110827101A (en) Shop recommendation method and device
CN114282524A (en) Method, system and device for processing structured data of questionnaire information
CN114186147A (en) Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination