CN109711399B - Shop identification method and device based on image and electronic equipment

Info

Publication number: CN109711399B (application publication: CN109711399A)
Application number: CN201811309281.3A
Authority: CN (China)
Language: Chinese (zh)
Legal status: Active (granted; the legal status is an assumption and is not a legal conclusion)
Inventors: 王博, 李文哲, 孔剑
Original and current assignee: Beijing Sankuai Online Technology Co Ltd
Application filed by Beijing Sankuai Online Technology Co Ltd
Classification (Landscapes): Image Analysis (AREA)
Prior art keywords: image, sub, shop, determining, target plaque
Abstract

The application discloses an image-based shop recognition method. It belongs to the technical field of computers and addresses the problem of low accuracy in image-based shop recognition. The method disclosed in the embodiments of the application comprises the following steps: acquiring global image features of a target plaque image and acquiring regional saliency features of the target plaque image; determining the image features of the target plaque image according to the global image features and the regional saliency features; and determining the shop matched with the target plaque image according to the image features of the target plaque image and the shop image features of preset shops. The image features adopted by the method take into account both the global characteristics of the image and its local fine-grained regions, so the features are comprehensive and discriminative, which improves the accuracy of image-based shop recognition.

Description

Shop identification method and device based on image and electronic equipment
Technical Field
The application relates to the technical field of computers, in particular to a shop identification method and device based on images and electronic equipment.
Background
In the prior art, when a shop is searched for through an image of its plaque, or the shop in an image is identified, a feature index can be established for the whole signboard area; that is, the character information, texture features and plaque content distribution of the image are all considered, which gives good robustness. For example, Chinese patent CN104598885B uses the Scale-Invariant Feature Transform (SIFT) to detect and describe local features in an image, and combines HS feature components for image recognition. This method is fairly robust for images with small viewing-angle changes and small changes in illumination. However, the image features extracted by such prior-art shop identification methods are less robust for images with large variations in acquisition angle and illumination, and the feature expression of the shop plaque image is neither clear nor complete, so the accuracy of shop identification is not high.
In summary, the store identification method based on images in the prior art at least has the problem of low identification accuracy.
Disclosure of Invention
The application provides a shop recognition method based on images, which is beneficial to improving the accuracy of shop recognition based on images.
In order to solve the above problem, in a first aspect, an embodiment of the present application provides an image-based store identification method, including:
acquiring global image characteristics of a target plaque image and acquiring regional saliency characteristics of the target plaque image;
determining the image features of the target plaque image according to the global image features and the regional saliency features;
and determining the shops matched with the target plaque image according to the image characteristics of the target plaque image and the shop image characteristics of preset shops.
In a second aspect, an embodiment of the present application provides an image-based store recognition apparatus, including:
the characteristic acquisition module is used for acquiring the global image characteristic of the target plaque image and acquiring the regional saliency characteristic of the target plaque image;
the characteristic fusion module is used for determining the image characteristics of the target plaque image according to the global image characteristics and the regional saliency characteristics;
and the matching identification module is used for determining the shops matched with the target plaque image according to the image characteristics of the target plaque image and the shop image characteristics of preset shops.
In a third aspect, an embodiment of the present application further discloses an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the image-based store identification method according to the embodiment of the present application is implemented.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps of image-based store identification disclosed in embodiments of the present application.
According to the image-based shop identification method disclosed in the embodiments of the application, the global image features of the target plaque image are obtained together with its regional saliency features; the image features of the target plaque image are determined from the global image features and the regional saliency features; and the shop matched with the target plaque image is determined according to the image features of the target plaque image and the shop image features of preset shops, which helps solve the problem of low accuracy in image-based shop identification. The global image features reflect the overall appearance of the plaque but, for fine-grained identification, miss some discriminative yet subtle characteristics; the regional saliency features are finer-grained local features and thus make up for this shortcoming of the global image features.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a flowchart of an image-based shop identification method according to a first embodiment of the present application;
Fig. 2 is a flowchart of an image-based shop identification method according to a second embodiment of the present application;
Fig. 3 is a first schematic structural diagram of an image-based shop recognition apparatus according to a third embodiment of the present application;
Fig. 4 is a second schematic structural diagram of an image-based shop recognition apparatus according to the third embodiment of the present application;
Fig. 5 is a third schematic structural diagram of an image-based shop recognition apparatus according to the third embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example one
As shown in fig. 1, the method for identifying a store based on an image disclosed in this embodiment includes: step 110 to step 130.
Step 110: acquiring global image features of the target plaque image and acquiring regional saliency features of the target plaque image.
In an application scenario where shop identification is performed based on an image of a shop plaque, a neural network model first needs to be trained to extract the global image features of the target plaque image. In the embodiments of the application, the global image features are high-dimensional features used to characterize the appearance of the target plaque in the target plaque image, such as the content distribution, global color distribution information and edge information of the target plaque image. In some embodiments of the application, the global image features of the target plaque image may be extracted by a pre-trained convolutional neural network model.
The regional saliency features described in the embodiments of the application are display features of the regions into which the target plaque image is divided. A saliency feature is a color distribution feature extracted from a salient region of the image, a salient region being the part of the image that most attracts user interest and best represents the image content. Which region a person finds interesting is subjective: for the same image, different users may select different regions of interest depending on their knowledge background and specific business requirements. However, because of the commonality of the human visual system and attention mechanism, some regions of an image are always noticeably attractive, and these regions usually contain rich information. Therefore, following the general rules of human cognition, the salient regions of an image can be approximately identified from some of its low-level features, and the saliency features are the color distribution features extracted from those regions. In a specific implementation, the regional saliency features of the image can be determined from the spatial distances and color distances between the image regions obtained by dividing the image.
Step 120: determining the image features of the target plaque image according to the global image features and the regional saliency features.
In some embodiments of the present application, a fusion feature in which the high-dimensional global image feature and the finer-grained sub-region saliency feature are fused may be obtained by stitching the global image feature and the sub-region saliency feature, and the obtained fusion feature may be determined as the image feature of the target plaque image.
Step 130: determining the shop matched with the target plaque image according to the image features of the target plaque image and the shop image features of preset shops.
Then, the image features are matched against the shop image features of the preset shops. For example, the similarity between the image features of the target plaque image and the shop image features of each preset shop is calculated, and the shop whose similarity satisfies a preset condition (for example, exceeds a preset similarity threshold) is determined as the shop matched with the target plaque image.
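In a specific implementation, the matching step may be sketched as follows. This is a minimal illustration only: the embodiment does not prescribe a particular similarity measure, so cosine similarity, the threshold value, and the function names are assumptions.

```python
import numpy as np

def cosine_similarity(query, store_matrix):
    """Cosine similarity between a query feature vector and each row
    of a matrix of store feature vectors."""
    q = query / np.linalg.norm(query)
    m = store_matrix / np.linalg.norm(store_matrix, axis=1, keepdims=True)
    return m @ q

def match_store(query_feat, store_feats, store_names, threshold=0.8):
    """Return the best-matching store name, or None when no similarity
    reaches the preset threshold."""
    sims = cosine_similarity(np.asarray(query_feat, float),
                             np.asarray(store_feats, float))
    best = int(np.argmax(sims))
    return store_names[best] if sims[best] >= threshold else None
```

In practice the threshold and the similarity measure would be tuned on the shop database.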
According to the image-based shop identification method disclosed in the embodiments of the application, the global image features of the target plaque image are obtained together with its regional saliency features; the image features of the target plaque image are determined from the global image features and the regional saliency features; and the shop matched with the target plaque image is determined according to the image features of the target plaque image and the shop image features of preset shops, which helps solve the problem of low accuracy in image-based shop identification. The global image features reflect the overall appearance of the plaque but, for fine-grained identification, miss some discriminative yet subtle characteristics; the regional saliency features are finer-grained local features and thus make up for this shortcoming of the global image features.
Example two
As shown in fig. 2, the method for identifying a store based on an image according to the present embodiment includes: step 210 to step 270.
Step 210, training the neural network model.
In some embodiments of the application, the Darknet53 network may be used as the base network, and a Coupled Clusters Loss may be used for comparative training to reduce the intra-class distance of the features and increase the inter-class feature difference, so as to extract discriminative plaque image features, which are finally expanded into a vector of a specified length (e.g. 1000). The training method of the neural network model follows the prior art and is not described in detail in this embodiment.
In some embodiments of the application, the method further comprises: training a shop plaque detection model based on the YOLO object detection algorithm, and obtaining the plaque image in an input image through the shop plaque detection model.
The core idea of YOLO (You Only Look Once) is to use the whole image as the input of the network and directly regress, at the output layer, the positions of the target boxes and the categories to which they belong. YOLO divides the image into an S x S grid; if the center of an object falls within a grid cell, that cell is responsible for predicting the object. Each grid cell predicts B target boxes, and for each target box the network predicts, in addition to its position, a confidence value. The confidence value represents both the confidence that the predicted box contains a target and how accurately the box is predicted, i.e. the product of the object probability and the IOU with the ground truth.
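The confidence value can be illustrated as follows. This is a minimal sketch assuming boxes given as (x1, y1, x2, y2) corner coordinates; the product form Pr(object) x IOU is the standard YOLO definition, and the function names are hypothetical.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def confidence(p_object, pred_box, truth_box):
    """YOLO-style confidence: Pr(object) * IOU(pred, truth)."""
    return p_object * iou(pred_box, truth_box)
```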
In specific implementation, a shop plaque detection model is constructed first. For example, the body structure employs the YOLO v3 algorithm.
The store plaque detection model is trained as follows.
First, the global image features of the shop plaque image are extracted. In a specific implementation, 9 groups of anchor values are obtained by clustering the annotation files of the shop signboard images, and the high-dimensional features of the images are extracted using the Darknet-53 base network as the feature basis for locating and classifying the signboards.
Then, signboard targets are predicted. Image areas are divided by a grid method, predictions are made for the different image areas, features of different dimensions are fused for prediction, and both foreground and background are predicted. The structure uses 3 scales, and at each scale 3 signboard targets are predicted according to the distribution of the anchor cluster centers. Scale 1 is the base network with a convolutional layer added to output the prediction box coordinates; scale 2 upsamples from the penultimate convolutional layer of scale 1, adds the result to the last 16 x 16 feature map, and outputs the prediction box information after several more convolutions, scale 2 being twice as large as scale 1; scale 3 is similar to scale 2 and uses a 32 x 32 feature map.
Finally, the target signboard coordinates are regressed. The target area of the foreground is predicted through anchor regression in a logistic-regression manner, with the coordinate regression formulas:

$$b_x = \sigma(t_x) + c_x,\qquad b_y = \sigma(t_y) + c_y,\qquad b_w = p_w e^{t_w},\qquad b_h = p_h e^{t_h}$$

where $(b_x, b_y, b_w, b_h)$ are the center coordinates and width and height of the final predicted box, $(t_x, t_y, t_w, t_h)$ are the predicted-box offsets learned by the network, $c_x$ and $c_y$ are the horizontal and vertical coordinate offsets of the grid cell, $p_w$ and $p_h$ are the side lengths of the anchor, and $\sigma$ is the logistic (sigmoid) function.
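The coordinate regression described above can be sketched as follows; this is a minimal illustration of decoding one box, and the function and parameter names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    """Logistic function used for the center-coordinate offsets."""
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(t, cell, anchor):
    """Decode network outputs (tx, ty, tw, th) into a predicted box
    (bx, by, bw, bh): sigmoid offsets added to the grid-cell position,
    exponential scaling applied to the anchor side lengths."""
    tx, ty, tw, th = t
    cx, cy = cell      # grid cell horizontal/vertical offsets
    pw, ph = anchor    # anchor side lengths
    bx = sigmoid(tx) + cx
    by = sigmoid(ty) + cy
    bw = pw * np.exp(tw)
    bh = ph * np.exp(th)
    return bx, by, bw, bh
```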
Step 220, determining the shop image characteristics of the preset shop, and constructing a shop database.
Wherein the preset shop refers to a shop contained in a shop database; the shop database at least comprises shop image characteristics of preset shops and shop information corresponding to each group of the shop image characteristics, wherein the shop information can comprise information such as shop names, geographic positions, plaque images and the like. The shop image features at least comprise global image features and regional saliency features of the shop.
In a specific implementation, the global image features of each preset shop in the shop database can be extracted through the neural network model trained in the preceding steps.
For each preset store, acquiring regional saliency characteristics of the store, wherein the regional saliency characteristics comprise: dividing the plaque images of the stores according to at least one dividing method to determine a plurality of image areas; for each image area, segmenting the image area according to color distribution, and determining at least one sub-image area contained in the image area; for each image area, determining the regional saliency characteristics of the image area through the space distance and the color distance between sub-image areas contained in the image area; and combining the regional saliency characteristics of the image regions into the regional saliency characteristics of the shop.
Further, the determining the regional saliency characteristics of the image regions by the spatial distance and the color distance between the sub-image regions included in the image regions includes: determining the significance of each sub-image area according to the space distance and the color distance between the sub-image areas contained in the image area; and determining the regional saliency characteristics of the image regions to which the sub-image regions belong according to the saliency of the preset number of the sub-image regions with the highest saliency.
In some embodiments of the application, the step of determining the saliency of each sub-image region from the spatial distances and color distances between the sub-image regions included in the image region comprises: determining the saliency of each sub-image region $r_k$ included in the image region by the formula

$$S(r_k) = \sum_{r_i \neq r_k} \exp\!\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right)\, \omega(r_i)\, D_r(r_k, r_i)$$

where $r_i$ denotes a sub-image region other than $r_k$, $D_s(r_k, r_i)$ is the spatial distance between $r_k$ and $r_i$, $\sigma_s$ is the spatial distance weight, $\omega(r_i)$ is the weight of sub-image region $r_i$, and $D_r(r_k, r_i)$ is the color distance between $r_k$ and $r_i$, calculated as:

$$D_r(r_k, r_i) = \sum_{u=1}^{n_k} \sum_{v=1}^{n_i} f(c_{k,u})\, f(c_{i,v})\, D(c_{k,u}, c_{i,v})$$

where $f(c_{i,v})$ is the probability of the $v$-th color $c_{i,v}$ appearing among all $n_i$ colors of the $i$-th sub-image region $r_i$, $f(c_{k,u})$ is the probability of the $u$-th color $c_{k,u}$ appearing among all $n_k$ colors of the $k$-th sub-image region $r_k$, and $D(c_{k,u}, c_{i,v})$ is the distance between colors $c_{k,u}$ and $c_{i,v}$ in L*a*b* color space.
The technical scheme for determining the regional saliency characteristics of each preset store is the same as the technical scheme for determining the regional saliency characteristics of the target plaque in the store identification process, and for specific description, reference is made to the description for determining the regional saliency characteristics of the target plaque in the subsequent steps, and details are not repeated here.
In some embodiments of the present application, for each store, the store image features of the store may be obtained by stitching the global image features and the regional saliency features of the plaque image of the store. The method for acquiring the image characteristics of the shop is the same as the method for acquiring the image characteristics of the target plaque image, and the specific reference is made to the following description.
Step 230, obtaining global image characteristics of the target plaque image, and obtaining regional saliency characteristics of the target plaque image.
In some embodiments of the application, if the image to be recognized contains, in addition to the target plaque image, other background information (for example, multiple plaques or storefront information), the target plaque image in the input image is first obtained through the plaque detection model trained in the preceding step, and then the global image features and the regional saliency features of the target plaque image are further obtained.
In specific implementation, the obtaining of the global image feature of the target plaque image includes: and acquiring the global image characteristics of the target plaque image through a pre-trained neural network model. When the global image features of the target plaque image are obtained through the neural network model trained in the previous steps, the target plaque image is input into the neural network model, and output data of the designated feature expression layer of the neural network model are used as the global image features of the target plaque image. Because the neural network model fully considers the shape difference of the training sample images in the training process, such as the content distribution, the global color distribution information, the edge information and other information of the target plaque image, the output data of each feature expression layer of the neural network model is high-dimensional features capable of reflecting the whole information of the image.
In some embodiments of the present application, the obtaining of the regional saliency feature of the target plaque image comprises: dividing the target plaque image according to at least one division method to determine a plurality of image areas; for each image area, segmenting the image area according to color distribution, and determining at least one sub-image area contained in the image area; and for each image area, determining the regional saliency characteristics of the image area according to the spatial distance and the color distance between the sub-image areas contained in the image area.
For example, the target plaque image is divided into thirds horizontally to obtain 3 image areas; into thirds vertically to obtain another 3 image areas; into sixths horizontally to obtain 6 image areas; and into sixths vertically to obtain 6 image areas. With this method, 3 + 3 + 6 + 6 = 18 image areas are determined.
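The 18-region division described above can be sketched as follows, assuming the image is a NumPy array; `divide_regions` is a hypothetical helper name.

```python
import numpy as np

def divide_regions(image):
    """Divide an image (H x W [x C] array) into the 3 + 3 + 6 + 6 = 18
    image regions described above: horizontal thirds, vertical thirds,
    horizontal sixths and vertical sixths."""
    regions = []
    for parts in (3, 6):
        regions += list(np.array_split(image, parts, axis=0))  # horizontal strips
        regions += list(np.array_split(image, parts, axis=1))  # vertical strips
    return regions
```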
Then, for each of the 18 image areas, the area is further segmented according to color distribution, and the sub-image areas contained in each image area are determined. For example, for each pixel in an image area, the edge dissimilarity between that pixel and its 8-neighborhood (or 4-neighborhood) pixels, i.e. their RGB channel color distance, is calculated; the edges are then sorted by dissimilarity in ascending order to obtain an edge sequence $[e_1, e_2, \ldots, e_n]$; the edges in this sequence are merged in order; and the sub-image areas contained in the image area are determined from the merged edges. The merging process may be as follows: for the currently selected edge $e_j$, whose connected vertices $(v_i, v_j)$ do not belong to the same image region, a merge judgment is made starting from the next edge. If the weight $w_{i,j}$ of edge $e_j$ is less than or equal to a set threshold $w_{TH} = \min(C_i, C_j)$, i.e. the minimum internal dissimilarity of the image regions to which the two vertices belong, merging continues with the next edge; otherwise the threshold is updated to $w_{TH} = w_{i,j} + 1/(C_i + C_j)$ and the region label of vertex $v_j$ is updated to that of $v_i$, i.e. the two edges are merged. If unmerged edges remain, the merging process continues with the next edge until all edges have been processed. Finally, each image area is divided into at least one sub-image area according to the merged edges.
In other embodiments of the present application, other methods in the prior art may also be adopted to divide each image region into at least one sub-image region, which is not illustrated in the present application.
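As one such alternative, the greedy graph-based merging can be sketched with a union-find structure. This is a simplified illustration in the spirit of the Felzenszwalb and Huttenlocher segmentation algorithm; the threshold form Int(C) + k/|C| and the constant k are assumptions, not details fixed by the embodiment.

```python
class DisjointSet:
    """Union-find used to merge pixels into sub-image regions."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return ra

def segment(n_vertices, edges, k=1.0):
    """Process edges (weight, u, v) in ascending weight order; merge two
    components when the edge weight does not exceed the internal
    dissimilarity threshold Int(C) + k/|C| of either component."""
    ds = DisjointSet(n_vertices)
    internal = [0.0] * n_vertices  # max internal edge weight per component root
    for w, u, v in sorted(edges):
        ru, rv = ds.find(u), ds.find(v)
        if ru == rv:
            continue
        thr = min(internal[ru] + k / ds.size[ru],
                  internal[rv] + k / ds.size[rv])
        if w <= thr:
            root = ds.union(u, v)
            internal[root] = w  # edges arrive sorted, so w is the new max
    return ds
```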
Then, for each of the 18 image areas, determining the regional saliency features of the image areas according to the spatial distance and the color distance between the sub-image areas included in the image area.
In some embodiments of the present application, the determining the regional saliency feature of the image region according to the spatial distance and the color distance between the sub-image regions included in the image region includes: determining the significance of each sub-image area according to the space distance and the color distance between the sub-image areas contained in the image area; and determining the regional saliency characteristics of the image regions to which the sub-image regions belong according to the saliency of the preset number of the sub-image regions with the highest saliency.
In some embodiments of the application, the step of determining the saliency of each sub-image region from the spatial distance and the color distance between the sub-image regions included in the image region comprises: determining the saliency of each sub-image region $r_k$ included in the image region by the formula

$$S(r_k) = \sum_{r_i \neq r_k} \exp\!\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right)\, \omega(r_i)\, D_r(r_k, r_i)$$

where $r_i$ denotes a sub-image region other than $r_k$, $D_s(r_k, r_i)$ is the spatial distance between $r_k$ and $r_i$, $\sigma_s$ is the spatial distance weight, $\omega(r_i)$ is the weight of sub-image region $r_i$, and $D_r(r_k, r_i)$ is the color distance between $r_k$ and $r_i$, calculated as:

$$D_r(r_k, r_i) = \sum_{u=1}^{n_k} \sum_{v=1}^{n_i} f(c_{k,u})\, f(c_{i,v})\, D(c_{k,u}, c_{i,v})$$

where $f(c_{i,v})$ is the probability of the $v$-th color $c_{i,v}$ appearing among all $n_i$ colors of the $i$-th sub-image region $r_i$, $f(c_{k,u})$ is the probability of the $u$-th color $c_{k,u}$ appearing among all $n_k$ colors of the $k$-th sub-image region $r_k$, and $D(c_{k,u}, c_{i,v})$ is the distance between colors $c_{k,u}$ and $c_{i,v}$ in L*a*b* color space. The specific calculation of the color distance metric follows the prior art and is not described again in this embodiment.
Assume a certain image area contains 5 sub-image areas, denoted $r_1, r_2, r_3, r_4, r_5$. Taking $r_k = r_1$ as an example, the regional saliency feature of the image area is determined as follows. First, the spatial distance $D_s(r_1, r_i)$ and the color distance $D_r(r_1, r_i)$ between $r_1$ and each other sub-image area are calculated, for $2 \le i \le 5$; then the saliency $S(r_1)$ of $r_1$ is determined by the saliency formula. In the same way, the saliencies of $r_1, r_2, r_3, r_4, r_5$ can each be determined. Finally, the saliencies of all the sub-image areas contained in the image area ($r_1, \ldots, r_5$ in this example) are concatenated to obtain the regional saliency feature of the image area; alternatively, the saliencies of a preset number of the most salient sub-image areas (e.g. $r_1, r_2, r_3$ in this example) are concatenated to obtain the regional saliency feature of the image area.
In a specific implementation, in the formula for the saliency of a sub-image area, $\sigma_s$ controls the strength of the spatial distance weighting: the larger $\sigma_s$ is, the smaller the influence of the spatial distance weight, so the contrast of more distant areas has a greater influence on the saliency of the current area. The term $\exp(-D_s(r_k, r_i)/\sigma_s^2)$ takes values in the interval $(0, 1]$. In the embodiments of the application, $\sigma_s^2$ preferably takes the value 0.4. $\omega(r_i)$ is the color distance weight of sub-image area $r_i$ with respect to $r_k$, usually proportional to the distance between the sub-image areas. Taking the image area with 5 sub-image areas $r_1, \ldots, r_5$ as an example again: if the Euclidean distances from $r_1$ to the other sub-image areas are $[20, 30, 10, 40]$, the color distance weights $\omega(r_i)$ can be $[0.2, 0.3, 0.1, 0.4]$.
According to the method, the regional saliency characteristics of each image region can be determined.
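The saliency computation above can be sketched as follows. This is a minimal illustration assuming precomputed spatial-distance and color-distance matrices and the preferred value $\sigma_s^2 = 0.4$; the distance computations themselves are omitted and the function name is hypothetical.

```python
import numpy as np

def region_saliency(k, spatial_dist, color_dist, weights, sigma_s2=0.4):
    """Saliency S(r_k) of sub-image region k:
    sum over the other regions i of
    exp(-D_s(r_k, r_i) / sigma_s^2) * w(r_i) * D_r(r_k, r_i)."""
    s = 0.0
    for i in range(len(weights)):
        if i == k:
            continue
        s += (np.exp(-spatial_dist[k][i] / sigma_s2)
              * weights[i] * color_dist[k][i])
    return s
```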
Step 240: determining the image features of the target plaque image according to the global image features and the regional saliency features.
Further, determining the image features of the target plaque image according to the global image features and the regional saliency features comprises: taking the result of splicing the global image features and the regional saliency features as the image features of the target plaque image. For example, the global image features and the regional saliency features may be spliced to obtain a fusion feature combining the high-dimensional global image feature with the finer-grained regional saliency feature, and the fusion feature is determined as the image feature of the target plaque image.
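The splicing step can be illustrated minimally as concatenation of the two feature vectors; the function name is hypothetical.

```python
import numpy as np

def fuse_features(global_feat, regional_feat):
    """Fuse the high-dimensional global image feature with the
    finer-grained regional saliency feature by concatenation."""
    return np.concatenate([np.asarray(global_feat, dtype=float),
                           np.asarray(regional_feat, dtype=float)])
```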
Step 250: correcting the image features of the target plaque image through a pre-trained K-means++ clustering model so as to retain the representative features among the image features.
In some embodiments of the application, before determining the shop matched with the target plaque image according to the image features of the target plaque image and the shop image features of preset shops, the method further comprises: correcting the image features of the target plaque image through a pre-trained K-means++ clustering model to retain the representative ones among the image features.
In a specific implementation, the K-means++ clustering model is obtained by training on the shop image features in the preset shop database.
Take a shop database containing M shops as an example, where the shop image feature of each shop is composed of a global image feature of size 1000 and a regional saliency feature of size 1000, so that the shop image feature of each shop is feature data of length 2000, and data of size M × 2000 for the M shops is used as input; the output is M cluster centers, namely feature data of size M × 1000. K-means++ clustering training is performed on this data to obtain the K-means++ clustering model, which outputs the corrected shop image features of size M × 1000. The specific training process is as follows:
the method comprises the steps of, firstly, randomly selecting a point from the input data point set as the first clustering center;
secondly, calculating the distance between each point in the data set and its nearest already-selected clustering center;
thirdly, selecting the point with the largest such distance as a new clustering center;
fourthly, repeating the second and third steps until M clustering centers are selected;
fifthly, for the M generated clustering centers, assigning each data point to the nearest centroid to form M clusters;
sixthly, recalculating the centroid of each cluster;
and seventhly, repeating the fifth and sixth steps until the clusters no longer change or the maximum number of iterations is reached, completing the training of the K-means++ clustering model.
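The seven steps above can be sketched as follows. This is a simplified, pure-Python illustration using the farthest-point center selection described in steps two and three; note that standard K-means++ implementations instead sample new centers with probability proportional to squared distance, and all names here are illustrative.

```python
import math
import random

# Sketch of the described training procedure: farthest-point seeding
# (steps one to four) followed by Lloyd iterations (steps five to seven).
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_cluster_centers(points, m, max_iter=100):
    # Steps one to four: random first center, then repeatedly add the point
    # farthest from its nearest already-chosen center.
    centers = [random.choice(points)]
    while len(centers) < m:
        farthest = max(points, key=lambda p: min(euclidean(p, c) for c in centers))
        centers.append(farthest)
    # Steps five to seven: reassign points and recompute centroids until the
    # clusters stop changing or max_iter is reached.
    for _ in range(max_iter):
        clusters = [[] for _ in range(m)]
        for p in points:
            nearest = min(range(m), key=lambda i: euclidean(p, centers[i]))
            clusters[nearest].append(p)
        new_centers = [
            tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
            if cluster else centers[i]
            for i, cluster in enumerate(clusters)
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return centers

random.seed(0)
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centers = train_cluster_centers(data, 2)
```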
After the training of the K-means + + clustering model is completed, correcting the image features of the target plaque image, such as features with the size of 2000, through the K-means + + clustering model, and obtaining image features with the size of 1000.
In some embodiments of the present application, before determining a store matching the target plaque image according to the image feature of the target plaque image and a store image feature of a preset store, the method further includes: and determining the preset shop according to the appointed geographical position information.
In specific implementation, in order to narrow the comparison range and improve the comparison efficiency, the shops in the shop database may be initially screened based on the geographic position information, and the shop with the geographic position information matched with the geographic position information of the target plaque image is used as a preset shop.
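The geographic pre-screening can be sketched as follows; the haversine great-circle distance and the 1 km radius are assumptions for illustration, since the embodiment does not specify a distance measure or threshold.

```python
import math

# Illustrative sketch of the geographic pre-screening step: keep only shops
# whose recorded location lies within a radius of the target plaque image's
# location. Haversine formula and 1 km radius are assumed, not from the patent.
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def prescreen(shops, query_lat, query_lon, radius_km=1.0):
    return [s for s in shops
            if haversine_km(s["lat"], s["lon"], query_lat, query_lon) <= radius_km]

shops = [{"id": 1, "lat": 39.90, "lon": 116.40},
         {"id": 2, "lat": 39.95, "lon": 116.40}]
nearby = prescreen(shops, 39.901, 116.401)
```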
And step 260, determining the shop matched with the target plaque image according to the image characteristics of the target plaque image and the shop image characteristics of a preset shop.
Through the processing of the steps, the image characteristics of the target plaque image and the store image characteristics of each store in the store database have the same size, and the image characteristics can be matched by directly calculating the similarity between the image characteristics of the target plaque image and the store image characteristics of each store. In specific implementation, the shops can be arranged in the order of similarity from high to low so as to output the shops matched with the target plaque image.
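The direct matching can be sketched as below; cosine similarity is an assumed choice of measure, since the embodiment only requires that the features have the same size and that shops be arranged from high to low similarity.

```python
import math

# Sketch of step 260: rank preset shops by the similarity between the image
# feature of the target plaque image and each same-sized shop image feature.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_shops(query_feature, shop_features):
    # Arrange shops in order of similarity from high to low.
    scored = [(shop_id, cosine_similarity(query_feature, feature))
              for shop_id, feature in shop_features.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

ranking = rank_shops([1.0, 0.0],
                     {"shop_a": [0.9, 0.1], "shop_b": [0.1, 0.9]})
```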
And step 270, respectively updating the image features of the target plaque image and the shop image features of the shops matched with the target plaque image through the shop image features of similar shops, and re-matching according to the updated results.
In some embodiments of the present application, after determining a store matching the target plaque image according to the image feature of the target plaque image and a store image feature of a preset store, the method further includes: updating the image features of the target plaque image through a preset first number of the store image feature average values with the highest matching degree in stores matched with the target plaque image; respectively determining a preset second number of shops with the highest matching degree in the shops matched with the target plaque image and similar shops in the preset shops; for a preset second number of shops, updating the shop image characteristics of the shops through the average value of the shop image characteristics of the similar shops meeting preset conditions in the similar shops of the shops; determining a reordering distance between the target plaque image and the preset shop according to the updated image characteristics of the target plaque image and the updated shop image characteristics of the preset shop; and re-determining the shops matched with the target plaque image according to the re-sequencing distance.
In a specific implementation, the shop image features of the shop may be updated first, and then the image features of the target plaque image may be updated, and the order of updating the image features of the target plaque image and the shop image features of the shop is not limited in the present application.
In some embodiments of the present application, assume that the image feature of the target plaque image is que, the shop database contains 100 shops, and the matching result of the target plaque image against the preset shops is: [reg_1, reg_2, …, reg_100], where reg_1, reg_2, …, reg_100 represent the shop image features of the shops in the database. The image feature of the target plaque image may be updated by the average value of the shop image features of a preset first number of shops with the highest matching degree among the shops matched with the target plaque image. For example, with a preset first number of 4, the image feature que of the target plaque image is updated to que′ = (reg_1 + reg_2 + reg_3 + reg_4)/4.
Similarly, a preset second number (e.g., 3) of the shops with the highest matching degree among the shops matched with the target plaque image (e.g., shops 1, 2, and 3) are first determined, together with their similar shops in the shop database (excluding themselves). Assume that the similar shops of shop 1 include shops 7, 8, …, 85; the similar shops of shop 2 include shops 3, 4, …, 69; and the similar shops of shop 3 include shops 4, 6, …, 67. Then, the shop image features of the 3 shops 1, 2, and 3 are updated by the average value of the shop image features of the similar shops satisfying the preset condition (for example, the 4 shops with the highest similarity) among their respective similar shops. Specifically, in the present embodiment, the shop image feature of shop 1 may be updated to reg′_1 = (reg_7 + … + reg_24)/4, the shop image feature of shop 2 may be updated to reg′_2 = (reg_3 + … + reg_24)/4, and the shop image feature of shop 3 may be updated to reg′_3 = (reg_4 + … + reg_24)/4.
After the update, the shop image features of the shops in the shop database become: [reg′_1, reg′_2, reg′_3, …, reg_100].
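A minimal sketch of the query-feature update follows; k1 = 4 follows the example above, the gallery-side update averages each shop's own most similar shops analogously, and all names are illustrative.

```python
# Sketch of the feature update step: the query feature que is replaced by the
# mean of the shop image features of the top-k1 matched shops, mirroring the
# que' update in the example above. k1 = 4 follows the example.
def mean_feature(features):
    n = len(features)
    return [sum(component) / n for component in zip(*features)]

def update_query_feature(ranked_shop_features, k1=4):
    return mean_feature(ranked_shop_features[:k1])

que_updated = update_query_feature(
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0], [9.0, 9.0]])
# que_updated -> [1.0, 1.0]
```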
And finally, determining the reordering distance between the target plaque image and the preset shop according to the updated image characteristics of the target plaque image and the updated shop image characteristics of the preset shop.
In some embodiments of the present application, the determining a reordering distance of the target plaque image from the preset store according to the updated image feature of the target plaque image and the updated store image feature of the preset store comprises:
by the formula d*(p, g_i) = (1 − λ)d_J(p, g_i) + λd(p, g_i), determining the reordering distance between the plaque p corresponding to the target plaque image and the plaque g_i of the preset shop; wherein d_J(p, g_i) represents the Jaccard distance between plaque p and plaque g_i, and its calculation formula is:

d_J(p, g_i) = 1 − ( Σ_j min(V_{p,g_j}, V_{g_i,g_j}) ) / ( Σ_j max(V_{p,g_j}, V_{g_i,g_j}) )

in the formula, V_{p,g_j} is the neighbor coding distance vector between the image feature of plaque p and the shop image feature corresponding to plaque g_j, and V_{g_i,g_j} is the neighbor coding distance vector between the shop image features corresponding to plaques g_i and g_j; d(p, g_i) represents the Mahalanobis distance between plaque p and plaque g_i, and its calculation formula is:

d(p, g_i) = √( (x_p − x_{g_i})ᵀ S⁻¹ (x_p − x_{g_i}) )

in the formula, x_p is the image feature of plaque p, x_{g_i} is the shop image feature corresponding to plaque g_i, and S is the image feature covariance matrix; λ is a weight coefficient, and the value of λ is determined according to specific service requirements.
For example, take x_p = que′, the updated image feature of the target plaque image, and x_{g_1} = reg′_1, the updated shop image feature of shop 1.
Through the above formula, the reordering distance between the target plaque image and the store 1 in the store database can be calculated. By analogy, the reordering distance of the target plaque image and the stores 2, 3, …, 100 in the store database may be calculated. And then, sequencing and outputting the shops in the shop database according to the reordering distance to obtain a shop sequence matched with the target plaque image.
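A minimal sketch of this combined distance follows, treating the neighbor coding distance vectors as given inputs and assuming an identity covariance matrix, which reduces the Mahalanobis term to a plain Euclidean distance; both are simplifications for illustration.

```python
import math

# Sketch of the reordering distance
#   d*(p, g_i) = (1 - lam) * d_J(p, g_i) + lam * d(p, g_i)
# combining a Jaccard distance over neighbor coding vectors with a
# (here identity-covariance) Mahalanobis distance.
def jaccard_distance(v_p, v_g):
    numerator = sum(min(a, b) for a, b in zip(v_p, v_g))
    denominator = sum(max(a, b) for a, b in zip(v_p, v_g))
    return 1.0 - numerator / denominator

def mahalanobis_identity(x_p, x_g):
    # With an identity covariance matrix this is the Euclidean distance.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x_p, x_g)))

def reorder_distance(v_p, v_g, x_p, x_g, lam=0.3):
    return (1 - lam) * jaccard_distance(v_p, v_g) + lam * mahalanobis_identity(x_p, x_g)

d = reorder_distance([1.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 0.0], [3.0, 4.0])
```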
According to the image-based shop identification method disclosed in the embodiment of the present application, the global image features of the target plaque image and the regional saliency features of the target plaque image are acquired; the image features of the target plaque image are determined according to the global image features and the regional saliency features; the image features of the target plaque image are corrected through a pre-trained K-means++ clustering model to retain the representative features among them; and the shop matched with the target plaque image is determined according to the image features of the target plaque image and the shop image features of preset shops, which solves the problem of low accuracy of image-based shop recognition. The global image features reflect the appearance difference and overall effect of the plaque but miss some distinguishing yet subtle characteristics needed for fine-grained recognition; the regional saliency features are finer-grained local features that make up for this shortcoming of the global image features. Meanwhile, correcting the image features through the K-means++ clustering model retains the representative characteristics among the image features and filters out redundant or unrepresentative ones, which improves the storage efficiency of the shop database and, by reducing the size of the image features, improves the efficiency of image recognition.
Further, the recognition result obtained by relying only on preliminary matching of image features is not good for retrieval of hard-to-distinguish samples. For example, when the target plaque image is a "KFC" plaque image, a mismatched result such as a "McDonald's" shop may be obtained. The shop recognition method disclosed in the present application updates the image features of the target "KFC" plaque image through the shop image features of the matched KFC and McDonald's shops, updates the shop image features of "KFC" through the shop image features of other "KFC" shops in the shop database, and then performs matching recognition through the updated image features. Since the shop image features of the other "KFC" shops raise the ranking position of "KFC", the accuracy of image matching can be effectively improved.
EXAMPLE III
As shown in fig. 3, the image-based shop recognition apparatus disclosed in this embodiment includes:
the feature acquisition module 310 is configured to acquire global image features of a target plaque image and acquire regional saliency features of the target plaque image;
a feature fusion module 320, configured to determine an image feature of the target plaque image according to the global image feature and the regional saliency feature;
and the matching identification module 330 is configured to determine a store matched with the target plaque image according to the image feature of the target plaque image and a store image feature of a preset store.
In some embodiments of the present application, as shown in fig. 4, the feature obtaining module 310 further includes:
the first feature obtaining sub-module 3101 is configured to obtain global image features of the target plaque image through a pre-trained neural network model.
The feature obtaining module 310 further includes:
a second feature obtaining sub-module 3102, configured to divide the target plaque image according to at least one division method, and determine a plurality of image regions;
the second feature obtaining sub-module 3102 is further configured to, for each image region, segment the image region according to color distribution, and determine at least one sub-image region included in the image region;
and for each image area, determining the regional saliency characteristics of the image area according to the spatial distance and the color distance between the sub-image areas contained in the image area.
In some embodiments of the present application, the determining the regional saliency feature of the image region according to the spatial distance and the color distance between the sub-image regions included in the image region includes:
determining the significance of each sub-image area according to the space distance and the color distance between the sub-image areas contained in the image area;
and determining the regional saliency characteristics of the image regions to which the sub-image regions belong according to the saliency of the preset number of the sub-image regions with the highest saliency.
Optionally, the determining the saliency of each sub-image region according to the spatial distance and the color distance between the sub-image regions included in the image region includes:
by the formula

S(r_k) = Σ_{r_i ≠ r_k} exp(−D_s(r_k, r_i)/σ_s²) · ω(r_i) · D_r(r_k, r_i)

determining separately the saliency S(r_k) of each sub-image region r_k contained in the image region, wherein r_i represents a sub-image region different from r_k, D_s(r_k, r_i) represents the spatial distance between sub-image regions r_k and r_i, σ_s represents the spatial distance weight, ω(r_i) represents the weight of sub-image region r_i, and D_r(r_k, r_i) is the color distance between sub-image regions r_k and r_i, whose calculation formula is:

D_r(r_k, r_i) = Σ_{u=1}^{n_k} Σ_{v=1}^{n_i} f(c_{k,u}) f(c_{i,v}) D(c_{k,u}, c_{i,v})

wherein f(c_{i,v}) represents the probability of the v-th color c_{i,v} appearing among all n_i colors in the i-th sub-image region r_i, f(c_{k,u}) represents the probability of the u-th color c_{k,u} appearing among all n_k colors in the k-th sub-image region r_k, and D(c_{k,u}, c_{i,v}) is the distance measure of colors c_{k,u} and c_{i,v} in L*a*b* color space.
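The saliency sum described above can be sketched as follows, with precomputed spatial distances, weights, and color distances standing in for the per-region color histograms of the full method; all values are illustrative, and σ_s = 0.4 follows the preferred value in the description.

```python
import math

# Sketch of the regional saliency formula:
#   S(r_k) = sum over r_i != r_k of
#            exp(-D_s(r_k, r_i) / sigma_s**2) * omega(r_i) * D_r(r_k, r_i)
def region_saliency(k, spatial_dist, omega, color_dist, sigma_s=0.4):
    saliency = 0.0
    for i in spatial_dist:
        if i == k:
            continue  # the sum runs over r_i != r_k
        saliency += (math.exp(-spatial_dist[i] / sigma_s ** 2)
                     * omega[i] * color_dist[i])
    return saliency

spatial = {1: 0.2, 2: 0.5}   # D_s(r_0, r_i), normalized spatial distances
weights = {1: 0.4, 2: 0.6}   # omega(r_i)
colors = {1: 0.8, 2: 0.3}    # D_r(r_0, r_i)
s0 = region_saliency(0, spatial, weights, colors)
```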
In some embodiments of the present application, as shown in fig. 5, the apparatus further comprises:
a feature correction module 340, configured to correct the image features of the target plaque image through a pre-trained K-means + + clustering model to retain representative features of the image features.
In some embodiments of the present application, as shown in fig. 5, the apparatus further comprises:
a reordering module 350, the reordering module to:
updating the image features of the target plaque image through a preset first number of the store image feature average values with the highest matching degree in stores matched with the target plaque image;
respectively determining a preset second number of shops with the highest matching degree in the shops matched with the target plaque image and similar shops in the preset shops;
for a preset second number of shops, updating the shop image characteristics of the shops through the average value of the shop image characteristics of the similar shops meeting preset conditions in the similar shops of the shops;
determining a reordering distance between the target plaque image and the preset shop according to the updated image characteristics of the target plaque image and the updated shop image characteristics of the preset shop;
and re-determining the shops matched with the target plaque image according to the re-sequencing distance.
Optionally, the determining a reordering distance between the target plaque image and the preset shop according to the updated image feature of the target plaque image and the updated shop image feature of the preset shop includes:
by the formula d*(p, g_i) = (1 − λ)d_J(p, g_i) + λd(p, g_i), determining the reordering distance between the plaque p corresponding to the target plaque image and the plaque g_i of the preset shop; wherein d_J(p, g_i) represents the Jaccard distance between plaque p and plaque g_i, and its calculation formula is:

d_J(p, g_i) = 1 − ( Σ_j min(V_{p,g_j}, V_{g_i,g_j}) ) / ( Σ_j max(V_{p,g_j}, V_{g_i,g_j}) )

in the formula, V_{p,g_j} is the neighbor coding distance vector between the image feature of plaque p and the shop image feature corresponding to plaque g_j, and V_{g_i,g_j} is the neighbor coding distance vector between the shop image features corresponding to plaques g_i and g_j; d(p, g_i) represents the Mahalanobis distance between plaque p and plaque g_i, and its calculation formula is:

d(p, g_i) = √( (x_p − x_{g_i})ᵀ S⁻¹ (x_p − x_{g_i}) )

in the formula, x_p is the image feature of plaque p, x_{g_i} is the shop image feature corresponding to plaque g_i, S is the image feature covariance matrix, and λ is a weight coefficient.
In some embodiments of the present application, as shown in fig. 5, the apparatus further comprises:
and the prescreening module 360 is used for determining the preset shop according to the appointed geographical position information.
In some embodiments of the present application, the feature fusion module 320 is further configured to:
and splicing the global image features and the regional saliency features to obtain a result, and taking the result as the image features of the target plaque image.
The image-based store identification device disclosed in the embodiment of the present application is used for implementing each step of the image-based store identification method described in the third and fourth embodiments of the present application, and specific implementation of each module of the device refers to the corresponding step, which is not described herein again.
The image-based shop identification device disclosed in the embodiment of the present application acquires the global image features of the target plaque image and the regional saliency features of the target plaque image; determines the image features of the target plaque image according to the global image features and the regional saliency features; and determines the shop matched with the target plaque image according to the image features of the target plaque image and the shop image features of preset shops, which helps to solve the problem of low accuracy of image-based shop recognition. The global image features reflect the appearance difference and overall effect of the plaque but miss some distinguishing yet subtle characteristics needed for fine-grained recognition; the regional saliency features are finer-grained local features that make up for this shortcoming of the global image features. Meanwhile, correcting the image features through the K-means++ clustering model retains the representative characteristics among the image features and filters out redundant or unrepresentative ones, which improves the storage efficiency of the shop database and, by reducing the size of the image features, improves the efficiency of image recognition.
Further, the recognition result obtained by relying only on preliminary matching of image features is not good for retrieval of hard-to-distinguish samples. For example, when the target plaque image is a "KFC" plaque image, a mismatched result such as a "McDonald's" shop may be obtained. The shop recognition method disclosed in the present application updates the image features of the target "KFC" plaque image through the shop image features of the matched KFC and McDonald's shops, updates the shop image features of "KFC" through the shop image features of other "KFC" shops in the shop database, and then performs matching recognition through the updated image features. Since the shop image features of the other "KFC" shops raise the ranking position of "KFC", the accuracy of image matching can be effectively improved.
Correspondingly, the application also discloses an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the store identification method based on the image according to the first embodiment and the second embodiment of the application. The electronic device can be a PC, a mobile terminal, a personal digital assistant, a tablet computer and the like.
The present application also discloses a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image-based store identification method as described in the first and second embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The image-based shop identification method and device provided by the application are introduced in detail, and a specific example is applied in the description to explain the principle and the implementation of the application, and the description of the embodiment is only used for helping to understand the method and the core idea of the application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods according to the various embodiments or some parts of the embodiments.

Claims (20)

1. An image-based store identification method is characterized by comprising the following steps:
acquiring global image characteristics of a target plaque image and acquiring regional saliency characteristics of the target plaque image;
determining the image characteristics of the target plaque image according to the global image characteristics and the regional saliency characteristics;
determining a shop matched with the target plaque image according to the image characteristics of the target plaque image and the shop image characteristics of a preset shop;
updating the image features of the target plaque image through the average value of the store image features of a preset first number of stores with the highest matching degree in the stores matched with the target plaque image;
respectively determining a preset second number of shops with the highest matching degree in the shops matched with the target plaque image and similar shops in the preset shops;
for a preset second number of shops, updating the shop image features of the preset shops through the average value of the shop image features of the similar shops meeting preset conditions in the similar shops;
determining a reordering distance between the target plaque image and the preset shop according to the updated image characteristics of the target plaque image and the updated shop image characteristics of the preset shop;
and re-determining the shops matched with the target plaque image according to the re-sequencing distance.
2. The method of claim 1, wherein the step of obtaining global image features of the target plaque image comprises:
and acquiring the global image characteristics of the target plaque image through a pre-trained neural network model.
3. The method of claim 1, wherein the step of obtaining the regional saliency characteristics of the target plaque image comprises:
dividing the target plaque image according to at least one division method to determine a plurality of image areas;
for each image area, segmenting the image area according to color distribution, and determining at least one sub-image area contained in the image area;
and for each image area, determining the regional saliency characteristics of the image area according to the spatial distance and the color distance between the sub-image areas contained in the image area.
4. The method according to claim 3, wherein the step of determining the regional saliency characteristics of the image regions by the spatial distance and the color distance between the sub-image regions included in the image regions comprises:
determining the significance of each sub-image area according to the space distance and the color distance between the sub-image areas contained in the image area;
and determining the regional saliency characteristics of the image regions to which the sub-image regions belong according to the saliency of the preset number of the sub-image regions with the highest saliency.
5. The method according to claim 4, wherein the step of determining the saliency of each sub-image region from the spatial distance and the color distance between the sub-image regions comprised by the image region comprises:
by the formula

S(r_k) = Σ_{r_i ≠ r_k} exp(−D_s(r_k, r_i)/σ_s²) · ω(r_i) · D_r(r_k, r_i)

determining separately the saliency S(r_k) of each sub-image region r_k contained in the image region, wherein r_i represents a sub-image region different from r_k, D_s(r_k, r_i) represents the spatial distance between sub-image regions r_k and r_i, σ_s represents the spatial distance weight, ω(r_i) represents the weight of sub-image region r_i, and D_r(r_k, r_i) is the color distance between sub-image regions r_k and r_i, whose calculation formula is:

D_r(r_k, r_i) = Σ_{u=1}^{n_k} Σ_{v=1}^{n_i} f(c_{k,u}) f(c_{i,v}) D(c_{k,u}, c_{i,v})

wherein f(c_{i,v}) represents the probability of the v-th color c_{i,v} appearing among all n_i colors in the i-th sub-image region r_i, f(c_{k,u}) represents the probability of the u-th color c_{k,u} appearing among all n_k colors in the k-th sub-image region r_k, and D(c_{k,u}, c_{i,v}) is the distance measure of colors c_{k,u} and c_{i,v} in L*a*b* color space.
6. The method of claim 1, wherein the step of determining a store matching the target plaque image based on image features of the target plaque image and store image features of a preset store is preceded by the step of:
modifying the image features of the target plaque image through a pre-trained K-means + + clustering model to retain representative ones of the image features.
7. The method of claim 1, wherein the step of determining a re-ordering distance of the target plaque image from the preset store based on the updated image characteristics of the target plaque image and the updated store image characteristics of the preset store comprises:
by the formula d*(p, g_i) = (1 − λ)d_J(p, g_i) + λd(p, g_i), determining the reordering distance between the plaque p corresponding to the target plaque image and the plaque g_i of the preset store; wherein d_J(p, g_i) represents the Jaccard distance between plaque p and plaque g_i, and its calculation formula is:

d_J(p, g_i) = 1 − ( Σ_j min(V_{p,g_j}, V_{g_i,g_j}) ) / ( Σ_j max(V_{p,g_j}, V_{g_i,g_j}) )

in the formula, V_{p,g_j} is the neighbor coding distance vector between the image feature of plaque p and the store image feature corresponding to plaque g_j, and V_{g_i,g_j} is the neighbor coding distance vector between the store image feature corresponding to plaque g_i of the preset store and the store image feature corresponding to plaque g_j; d(p, g_i) represents the Mahalanobis distance between plaque p and plaque g_i, and its calculation formula is:

d(p, g_i) = √( (x_p − x_{g_i})ᵀ S⁻¹ (x_p − x_{g_i}) )

in the formula, x_p is the image feature of plaque p, x_{g_i} is the store image feature corresponding to plaque g_i, S is the image feature covariance matrix, and λ is a weight coefficient.
8. The method of any of claims 1 to 6, wherein the step of determining a store matching the target plaque image based on image features of the target plaque image and store image features of a pre-set store is preceded by the step of:
and determining the preset shop according to the appointed geographical position information.
9. The method of any one of claims 1 to 6, wherein the step of determining image features of the target plaque image from the global image features and the regional saliency features comprises:
and splicing the global image features and the regional saliency features to obtain a result, and taking the result as the image features of the target plaque image.
10. An image-based store identification device, comprising:
the characteristic acquisition module is used for acquiring the global image characteristic of the target plaque image and acquiring the regional saliency characteristic of the target plaque image;
the characteristic fusion module is used for determining the image characteristics of the target plaque image according to the global image characteristics and the regional saliency characteristics;
the matching identification module is used for determining a shop matched with the target plaque image according to the image characteristics of the target plaque image and the shop image characteristics of a preset shop;
the device further comprises: a reordering module to perform the following operations:
updating the image features of the target plaque image through the average value of the store image features of a preset first number of stores with the highest matching degree in the stores matched with the target plaque image;
respectively determining a preset second number of shops with the highest matching degree in the shops matched with the target plaque image and similar shops in the preset shops;
for a preset second number of shops, updating the shop image features of the preset shops through the average value of the shop image features of the similar shops meeting preset conditions in the similar shops;
determining a reordering distance between the target plaque image and the preset shop according to the updated image characteristics of the target plaque image and the updated shop image characteristics of the preset shop;
and re-determining the shops matched with the target plaque image according to the re-sequencing distance.
11. The apparatus of claim 10, wherein the feature obtaining module further comprises:
and the first feature acquisition submodule is used for acquiring the global image features of the target plaque image through a pre-trained neural network model.
12. The apparatus of claim 10, wherein the feature obtaining module further comprises:
the second feature obtaining sub-module is used for segmenting the target plaque image according to at least one segmentation method to determine a plurality of image regions;
the second feature obtaining sub-module is further configured to, for each image region, segment the image region according to color distribution to determine at least one sub-image region contained in the image region; and,
for each image region, determine the regional saliency features of the image region according to the spatial distances and color distances between the sub-image regions contained in the image region.
13. The apparatus according to claim 12, wherein the determining the regional saliency features of the image region according to the spatial distances and color distances between the sub-image regions contained in the image region comprises:
determining the saliency of each sub-image region according to the spatial distances and color distances between the sub-image regions contained in the image region;
and determining the regional saliency features of the image region to which the sub-image regions belong according to the saliency of a preset number of sub-image regions with the highest saliency.
14. The apparatus according to claim 13, wherein said determining the saliency of each of said sub-image regions from the spatial distance and the color distance between said sub-image regions comprises:
determining the saliency $S(r_k)$ of each sub-image region $r_k$ contained in the image region by the formula

$$S(r_k)=\sum_{r_i\neq r_k}\exp\!\left(-\frac{D_s(r_k,r_i)}{\sigma_s^{2}}\right)\omega(r_i)\,D_r(r_k,r_i)$$

wherein $r_i$ represents a sub-image region different from $r_k$, $D_s(r_k,r_i)$ represents the spatial distance between sub-image regions $r_k$ and $r_i$, $\sigma_s$ represents the spatial distance weight, $\omega(r_i)$ represents the weight of sub-image region $r_i$, and $D_r(r_k,r_i)$ is the color distance between sub-image regions $r_k$ and $r_i$, calculated as:

$$D_r(r_k,r_i)=\sum_{u=1}^{n_k}\sum_{v=1}^{n_i}f(c_{k,u})\,f(c_{i,v})\,D(c_{k,u},c_{i,v})$$

wherein $f(c_{i,v})$ represents the probability of the $v$-th color $c_{i,v}$ appearing among all $n_i$ colors in the $i$-th sub-image region $r_i$, $f(c_{k,u})$ represents the probability of the $u$-th color $c_{k,u}$ appearing among all $n_k$ colors in the $k$-th sub-image region $r_k$, and $D(c_{k,u},c_{i,v})$ is the distance between colors $c_{k,u}$ and $c_{i,v}$ in $L^*a^*b^*$ color space.
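The claim-14 saliency formula can be evaluated directly once each sub-image region is summarized by a centroid, a color-probability histogram, and a weight. The sketch below is illustrative only: the shared color palette (with precomputed pairwise $L^*a^*b^*$ distances) and all input summaries are simplifying assumptions not stated in the claim.

```python
import numpy as np

def region_saliency(centroids, hists, weights, palette_dist, sigma_s=0.4):
    """S(r_k) = sum_{i != k} exp(-D_s(r_k, r_i) / sigma_s**2)
                * w(r_i) * D_r(r_k, r_i).
    centroids: (n, 2) region centers; hists: (n, m) color probabilities
    over a shared m-color palette; weights: (n,) region weights;
    palette_dist: (m, m) pairwise Lab color distances."""
    n = len(centroids)
    sal = np.zeros(n)
    for k in range(n):
        for i in range(n):
            if i == k:
                continue
            d_s = np.linalg.norm(centroids[k] - centroids[i])  # spatial distance
            d_r = hists[k] @ palette_dist @ hists[i]           # color distance D_r
            sal[k] += np.exp(-d_s / sigma_s ** 2) * weights[i] * d_r
    return sal
```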
15. The apparatus of claim 10, further comprising:
the feature correction module is used for correcting the image features of the target plaque image through a pre-trained K-means++ clustering model, so as to retain representative features among the image features.
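One plausible reading of the feature correction in claim 15 is quantizing the fused feature to its nearest centroid of the pre-trained K-means++ model, so that only a representative, cluster-level feature is retained. The sketch below assumes the centroids were learned offline from a corpus of plaque features; the function name and interface are hypothetical.

```python
import numpy as np

def correct_feature(feat, centroids):
    """Snap an image feature to the nearest pre-trained K-means++
    centroid (one possible interpretation of the claim-15 correction;
    'centroids' are assumed to come from an offline clustering run)."""
    dists = np.linalg.norm(centroids - feat, axis=1)  # distance to each centroid
    return centroids[np.argmin(dists)]                # representative feature
```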
19. The apparatus of claim 10, wherein determining the reordering distance between the target plaque image and the preset shop according to the updated image features of the target plaque image and the updated shop image features of the preset shop comprises:
determining, by the formula $d^{*}(p,g_i)=(1-\lambda)\,d_J(p,g_i)+\lambda\,d(p,g_i)$, the reordering distance between the plaque $p$ corresponding to the target plaque image and the plaque $g_i$ of the preset shop; wherein $d_J(p,g_i)$ is the Jaccard distance between plaque $p$ and plaque $g_i$, calculated as

$$d_J(p,g_i)=1-\frac{\sum_{j=1}^{N}\min\left(V_{p,g_j},\,V_{g_i,g_j}\right)}{\sum_{j=1}^{N}\max\left(V_{p,g_j},\,V_{g_i,g_j}\right)}$$

wherein $V_{p,g_j}$ is the neighbor-encoding distance vector between the image features of plaque $p$ and the shop image features corresponding to plaque $g_j$, and $V_{g_i,g_j}$ is the neighbor-encoding distance vector between the shop image features corresponding to the preset shop's plaque $g_i$ and those corresponding to plaque $g_j$; $d(p,g_i)$ is the Mahalanobis distance between plaque $p$ and plaque $g_i$, calculated as

$$d(p,g_i)=\sqrt{(x_p-x_{g_i})^{\top}\,\Sigma^{-1}\,(x_p-x_{g_i})}$$

wherein $x_p$ is the image feature of plaque $p$, $x_{g_i}$ is the shop image feature corresponding to plaque $g_i$, $\Sigma$ is the image feature covariance matrix, and $\lambda$ is a weight coefficient.
17. The apparatus of any one of claims 10 to 14, further comprising:
the prescreening module is used for determining the preset shops according to specified geographical position information.
18. The apparatus of any of claims 10 to 14, wherein the feature fusion module is further configured to:
concatenating the global image features and the regional saliency features, and taking the concatenation result as the image features of the target plaque image.
19. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the image-based store identification method of any one of claims 1 to 9 when executing the computer program.
20. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when being executed by a processor, carries out the steps of the image-based shop recognition method of any one of claims 1 to 9.
CN201811309281.3A 2018-11-05 2018-11-05 Shop identification method and device based on image and electronic equipment Active CN109711399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811309281.3A CN109711399B (en) 2018-11-05 2018-11-05 Shop identification method and device based on image and electronic equipment


Publications (2)

Publication Number Publication Date
CN109711399A CN109711399A (en) 2019-05-03
CN109711399B true CN109711399B (en) 2021-04-27

Family

ID=66254284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811309281.3A Active CN109711399B (en) 2018-11-05 2018-11-05 Shop identification method and device based on image and electronic equipment

Country Status (1)

Country Link
CN (1) CN109711399B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110175655B (en) * 2019-06-03 2020-12-25 中国科学技术大学 Data identification method and device, storage medium and electronic equipment
CN110223050A (en) * 2019-06-24 2019-09-10 广东工业大学 A kind of verification method and relevant apparatus of merchant store fronts title
CN110335270B (en) * 2019-07-09 2022-09-13 华北电力大学(保定) Power transmission line defect detection method based on hierarchical regional feature fusion learning
CN110796664B (en) * 2019-10-14 2023-05-23 北京字节跳动网络技术有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN112784086A (en) * 2021-01-28 2021-05-11 北京有竹居网络技术有限公司 Picture screening method and device, storage medium and electronic equipment
CN113065559B (en) * 2021-06-03 2021-08-27 城云科技(中国)有限公司 Image comparison method and device, electronic equipment and storage medium
CN114169930B (en) * 2021-12-07 2022-12-13 钻技(上海)信息科技有限公司 Online and offline cooperative store accurate marketing method and system
TWI832642B (en) * 2022-12-28 2024-02-11 國立中央大學 Image processing method for robust signboard detection and recognition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101369316A (en) * 2008-07-09 2009-02-18 东华大学 Image characteristics extraction method based on global and local structure amalgamation
CN102129693A (en) * 2011-03-15 2011-07-20 清华大学 Image vision significance calculation method based on color histogram and global contrast
CN106203746A (en) * 2016-09-30 2016-12-07 携程计算机技术(上海)有限公司 Hotel group divides and the method for requirement forecasting
CN107122701A (en) * 2017-03-03 2017-09-01 华南理工大学 A kind of traffic route sign based on saliency and deep learning
CN107451156A (en) * 2016-05-31 2017-12-08 杭州华为企业通信技术有限公司 A kind of image recognition methods and identification device again
CN108280469A (en) * 2018-01-16 2018-07-13 佛山市顺德区中山大学研究院 A kind of supermarket's commodity image recognition methods based on rarefaction representation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10354159B2 (en) * 2016-09-06 2019-07-16 Carnegie Mellon University Methods and software for detecting objects in an image using a contextual multiscale fast region-based convolutional neural network
KR101892740B1 (en) * 2016-10-11 2018-08-28 한국전자통신연구원 Method for generating integral image marker and system for executing the method


Also Published As

Publication number Publication date
CN109711399A (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN109711399B (en) Shop identification method and device based on image and electronic equipment
CN110163640B (en) Method for implanting advertisement in video and computer equipment
CN108229468B (en) Vehicle appearance feature recognition and vehicle retrieval method and device, storage medium and electronic equipment
CN109977262B (en) Method and device for acquiring candidate segments from video and processing equipment
US9042648B2 (en) Salient object segmentation
CN105493078B (en) Colored sketches picture search
CN109961051A (en) A kind of pedestrian's recognition methods again extracted based on cluster and blocking characteristic
CN109409994A (en) The methods, devices and systems of analog subscriber garments worn ornaments
WO2017181892A1 (en) Foreground segmentation method and device
CN110111338A (en) A kind of visual tracking method based on the segmentation of super-pixel time and space significance
CN112967341B (en) Indoor visual positioning method, system, equipment and storage medium based on live-action image
Lee et al. Photographic composition classification and dominant geometric element detection for outdoor scenes
CN108509925B (en) Pedestrian re-identification method based on visual bag-of-words model
CN106407978B (en) Method for detecting salient object in unconstrained video by combining similarity degree
CN108198172B (en) Image significance detection method and device
CN107977948B (en) Salient map fusion method facing community image
CN109636809B (en) Image segmentation level selection method based on scale perception
Tatzgern Situated visualization in augmented reality
CN103995864B (en) A kind of image search method and device
Fond et al. Facade proposals for urban augmented reality
CN110956213A (en) Method and device for generating remote sensing image feature library and method and device for retrieving remote sensing image
CN110222772B (en) Medical image annotation recommendation method based on block-level active learning
CN114743139A (en) Video scene retrieval method and device, electronic equipment and readable storage medium
CN110196917A (en) Personalized LOGO format method for customizing, system and storage medium
CN113139540B (en) Backboard detection method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant