CN112200783B - Commodity image processing method and device - Google Patents


Info

Publication number
CN112200783B
CN112200783B (granted from application CN202011059920.2A)
Authority
CN
China
Prior art keywords
image
sub
commodity
images
similarity
Prior art date
Legal status
Active
Application number
CN202011059920.2A
Other languages
Chinese (zh)
Other versions
CN112200783A (en)
Inventor
隋心
张静军
姜琳
Current Assignee
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd
Priority claimed from application CN202011059920.2A
Publication of CN112200783A
Application granted
Publication of CN112200783B


Abstract

The embodiment of the application provides a commodity image processing method comprising the following steps: acquiring a first commodity image and a second commodity image, where the two images have the same image size; dividing the first commodity image into at least two sub-images; and determining a first similarity between the first commodity image and the second commodity image by using the at least two sub-images. In the present application, the weight of a sub-image located in the center area of the first commodity image is higher than the weight of a sub-image located in the edge area of the first commodity image. In this way, when the first similarity is calculated, the sub-images in the central area contribute more, while the influence of the sub-images in the edge area is correspondingly weakened, so that the similarity between the first commodity image and the second commodity image is determined accurately.

Description

Commodity image processing method and device
Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for processing commodity images.
Background
With the development of the network mall, a user can purchase goods in the network mall. The user can know the relevant information of the commodity by browsing the commodity image in the network mall.
In some scenarios, such as identical or similar commodity recommendation, it may be necessary to determine the degree of similarity between commodity images. However, current image processing methods cannot accurately determine the similarity between commodity images.
Therefore, a scheme is urgently needed to accurately determine the similarity between commodity images.
Disclosure of Invention
The application aims to solve the technical problem of accurately determining the similarity between commodity images and provides a commodity image processing method and device.
In a first aspect, an embodiment of the present application provides a method for processing a commodity image, where the method includes:
Acquiring a first commodity image and a second commodity image, wherein the image sizes of the first commodity image and the second commodity image are the same;
dividing the first merchandise image into at least two sub-images;
for each sub-image in the at least two sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring a similarity between each sub-image and its determined matched image area in the second commodity image;
determining a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area, and the weight of each sub-image; wherein:
The weight of the sub-image located in the center area of the first commodity image is higher than the weight of the sub-image located in the edge area of the first commodity image.
In one implementation, the dividing the first merchandise image into at least two sub-images includes:
dividing the first commodity image into a×b sub-images in an a×b manner, where a and b are natural numbers, a×b is a natural number greater than 2, and a and b are not both equal to 2.
In one implementation, the at least two sub-images include a first sub-image, and the determining an image area in the second merchandise image that matches the first sub-image includes:
determining the image area in the second commodity image that has the highest matching degree with the first sub-image as the image area matching the first sub-image.
In one implementation, the method further comprises:
calculating, for each sub-image, a degree of overlap between the sub-image and its determined matched image area;
the determining the first similarity of the first commodity image and the second commodity image then includes:
determining the first similarity of the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image, the degree of overlap between each sub-image and its matched image area, and the weight of each sub-image.
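The patent does not define how the "degree of overlap" is computed. One plausible reading, shown here as an illustrative sketch only, is the intersection-over-union (IoU) of the sub-image's own position in the first image and the position of its matched area in the second image; the box representation and the IoU choice are assumptions, not part of the patent:

```python
def overlap_degree(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (row0, col0, row1, col1). Returns 1.0 for identical boxes and
    0.0 for disjoint ones."""
    r0 = max(box_a[0], box_b[0])
    c0 = max(box_a[1], box_b[1])
    r1 = min(box_a[2], box_b[2])
    c1 = min(box_a[3], box_b[3])
    inter = max(0, r1 - r0) * max(0, c1 - c0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A sub-image occupying rows/cols 0..4 versus a matched area at 2..6:
iou = overlap_degree((0, 0, 4, 4), (2, 2, 6, 6))
```

Under this reading, a matched area found far from the sub-image's own position contributes a low overlap degree, which the weighted combination below can then penalize.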
In one implementation, the method further comprises:
dividing the first commodity image into c×d sub-images in a c×d manner, where c and d are both natural numbers and c×d is a natural number greater than 1; and a is not equal to c and/or b is not equal to d;
for each sub-image in the c×d sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring a similarity between each of the c×d sub-images and its determined matched image area in the second commodity image;
determining a second similarity between the first commodity image and the second commodity image according to the similarity between each of the c×d sub-images and its matched image area, and the weight of each of the c×d sub-images;
determining a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
In one implementation, if both c and d are 2, or c×d is equal to 2, the weights of the c×d sub-images are all the same; if c×d is greater than 2 and c and d are not both equal to 2, the weight of a sub-image located in the central area of the first commodity image, among the c×d sub-images, is higher than the weight of a sub-image located in the edge area of the first commodity image.
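The rule for combining the first similarity (from the a×b division) and the second similarity (from the c×d division) into the third similarity is left open by the text above. A simple convex combination is one possible sketch; the parameter `alpha` and the function name are hypothetical, not from the patent:

```python
def third_similarity(first_sim: float, second_sim: float, alpha: float = 0.5) -> float:
    """Combine the similarities obtained from two different grid divisions
    into a single score. alpha is a hypothetical mixing weight in [0, 1];
    alpha = 0.5 averages the two scales equally."""
    return alpha * first_sim + (1.0 - alpha) * second_sim

# First similarity 0.8 from the a*b grid, second similarity 0.6 from c*d:
combined = third_similarity(0.8, 0.6)
```

Using two divisions at different granularities makes the final score less sensitive to where the grid lines happen to fall on the commodity.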
In one implementation, the method further comprises:
And when the first similarity of the first commodity image and the second commodity image is higher than a first threshold value, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodity.
In a second aspect, an embodiment of the present application provides a processing apparatus for a commodity image, the apparatus including:
A first acquisition unit configured to acquire a first commodity image and a second commodity image, the first commodity image and the second commodity image having the same image size;
A first dividing unit configured to divide the first commodity image into at least two sub-images;
a first determining unit, configured to determine, for each sub-image in the at least two sub-images, an image area in the second commodity image that matches the sub-image;
a second obtaining unit, configured to obtain a similarity between each sub-image and its determined matched image area in the second commodity image;
a second determining unit, configured to determine a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area, and the weight of each sub-image; wherein:
The weight of the sub-image located in the center area of the first commodity image is higher than the weight of the sub-image located in the edge area of the first commodity image.
In one implementation, the first dividing unit is configured to:
dividing the first commodity image into a×b sub-images in an a×b manner, where a and b are natural numbers, a×b is a natural number greater than 2, and a and b are not both equal to 2.
In one implementation, the at least two sub-images include a first sub-image, and the first determining unit is configured to:
And determining an image area with the highest matching degree with the first sub-image in the second commodity image as an image area matched with the first sub-image.
In one implementation, the apparatus further comprises:
a first calculating unit, configured to calculate, for each sub-image, a degree of overlap between the sub-image and its determined matched image area;
the second determining unit is configured to:
determine the first similarity of the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image, the degree of overlap between each sub-image and its matched image area, and the weight of each sub-image.
In one implementation, the apparatus further comprises:
a second dividing unit, configured to divide the first commodity image into c×d sub-images in a c×d manner, where c and d are both natural numbers and c×d is a natural number greater than 1; and a is not equal to c and/or b is not equal to d;
a third determining unit, configured to determine, for each sub-image in the c×d sub-images, an image area in the second commodity image that matches the sub-image;
a third obtaining unit, configured to obtain a similarity between each of the c×d sub-images and its determined matched image area in the second commodity image;
a fourth determining unit, configured to determine a second similarity between the first commodity image and the second commodity image according to the similarity between each of the c×d sub-images and its matched image area, and the weight of each of the c×d sub-images;
a fifth determining unit, configured to determine a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
In one implementation, if both c and d are 2, or c×d is equal to 2, the weights of the c×d sub-images are all the same; if c×d is greater than 2 and c and d are not both equal to 2, the weight of a sub-image located in the central area of the first commodity image, among the c×d sub-images, is higher than the weight of a sub-image located in the edge area of the first commodity image.
In one implementation, the apparatus further comprises:
a sixth determining unit configured to determine that a commodity included in the first commodity image and a commodity included in the second commodity image are the same or similar commodity when a first similarity of the first commodity image and the second commodity image is higher than a first threshold.
In a third aspect, an embodiment of the present application provides a processing apparatus for commodity images, including a memory, and one or more programs, wherein the one or more programs are stored in the memory, and configured to be executed by one or more processors, the one or more programs including instructions for:
Acquiring a first commodity image and a second commodity image, wherein the image sizes of the first commodity image and the second commodity image are the same;
dividing the first merchandise image into at least two sub-images;
for each sub-image in the at least two sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring a similarity between each sub-image and its determined matched image area in the second commodity image;
determining a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area, and the weight of each sub-image; wherein:
The weight of the sub-image located in the center area of the first commodity image is higher than the weight of the sub-image located in the edge area of the first commodity image.
In one implementation, the dividing the first merchandise image into at least two sub-images includes:
dividing the first commodity image into a×b sub-images in an a×b manner, where a and b are natural numbers, a×b is a natural number greater than 2, and a and b are not both equal to 2.
In one implementation, the at least two sub-images include a first sub-image, and the determining an image area in the second merchandise image that matches the first sub-image includes:
determining the image area in the second commodity image that has the highest matching degree with the first sub-image as the image area matching the first sub-image.
In one implementation, the operations further comprise:
calculating, for each sub-image, a degree of overlap between the sub-image and its determined matched image area;
the determining the first similarity of the first commodity image and the second commodity image then includes:
determining the first similarity of the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image, the degree of overlap between each sub-image and its matched image area, and the weight of each sub-image.
In one implementation, the operations further comprise:
dividing the first commodity image into c×d sub-images in a c×d manner, where c and d are both natural numbers and c×d is a natural number greater than 1; and a is not equal to c and/or b is not equal to d;
for each sub-image in the c×d sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring a similarity between each of the c×d sub-images and its determined matched image area in the second commodity image;
determining a second similarity between the first commodity image and the second commodity image according to the similarity between each of the c×d sub-images and its matched image area, and the weight of each of the c×d sub-images;
determining a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
In one implementation, if both c and d are 2, or c×d is equal to 2, the weights of the c×d sub-images are all the same; if c×d is greater than 2 and c and d are not both equal to 2, the weight of a sub-image located in the central area of the first commodity image, among the c×d sub-images, is higher than the weight of a sub-image located in the edge area of the first commodity image.
In one implementation, the operations further comprise:
And when the first similarity of the first commodity image and the second commodity image is higher than a first threshold value, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodity.
In a fourth aspect, embodiments of the present application provide a computer-readable medium having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the method of any of the first aspects above.
Compared with the prior art, the embodiment of the application has the following advantages:
The embodiment of the application provides a commodity image processing method that can accurately determine the similarity between a first commodity image and a second commodity image. Specifically, a first commodity image and a second commodity image of the same image size are acquired. The first commodity image is then divided into at least two sub-images, each of which is matched against the second commodity image to determine the image area in the second commodity image that matches it, and the similarity between each sub-image and its matched image area is obtained. The similarity of the first commodity image and the second commodity image is then determined from these per-sub-image similarities. In the present application, each sub-image is given a corresponding weight, and the first similarity of the first commodity image and the second commodity image is determined according to the similarity between each sub-image and its matched image area together with the weight of each sub-image. For commodity images, the commodity is typically located at the center of the image, while the edge portions may be a background image or a blank area.
Therefore, to improve the accuracy of the determined similarity between the first commodity image and the second commodity image, in the present application the weight of a sub-image located in the center area of the first commodity image is higher than the weight of a sub-image located in the edge area. In this way, the similarity between a central sub-image and its matched image area plays a greater role in calculating the first similarity, while the influence of the similarity between an edge sub-image and its matched image area is correspondingly weakened. The similarity between the first commodity image and the second commodity image can thus be determined accurately.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings may be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a flow chart of a commodity image processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a first commodity image according to an embodiment of the present application;
Fig. 3 is a flow chart of a commodity image processing method according to another embodiment of the present application;
Fig. 4 is a schematic structural diagram of a commodity image processing apparatus according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a client according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the present application better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The inventor of the present application has found that a user can know related information of a commodity by browsing a commodity image in a network mall. In some scenarios, such as similar or identical merchandise recommendation scenarios, the user may be recommended with respect to the same or similar merchandise in combination with the similarity between merchandise images. For example, when the similarity between the two commodity images is greater than a preset threshold, it is determined that the commodities corresponding to the two commodity images are the same or similar commodities.
The inventor of the present application also found through research that current image processing methods cannot accurately determine the similarity between commodity images, and consequently cannot accurately determine whether the commodities in two commodity images are the same or similar. This is because current methods treat every pixel in the whole commodity image without distinction; for example, they compute similarity by segmenting the image according to the semantic relationships of pixels across the whole image. But a commodity image has a certain specificity compared with other types of images, such as scenery images: the commodity is generally located in the central area of the image, so that a user can intuitively grasp its appearance, while the edge area of the image is generally a blank area or an image background. If the pixels of every region in the image are treated indiscriminately, the blank or background regions will affect the accuracy of the similarity result. For example, for two commodity images that show different commodities but share the same image layout, current methods may report a high similarity.
In order to solve the above problems, embodiments of the present application provide a method for processing commodity images, which can accurately determine the similarity between two commodity images.
Various non-limiting embodiments of the present application are described in detail below with reference to the attached drawing figures.
Exemplary method
Referring to fig. 1, the flow chart of a commodity image processing method according to an embodiment of the present application is shown.
The method for processing the commodity image provided by the embodiment of the present application may be executed by a controller or a processor having a data processing function, or may be executed by a device including the controller or the processor, and the embodiment of the present application is not particularly limited. Wherein the device comprising the controller or processor includes, but is not limited to, a terminal device and a server.
In the present embodiment, the processing method of the commodity image shown in fig. 1 may include, for example, the following steps S101 to S105.
S101: and acquiring a first commodity image and a second commodity image, wherein the first commodity image and the second commodity image have the same image size.
The first commodity image and the second commodity image in the embodiment of the application are both images that include a commodity. When determining the similarity of two commodity images, if the two images differ in size, the accuracy of the similarity calculation may be affected. Therefore, in the embodiment of the present application, the image sizes of the first commodity image and the second commodity image are the same.
It should be noted that, if the original image sizes of the acquired first commodity image and the second commodity image are different, before executing step S101 of the present application, the original image of the first commodity image and/or the second commodity image needs to be processed in advance, so that the image sizes of the processed first commodity image and second commodity image are the same.
In one example of an embodiment of the present application, the first merchandise image and the second merchandise image may be merchandise images provided by merchants in an online marketplace. In yet another example, the first merchandise image or the second merchandise image may be obtained after sizing merchandise images provided by merchants in the online mall. For example, if the image size of the commodity image provided by the first merchant is different from the image size of the commodity image provided by the second merchant, in one example, the first commodity image provided by the first merchant may be acquired, the third commodity image provided by the second merchant may be acquired, and the third commodity image may be subjected to size processing to obtain the second commodity image having the same image size as the first commodity image. In yet another example, a second merchandise image provided by a first merchant may be acquired, a fourth merchandise image provided by the second merchant may be acquired, and the fourth merchandise image may be sized to obtain a first merchandise image having the same image size as the second merchandise image.
S102: the first merchandise image is divided into at least two sub-images.
In the embodiment of the present application, the first commodity image may be divided into at least two sub-images either evenly or unevenly; the division manner is not specifically limited here. The divided sub-images include sub-images located in the central area of the first commodity image and sub-images located in the edge area of the first commodity image.
In one example, the first commodity image may be divided into a×b sub-images in an average division manner, where a and b are natural numbers, a×b is a natural number greater than 2, and a and b are not equal to 2 at the same time. In the following embodiments, the schemes provided in the embodiments of the present application will be described by taking the division of the first commodity image into a×b sub-images as an example, unless otherwise specified.
In the embodiment of the application, for a commodity image, the commodity is generally located at the center of the whole image, and the edge portion may be a background image or a blank area. Therefore, to improve the accuracy of the determined similarity between the first commodity image and the second commodity image, the first commodity image may be divided in an a×b manner to obtain a×b sub-images. When the similarity of the first commodity image and the second commodity image is calculated, the a×b sub-images are treated differently according to their relative positions in the first commodity image, so that the sub-images located in the central area of the first commodity image play a greater role in determining the similarity, while the role of the sub-images located in the edge area is correspondingly weakened.
In the embodiment of the present application, dividing the first commodity image in an a×b manner means dividing the height of the first commodity image into a equal parts and the width into b equal parts, thereby obtaining a×b sub-images. Here a and b may be equal or unequal; the embodiment of the present application does not limit this.
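The a×b division described above can be sketched as follows. This is an illustrative example only (the function name `split_into_grid` and the use of NumPy are assumptions, not part of the patent); `numpy.array_split` is used so that edge tiles absorb any remainder when the height or width is not exactly divisible:

```python
import numpy as np

def split_into_grid(image: np.ndarray, a: int, b: int) -> list:
    """Divide an image into a*b sub-images: the height into a (near-)equal
    parts and the width into b (near-)equal parts, row-major order."""
    h, w = image.shape[:2]
    row_idx = np.array_split(np.arange(h), a)
    col_idx = np.array_split(np.arange(w), b)
    subs = []
    for rows in row_idx:
        for cols in col_idx:
            subs.append(image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1])
    return subs

# Example: a 6x6 image divided in a 3x3 manner yields nine 2x2 sub-images;
# the fifth tile (index 4) is the one in the central area.
img = np.arange(36).reshape(6, 6)
tiles = split_into_grid(img, 3, 3)
```

With a = b = 3, the tile at index 4 corresponds to the central area of the first commodity image, which the method weights more heavily.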
In the embodiment of the present application, considering that if a×b is equal to 2, for example, a is equal to 1 and b is equal to 2, or a is equal to 2 and b is equal to 1, the relative positions of the two sub-images obtained by dividing are the same in the first commodity image, and there is no division between the central area and the edge area, so in the present application, a×b is greater than 2. In addition, if a is equal to 2 and b is equal to 2, the relative positions of the four sub-images obtained by division in the first commodity image are the same, and there is no division of the center area and the edge area, and therefore, in the present application, a and b are not equal to 2 at the same time.
S103: and respectively determining an image area matched with each sub-image in the second commodity image aiming at each sub-image in the at least two sub-images.
S104: and obtaining the similarity of the image areas matched with each sub-image in the sub-images and the determined second commodity image.
In the embodiment of the present application, after the first commodity image is divided according to a×b, an image area matched with each sub-image in the second commodity image may be determined for each sub-image in the a×b sub-images, and the similarity between each sub-image and the determined image area matched with each sub-image may be further obtained.
In a specific implementation, S103 may, for example, match each sub-image against the second commodity image using a classical template matching method, so as to obtain the image area in the second commodity image that matches the sub-image. For convenience of description, any one of the a×b sub-images is referred to as a first sub-image. In the embodiment of the application, the image area matching the first sub-image in the second commodity image refers to an image area whose image similarity with the first sub-image is relatively high. In practical applications, for a given sub-image such as the first sub-image, there may be more than one image area in the second commodity image whose matching degree with it is relatively high. In one implementation of the embodiment of the present application, the image area in the second commodity image with the highest matching degree with the first sub-image may therefore be determined as the image area matching the first sub-image. The template matching method itself is not described in detail herein.
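As a minimal sketch of exhaustive template matching: the patent only says a "classical template matching method", so the sum-of-squared-differences (SSD) criterion, the mapping of the score into [0, 1], and all names below are illustrative assumptions:

```python
import numpy as np

def best_match(template: np.ndarray, search: np.ndarray):
    """Slide the template over every position of the search image and
    return the top-left (row, col) of the best-matching region together
    with an SSD-based score mapped into (0, 1] (1.0 = exact match)."""
    th, tw = template.shape
    sh, sw = search.shape
    best_ssd, best_pos = np.inf, (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            window = search[r:r + th, c:c + tw].astype(float)
            ssd = np.sum((window - template) ** 2)
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    score = 1.0 / (1.0 + best_ssd / (th * tw))  # monotone map of SSD to (0, 1]
    return best_pos, score

# Plant a 2x2 patch of value 7 at rows 3-4, cols 4-5 of an 8x8 image,
# then search for it with an identical template.
search = np.zeros((8, 8))
search[3:5, 4:6] = 7.0
template = np.full((2, 2), 7.0)
pos, score = best_match(template, search)
```

Production code would typically use an optimized routine (e.g. normalized cross-correlation) instead of this O(n^2) double loop, but the principle of picking the highest-matching-degree area is the same.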
Regarding S104, it should be noted that, in one example, when determining the image area in the second commodity image that matches the first sub-image, as described above, the image area with the highest matching degree with the first sub-image may be determined as the matched image area. In this way, after S103 is performed, the similarity between the first sub-image and its matched image area can be obtained directly.
In yet another example, an algorithm such as histogram matching or matrix decomposition may also be used to calculate the similarity between the first sub-image and the image area matching it. Histogram matching and matrix decomposition algorithms are not described in detail here.
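A histogram-based similarity of the kind mentioned above can be sketched as follows; the histogram-intersection measure and the bin count used here are illustrative assumptions, not the specific algorithm of the embodiment:

```python
import numpy as np

def histogram_similarity(img_a: np.ndarray, img_b: np.ndarray,
                         bins: int = 16) -> float:
    """Compare two grayscale regions by histogram intersection.
    Returns a value in [0, 1]; 1.0 means identical distributions."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()  # normalize so the measure is size-independent
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())
```

Note that histogram comparison ignores spatial layout, which is why it is used here only to score a sub-image against an already-located matching area.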
S105: determining a first similarity of the first commodity image and the second commodity image according to the similarity of the image area matched with each sub-image in each sub-image and the determined second commodity image and the weight of each sub-image; wherein: the weight of the sub-image located in the center area of the first commodity image is higher than the weight of the sub-image located in the edge area of the first commodity image.
After the similarity between each sub-image and its determined matched image area is obtained in S104, the similarity between the first commodity image and the second commodity image may be determined from these similarities. In the present application, when doing so, a corresponding weight may be assigned to each sub-image, and the first similarity between the first commodity image and the second commodity image may be determined according to the similarity between each sub-image and its matched image area, together with the weight of each sub-image.
In the present application, among the a×b sub-images, the weight of a sub-image located in the central region of the first commodity image is higher than the weight of a sub-image located in the edge region of the first commodity image. Here, "center region" and "edge region" may be regarded as two relative concepts. For two sub-images, if sub-image 1 is closer to the center of the first commodity image than sub-image 2, then sub-image 1 may be considered to be located in the center region of the first commodity image relative to sub-image 2, and correspondingly sub-image 2 is located in the edge region of the first commodity image relative to sub-image 1.
In one example, the first similarity may be calculated using the following formula (1):

Γ = Σ_{i=1}^{a×b} α_i · s_i (1)

In formula (1):
Γ is the first similarity;
i is the subscript of a sub-image in the first commodity image, and i ranges from 1 to a×b;
s_i is the similarity between the i-th sub-image in the first commodity image and the image area in the second commodity image matched with the i-th sub-image;
α_i is the weight of the i-th sub-image in the first commodity image.
Regarding α_i, the division of the first commodity image in a 5×5 manner will now be described by way of example.
Referring to fig. 2, fig. 2 is a schematic diagram of a first commodity image according to an embodiment of the present application. As shown in fig. 2, the first commodity image is divided into 25 sub-images in a 5×5 manner. In the present application, the weight of the sub-image located in the most central region of the first commodity image (i.e., the sub-image numbered 1 in fig. 2) may be set to 0.9; the weight of the 8 sub-images located in the sub-central region (i.e., the sub-images numbered 2 to 9 in fig. 2) may be set to 0.7; and the weight of the 16 sub-images located in the edge region (i.e., the sub-images numbered 10 to 25 in fig. 2) may be set to 0.4.
Because, among the a×b sub-images, the weight of a sub-image located in the central area of the first commodity image is higher than the weight of a sub-image located in the edge area, when the first similarity is calculated, the similarity between a central sub-image and its matched image area plays a larger role, while the influence of the similarity between an edge sub-image and its matched image area on the first similarity is correspondingly weakened, so that the accuracy of the first similarity is effectively improved.
In an example of the embodiment of the present application, after the first similarity is calculated, whether the commodity contained in the first commodity image and the commodity contained in the second commodity image are the same or similar commodities may further be determined according to the first similarity. In one implementation, if the first similarity is higher than a first threshold, the two commodities may be considered the same or similar. The first threshold mentioned here may be set according to the practical situation, and the embodiment of the present application does not specifically limit it.
In addition, among the a×b sub-images, the weight of a sub-image located in the central area of the first commodity image is higher than the weight of a sub-image located in the edge area. Thus:
In one example, suppose the similarity between a sub-image located in the center area of the first commodity image and its matched image area in the second commodity image is high, while the similarity between a sub-image located in the edge area and its matched image area is low. Since the weight of the edge sub-image is low, the resulting first similarity is still relatively high, for example, higher than the first threshold. For commodity images, a low similarity between an edge sub-image and its matched image area may simply be caused by the first commodity image and the second commodity image using different background images. Therefore, with this solution, the commodity contained in the first commodity image and the commodity contained in the second commodity image can still be accurately determined to be the same or similar commodity even when the two images use different backgrounds.
In yet another example, suppose the similarity between a sub-image located in the center area of the first commodity image and its matched image area in the second commodity image is low, while the similarity between a sub-image located in the edge area and its matched image area is high. Since the weight of the center sub-image is higher and the weight of the edge sub-image is lower, the resulting first similarity is still relatively low, for example, lower than the first threshold. For commodity images, a high similarity between an edge sub-image and its matched image area may simply be caused by the first commodity image and the second commodity image using the same background image. Therefore, with this solution, it can be accurately determined that the commodities contained in the two images are not the same or similar commodity even when the two images use the same background.
In some embodiments, as described above, the edges of a commodity image are mostly background or blank areas. Therefore, the image area determined in the second commodity image as matching the first sub-image may not be accurate. For example, an image area in the upper right corner of the second commodity image may be matched with a sub-image in the upper left corner of the first commodity image merely because both are blank areas with a high similarity between them. In this case, that high similarity degrades the accuracy of the first similarity.
To avoid the above problem and further improve the accuracy of the calculated similarity between the first commodity image and the second commodity image, in one implementation of the embodiment of the present application, the overlap degree (Intersection over Union, IOU) between each sub-image and the determined image area matching it in the second commodity image may further be calculated. The overlap degree between the first sub-image and the image area matching it may be calculated using the following formula (2):

IOU = S(A_1 ∩ B_1) / S(A_1 ∪ B_1) (2)

In formula (2):
A_1 is the first sub-image in the first commodity image;
B_1 is the image area in the second commodity image that matches the first sub-image;
S(A_1 ∩ B_1) is the area of the intersection of the first sub-image and the matched image area;
S(A_1 ∪ B_1) is the area of the union of the first sub-image and the matched image area.
As can be seen from the formula (2), if the position of the image region in the second commodity image, which is matched with the first sub-image, is closer to the position of the first sub-image in the first commodity image, the IOU is larger, and otherwise, the IOU is smaller. The IOU is larger than or equal to 0 and smaller than or equal to 1. For example, if the first sub-image is the upper left sub-image in the first merchandise image and the image region in the second merchandise image matching the first sub-image is located at the upper right corner of the second merchandise image, the value of the IOU is 0, because the numerator of formula (2) is 0. For another example, if the first sub-image is the upper left sub-image in the first commodity image, and the image region in the second commodity image that matches the first sub-image is located at the upper left corner of the second commodity image, the value of the IOU is 1, because the numerator and denominator of the formula (2) are equal at this time.
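For regions described by axis-aligned corner coordinates, the IOU of formula (2) can be computed as in the following sketch; the (x1, y1, x2, y2) box representation is an assumption of this example:

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2),
    where (x1, y1) is the top-left corner and (x2, y2) the bottom-right."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 when boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0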
After the overlap degree between each sub-image and its determined matched image area is calculated, the first similarity may be calculated in combination with these overlap degrees.
In one example, the first similarity may be calculated using the following formula (3):

Γ = Σ_{i=1}^{a×b} α_i · IOU_i · s_i (3)

In formula (3):
Γ is the first similarity;
i is the subscript of a sub-image in the first commodity image, and i ranges from 1 to a×b;
s_i is the similarity between the i-th sub-image in the first commodity image and the image area in the second commodity image matched with the i-th sub-image;
α_i is the weight of the i-th sub-image in the first commodity image;
IOU_i is the overlap degree between the i-th sub-image in the first commodity image and the image area in the second commodity image matched with the i-th sub-image.
It will be appreciated that, for the i-th sub-image, the larger IOU_i is, the greater the effect s_i has on the first similarity; conversely, the smaller IOU_i is, the smaller that effect. As described above, a small IOU_i suggests that the determined image area in the second commodity image matching the i-th sub-image may be inaccurate; in that case formula (3) weakens the effect of s_i on the first similarity, thereby improving its accuracy.
For example, suppose the image area in the upper right corner of the second commodity image is matched with the first sub-image in the upper left corner of the first commodity image, and the similarity between them is 0.9. This match is obviously incorrect, so if the value 0.9 participated in calculating the first similarity, a calculation error would be introduced. With formula (3), since the IOU between the upper-left first sub-image and the upper-right image area is 0, the value 0.9 effectively does not participate in the calculation of the first similarity, which reduces the error and improves the accuracy of the first similarity.
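The weighted sums of formulas (1) and (3) can be sketched in a single helper; passing per-sub-image overlap degrees activates the formula (3) behavior, and omitting them reduces to formula (1) (the function signature is a choice of this example):

```python
from typing import Optional, Sequence

def first_similarity(sims: Sequence[float],
                     weights: Sequence[float],
                     ious: Optional[Sequence[float]] = None) -> float:
    """Formula (1) when ious is None, formula (3) otherwise: each
    per-sub-image similarity s_i is scaled by its weight alpha_i and,
    for formula (3), by its overlap degree IOU_i, then summed."""
    if ious is None:
        ious = [1.0] * len(sims)  # formula (1): no IOU attenuation
    return sum(a * u * s for s, a, u in zip(sims, weights, ious))
```

With sims = [0.9, 0.5], weights = [0.9, 0.4] and ious = [1.0, 0.0], the second term drops out entirely, illustrating how a zero IOU removes an implausible match from the calculation.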
In an example of the embodiment of the present application, in order to further improve the accuracy of the determined similarity between the first commodity image and the second commodity image, the first commodity image may be divided in different manners, the similarity between the two images may be calculated under each division manner to obtain a plurality of similarities, and the final similarity between the first commodity image and the second commodity image may then be determined from the plurality of similarities.
In an example, after determining that the first similarity between the first commodity image and the second commodity image is obtained, the processing method for a commodity image provided by the embodiment of the present application may further include S201 to S205 shown in fig. 3, and fig. 3 is a flow chart of another processing method for a commodity image provided by the embodiment of the present application.
S201: dividing the first commodity image into c×d sub-images in a c×d manner.
In the embodiment of the present application, if the first commodity image was divided into a×b sub-images in an average division manner when S102 was executed, then c×d is a natural number greater than 1, and c and d are both natural numbers, with a not equal to c and/or b not equal to d.
In the embodiment of the present application, the manner of dividing the first commodity image in S201 differs from the manner described in S102. Therefore, a, b, c and d need to satisfy the condition that a is not equal to c and/or b is not equal to d.
In the embodiment of the present application, c and d may be the same or different, and the embodiment of the present application is not particularly limited.
S202: for each sub-image of the c×d sub-images, respectively determining an image area in the second commodity image that matches each sub-image.
S203: obtaining the similarity between each of the c×d sub-images and the determined image area in the second commodity image that matches each of the c×d sub-images.
S204: determining a second similarity between the first commodity image and the second commodity image according to the similarity between each of the c×d sub-images and its determined matched image area in the second commodity image, and the weight of each of the c×d sub-images.
Regarding the weight of each of the c×d sub-images, it should be noted that when c×d is equal to 2, the two sub-images obtained by dividing occupy equivalent relative positions in the first commodity image, and no distinction between a center area and an edge area can be drawn. Likewise, when c is equal to 2 and d is equal to 2, the four sub-images obtained by dividing occupy equivalent relative positions, and again no such distinction can be drawn. Thus, if both c and d are 2, or c×d is equal to 2, the weights of the c×d sub-images are all the same. Otherwise, the weight of a sub-image located in the central area of the first commodity image among the c×d sub-images is higher than the weight of a sub-image located in the edge area.
Regarding S202-S204, the implementation principle is similar to that of S103-S105 above, and specific implementation is referred to the description section of S103-S105 above, and will not be described in detail here.
S205: and determining a third similarity of the first commodity image and the second commodity image according to the first similarity and the second similarity.
After the first similarity and the second similarity are calculated, a third similarity between the first commodity image and the second commodity image may be determined according to the first similarity and the second similarity. In one example, the sum of the first similarity and the second similarity may be directly determined as the third similarity; in yet another example, an average value of the first similarity and the second similarity may be determined as the third similarity, which is not specifically limited in the embodiment of the present application.
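The two combination options described in S205 can be sketched as follows; the `mode` parameter name is a choice of this example, not terminology from the embodiment:

```python
def third_similarity(first: float, second: float, mode: str = "mean") -> float:
    """S205: combine the similarities obtained under two division schemes.
    The embodiment allows either the direct sum or the average of the
    first and second similarities; 'mode' selects which."""
    if mode == "sum":
        return first + second
    return (first + second) / 2.0
```

When the sum is used, any threshold comparison (such as the first threshold mentioned below) would need to be scaled accordingly, since the sum can exceed 1.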
Accordingly, in one example, after the third similarity is calculated, the third similarity may be used to determine that the merchandise included in the first merchandise image and the merchandise included in the second merchandise image are the same or similar merchandise. In one implementation, if the third similarity is higher than the first threshold, the commodity included in the first commodity image and the commodity included in the second commodity image may be considered to be the same or similar commodity. The first threshold mentioned herein may be set according to practical situations, and embodiments of the present application are not specifically limited.
Exemplary apparatus
Based on the commodity image processing method provided by the embodiment, the embodiment of the application also provides a device, and the device is described below with reference to the accompanying drawings.
Referring to fig. 4, the schematic structure of a commodity image processing apparatus according to an embodiment of the present application is shown. The apparatus 400 may specifically include, for example: a first acquisition unit 401, a first division unit 402, a first determination unit 403, a second acquisition unit 404, and a second determination unit 405.
A first acquiring unit 401, configured to acquire a first commodity image and a second commodity image, where the image sizes of the first commodity image and the second commodity image are the same;
a first dividing unit 402 for dividing the first commodity image into at least two sub-images;
A first determining unit 403, configured to determine, for each of the at least two sub-images, an image area in the second commodity image that matches the each sub-image;
A second obtaining unit 404, configured to obtain a similarity between the each sub-image and the determined image area in the second commodity image, where the image area matches the each sub-image;
A second determining unit 405, configured to determine a first similarity between the first commodity image and the second commodity image according to a similarity between the each sub-image and the determined image region matching the each sub-image in the second commodity image, and a weight of the each sub-image; wherein:
The weight of the sub-image located in the center area of the first commodity image is higher than the weight of the sub-image located in the edge area of the first commodity image.
In one implementation, the first dividing unit 402 is configured to:
Dividing the first commodity image into a x b sub-images according to a x b mode, wherein a and b are natural numbers, a x b is a natural number larger than 2, and a and b are not equal to 2 at the same time.
In an implementation, the at least two sub-images include a first sub-image, and the first determining unit 403 is configured to:
And determining an image area with the highest matching degree with the first sub-image in the second commodity image as an image area matched with the first sub-image.
In one implementation, the apparatus further comprises:
a first calculating unit configured to calculate a degree of overlap between each of the sub-images and the determined image region matching each of the sub-images, respectively;
the second determining unit 405 is configured to:
And determining the first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image area matching each sub-image in the second commodity image, the overlap degree between each sub-image and the determined matched image area, and the weight of each sub-image.
In one implementation, the apparatus further comprises:
The second dividing unit is used for dividing the first commodity image into c x d sub-images according to a mode of c x d, wherein c x d is a natural number larger than 1, and c and d are both natural numbers; a is not equal to c and/or b is not equal to d;
A third determining unit, configured to determine, for each of the c×d sub-images, an image area in the second commodity image that matches with the each sub-image;
A third obtaining unit, configured to obtain a similarity between each of the c×d sub-images and the determined image area in the second commodity image, where the similarity matches each of the c×d sub-images;
A fourth determining unit, configured to determine a second similarity between the first commodity image and the second commodity image according to the similarity between each of the c×d sub-images and the determined image area matching each of the c×d sub-images in the second commodity image, and the weight of each of the c×d sub-images;
And a fifth determining unit configured to determine a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
In one implementation, if both c and d are 2, or c×d is equal to 2, then the weights of the c×d sub-images are all the same; if c×d is greater than 2 and c and d are not both equal to 2, the weight of a sub-image located in the central area of the first commodity image among the c×d sub-images is higher than the weight of a sub-image located in the edge area.
In one implementation, the apparatus further comprises:
a sixth determining unit configured to determine that a commodity included in the first commodity image and a commodity included in the second commodity image are the same or similar commodity when a first similarity of the first commodity image and the second commodity image is higher than a first threshold.
Since the apparatus 400 is an apparatus corresponding to the method provided in the above method embodiment, the specific implementation of each unit of the apparatus 400 is the same as the above method embodiment, and therefore, with respect to the specific implementation of each unit of the apparatus 400, reference may be made to the description part of the above method embodiment, and details are not repeated herein.
The method provided by the embodiment of the application can be executed by the client or the server, and the client and the server for executing the method are respectively described below.
Fig. 5 shows a block diagram of a client 500. For example, client 500 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 5, a client 500 may include one or more of the following components: a processing component 502, a memory 504, a power supply component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the client 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
Memory 504 is configured to store various types of data to support operations at client 500. Examples of such data include instructions for any application or method operating on client 500, contact data, phonebook data, messages, pictures, video, and the like. The memory 504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 506 provides power to the various components of the client 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the client 500.
The multimedia component 508 includes a screen between the client 500 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. When the client 500 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the client 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further comprises a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a homepage button, volume buttons, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing status assessments of various aspects of the client 500. For example, the sensor component 514 may detect the on/off state of the client 500 and the relative positioning of components, such as the display and keypad of the client 500; the sensor component 514 may also detect a change in position of the client 500 or a component of the client 500, the presence or absence of user contact with the client 500, the orientation or acceleration/deceleration of the client 500, and a change in temperature of the client 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communication between the client 500 and other devices, either wired or wireless. The client 500 may access a wireless network based on a communication standard, such as WiFi, 2G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In one exemplary embodiment, the client 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the following methods:
Acquiring a first commodity image and a second commodity image, wherein the image sizes of the first commodity image and the second commodity image are the same;
dividing the first merchandise image into at least two sub-images;
for each sub-image in the at least two sub-images, respectively determining an image area matched with each sub-image in the second commodity image;
Acquiring the similarity of each sub-image and the image area matched with each sub-image in the determined second commodity image;
Determining a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image area matching each sub-image in the second commodity image, and the weight of each sub-image; wherein:
The weight of the sub-image located in the center area of the first commodity image is higher than the weight of the sub-image located in the edge area of the first commodity image.
In one implementation, the dividing the first merchandise image into at least two sub-images includes:
Dividing the first commodity image into a x b sub-images according to a x b mode, wherein a and b are natural numbers, a x b is a natural number larger than 2, and a and b are not equal to 2 at the same time.
In one implementation, the at least two sub-images include a first sub-image, and the determining an image area in the second merchandise image that matches the first sub-image includes:
determining the image area in the second commodity image that has the highest matching degree with the first sub-image as the image area matched with the first sub-image.
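One way to realize "highest matching degree" is an exhaustive sliding-window search over the second image; the sum-of-squared-differences criterion below is an assumption for illustration, not a criterion the patent names:

```python
import numpy as np

def best_match_region(template, target):
    """Find the (top, left) position in `target` whose window is most
    similar to `template`, using sum of squared differences (lower is
    better) as an assumed matching criterion."""
    th, tw = template.shape
    H, W = target.shape
    best, best_pos = None, (0, 0)
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            ssd = float(np.sum((target[i:i + th, j:j + tw] - template) ** 2))
            if best is None or ssd < best:
                best, best_pos = ssd, (i, j)
    return best_pos
```

In practice a correlation-based matcher would be used instead of this brute-force loop, but the return value (the position of the best-matching region) is what the subsequent overlap computation needs.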
In one implementation, the method further comprises:
calculating, for each sub-image, the overlapping degree between the sub-image and its matched image area;
the determining a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image and the weight of each sub-image includes:
determining a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image, the overlapping degree between each sub-image and its matched image area in the second commodity image, and the weight of each sub-image.
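A hedged sketch of these two steps: the overlapping degree is realized here as the IoU of two equal-size rectangles, so a matched region sitting exactly at the sub-image's own position scores 1.0 and the score shrinks as the match drifts away (consistent with the claim that closer position means larger overlap); combining the three factors by multiplying similarity and overlap under the weight is one plausible choice, since the patent leaves the exact formula open:

```python
import numpy as np

def overlap_degree(pos_sub, pos_match, h, w):
    """IoU of two h x w rectangles given their (top, left) positions:
    1.0 when the matched region is exactly where the sub-image sits,
    decreasing toward 0 as the positions diverge."""
    dy = abs(pos_sub[0] - pos_match[0])
    dx = abs(pos_sub[1] - pos_match[1])
    inter = max(0, h - dy) * max(0, w - dx)
    union = 2 * h * w - inter
    return inter / union

def first_similarity(similarities, overlaps, weights):
    """Weighted combination over sub-images: sum of
    weight * similarity * overlap (one assumed combination rule)."""
    s = np.asarray(similarities, dtype=float)
    o = np.asarray(overlaps, dtype=float)
    wt = np.asarray(weights, dtype=float)
    return float(np.sum(wt * s * o))
```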
In one implementation, the method further comprises:
dividing the first commodity image into c×d sub-images in a c×d pattern, wherein c×d is a natural number greater than 1, and c and d are both natural numbers; a is not equal to c and/or b is not equal to d;
for each sub-image of the c×d sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring the similarity between each of the c×d sub-images and its matched image area in the second commodity image;
determining a second similarity between the first commodity image and the second commodity image according to the similarity between each of the c×d sub-images and its matched image area in the second commodity image and the weight of each of the c×d sub-images;
determining a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
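The two grid resolutions yield two scores that are then merged into the final third similarity. The patent does not fix how the two are combined; a weighted average, sketched below, is one simple choice (the mixing factor `alpha` is an assumption):

```python
def third_similarity(first, second, alpha=0.5):
    """Merge the coarse-grid (first) and fine-grid (second) scores.
    A weighted average is one plausible combination; alpha controls how
    much the first-grid score dominates."""
    return alpha * first + (1.0 - alpha) * second
```

Computing the similarity at two different grid resolutions and merging them makes the final score less sensitive to where the grid lines happen to fall on the commodity.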
In one implementation, if both c and d are 2, or c×d is equal to 2, the weights of the c×d sub-images are all the same; if c×d is greater than 2 and c and d are not both equal to 2, the weight of a sub-image located in the center area of the first commodity image among the c×d sub-images is higher than the weight of a sub-image located in the edge area of the first commodity image among the c×d sub-images.
In one implementation, the method further comprises:
when the first similarity between the first commodity image and the second commodity image is higher than a first threshold, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities.
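The final decision step reduces to a threshold comparison; the threshold value below is an assumed placeholder, since the patent does not specify one:

```python
def same_or_similar(first_sim, threshold=0.8):
    """Decision step: if the first similarity exceeds the (assumed)
    threshold, treat the two images as showing the same or a similar
    commodity."""
    return first_sim > threshold
```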
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application. The server 600 may vary considerably in configuration or performance and may include one or more central processing units (CPUs) 622 (e.g., one or more processors), memory 632, and one or more storage media 630 (e.g., one or more mass storage devices) that store applications 642 or data 644. The memory 632 and the storage media 630 may be transitory or persistent storage. A program stored on a storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Still further, the central processor 622 may be configured to communicate with a storage medium 630 and execute, on the server 600, the series of instruction operations in the storage medium 630.
Still further, in one example, the central processor 622 may perform the following method:
acquiring a first commodity image and a second commodity image, wherein the first commodity image and the second commodity image have the same image size;
dividing the first commodity image into at least two sub-images;
for each sub-image of the at least two sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring the similarity between each sub-image and its matched image area in the second commodity image;
determining a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image and the weight of each sub-image; wherein:
the weight of a sub-image located in the center area of the first commodity image is higher than the weight of a sub-image located in the edge area of the first commodity image.
In one implementation, the dividing the first merchandise image into at least two sub-images includes:
dividing the first commodity image into a×b sub-images in an a×b pattern, wherein a and b are natural numbers, a×b is a natural number greater than 2, and a and b are not both equal to 2.
In one implementation, the at least two sub-images include a first sub-image, and the determining an image area in the second merchandise image that matches the first sub-image includes:
determining the image area in the second commodity image that has the highest matching degree with the first sub-image as the image area matched with the first sub-image.
In one implementation, the method further comprises:
calculating, for each sub-image, the overlapping degree between the sub-image and its matched image area;
the determining a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image and the weight of each sub-image includes:
determining a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image, the overlapping degree between each sub-image and its matched image area in the second commodity image, and the weight of each sub-image.
In one implementation, the method further comprises:
dividing the first commodity image into c×d sub-images in a c×d pattern, wherein c×d is a natural number greater than 1, and c and d are both natural numbers; a is not equal to c and/or b is not equal to d;
for each sub-image of the c×d sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring the similarity between each of the c×d sub-images and its matched image area in the second commodity image;
determining a second similarity between the first commodity image and the second commodity image according to the similarity between each of the c×d sub-images and its matched image area in the second commodity image and the weight of each of the c×d sub-images;
determining a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
In one implementation, if both c and d are 2, or c×d is equal to 2, the weights of the c×d sub-images are all the same; if c×d is greater than 2 and c and d are not both equal to 2, the weight of a sub-image located in the center area of the first commodity image among the c×d sub-images is higher than the weight of a sub-image located in the edge area of the first commodity image among the c×d sub-images.
In one implementation, the method further comprises:
when the first similarity between the first commodity image and the second commodity image is higher than a first threshold, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities.
The server 600 may also include one or more power supplies 626, one or more wired or wireless network interfaces 650, one or more input/output interfaces 656, one or more keyboards 656, and/or one or more operating systems 641, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The present application also provides a computer readable medium having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the method of processing merchandise images provided by the method embodiment above.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings and described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The foregoing description of the preferred embodiments of the application is not intended to limit the application to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the application are intended to be included within the scope of the application.

Claims (16)

1. A method of processing an image of a commodity, the method comprising:
acquiring a first commodity image and a second commodity image, wherein the first commodity image and the second commodity image have the same image size;
dividing the first commodity image into at least two sub-images, specifically including: dividing the first commodity image into a×b sub-images in an a×b pattern, wherein a and b are natural numbers, a×b is a natural number greater than 2, and a and b are not both equal to 2; the at least two sub-images include a first sub-image;
for each sub-image of the at least two sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring the similarity between each sub-image and its matched image area in the second commodity image;
calculating, for each sub-image, the overlapping degree between the sub-image and its matched image area in the second commodity image;
determining a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image, the overlapping degree between each sub-image and its matched image area in the second commodity image, and the weight of each sub-image; wherein:
the closer the image area matched with the first sub-image in the second commodity image is to the position of the first sub-image in the first commodity image, the larger the overlapping degree, and the farther it is, the smaller the overlapping degree; the weight of a sub-image located in the center area of the first commodity image is higher than the weight of a sub-image located in the edge area of the first commodity image.
2. The method of claim 1, wherein the determining the image region in the second merchandise image that matches the first sub-image comprises:
determining the image area in the second commodity image that has the highest matching degree with the first sub-image as the image area matched with the first sub-image.
3. The method according to claim 1, wherein the method further comprises:
dividing the first commodity image into c×d sub-images in a c×d pattern, wherein c×d is a natural number greater than 1, and c and d are both natural numbers; a is not equal to c and/or b is not equal to d;
for each sub-image of the c×d sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring the similarity between each of the c×d sub-images and its matched image area in the second commodity image;
determining a second similarity between the first commodity image and the second commodity image according to the similarity between each of the c×d sub-images and its matched image area in the second commodity image and the weight of each of the c×d sub-images;
determining a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
4. A method according to claim 3, wherein if both c and d are 2, or c×d is equal to 2, the weights of the c×d sub-images are all the same; if c×d is greater than 2 and c and d are not both equal to 2, the weight of a sub-image located in the center area of the first commodity image among the c×d sub-images is higher than the weight of a sub-image located in the edge area of the first commodity image among the c×d sub-images.
5. The method according to claim 1, wherein the method further comprises:
when the first similarity between the first commodity image and the second commodity image is higher than a first threshold, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities.
6. A commodity image processing apparatus, the apparatus comprising:
a first acquisition unit configured to acquire a first commodity image and a second commodity image, the first commodity image and the second commodity image having the same image size;
a first dividing unit configured to divide the first commodity image into at least two sub-images, specifically including: dividing the first commodity image into a×b sub-images in an a×b pattern, wherein a and b are natural numbers, a×b is a natural number greater than 2, and a and b are not both equal to 2; the at least two sub-images include a first sub-image;
a first determining unit configured to determine, for each sub-image of the at least two sub-images, an image area in the second commodity image that matches the sub-image;
a second obtaining unit configured to obtain the similarity between each sub-image and its matched image area in the second commodity image;
a first calculating unit configured to calculate, for each sub-image, the overlapping degree between the sub-image and its matched image area in the second commodity image;
a second determining unit configured to determine a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image, the overlapping degree between each sub-image and its matched image area in the second commodity image, and the weight of each sub-image; wherein:
the closer the image area matched with the first sub-image in the second commodity image is to the position of the first sub-image in the first commodity image, the larger the overlapping degree, and the farther it is, the smaller the overlapping degree; the weight of a sub-image located in the center area of the first commodity image is higher than the weight of a sub-image located in the edge area of the first commodity image.
7. The apparatus according to claim 6, wherein the first determining unit is configured to:
determine the image area in the second commodity image that has the highest matching degree with the first sub-image as the image area matched with the first sub-image.
8. The apparatus of claim 6, wherein the apparatus further comprises:
a second dividing unit configured to divide the first commodity image into c×d sub-images in a c×d pattern, wherein c×d is a natural number greater than 1, and c and d are both natural numbers; a is not equal to c and/or b is not equal to d;
a third determining unit configured to determine, for each sub-image of the c×d sub-images, an image area in the second commodity image that matches the sub-image;
a third obtaining unit configured to obtain the similarity between each of the c×d sub-images and its matched image area in the second commodity image;
a fourth determining unit configured to determine a second similarity between the first commodity image and the second commodity image according to the similarity between each of the c×d sub-images and its matched image area in the second commodity image and the weight of each of the c×d sub-images;
a fifth determining unit configured to determine a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
9. The apparatus of claim 8, wherein if both c and d are 2, or c×d is equal to 2, the weights of the c×d sub-images are all the same; if c×d is greater than 2 and c and d are not both equal to 2, the weight of a sub-image located in the center area of the first commodity image among the c×d sub-images is higher than the weight of a sub-image located in the edge area of the first commodity image among the c×d sub-images.
10. The apparatus of claim 6, wherein the apparatus further comprises:
a sixth determining unit configured to determine, when the first similarity between the first commodity image and the second commodity image is higher than a first threshold, that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities.
11. A commodity image processing apparatus comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
acquiring a first commodity image and a second commodity image, wherein the first commodity image and the second commodity image have the same image size;
dividing the first commodity image into at least two sub-images, specifically including: dividing the first commodity image into a×b sub-images in an a×b pattern, wherein a and b are natural numbers, a×b is a natural number greater than 2, and a and b are not both equal to 2; the at least two sub-images include a first sub-image;
for each sub-image of the at least two sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring the similarity between each sub-image and its matched image area in the second commodity image;
calculating, for each sub-image, the overlapping degree between the sub-image and its matched image area in the second commodity image;
determining a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and its matched image area in the second commodity image, the overlapping degree between each sub-image and its matched image area in the second commodity image, and the weight of each sub-image; wherein:
the closer the image area matched with the first sub-image in the second commodity image is to the position of the first sub-image in the first commodity image, the larger the overlapping degree, and the farther it is, the smaller the overlapping degree; the weight of a sub-image located in the center area of the first commodity image is higher than the weight of a sub-image located in the edge area of the first commodity image.
12. The apparatus of claim 11, wherein the means for determining the image region in the second merchandise image that matches the first sub-image comprises:
determining the image area in the second commodity image that has the highest matching degree with the first sub-image as the image area matched with the first sub-image.
13. The apparatus of claim 11, wherein the operations further comprise:
dividing the first commodity image into c×d sub-images in a c×d pattern, wherein c×d is a natural number greater than 1, and c and d are both natural numbers; a is not equal to c and/or b is not equal to d;
for each sub-image of the c×d sub-images, determining an image area in the second commodity image that matches the sub-image;
acquiring the similarity between each of the c×d sub-images and its matched image area in the second commodity image;
determining a second similarity between the first commodity image and the second commodity image according to the similarity between each of the c×d sub-images and its matched image area in the second commodity image and the weight of each of the c×d sub-images;
determining a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
14. The apparatus of claim 13, wherein if both c and d are 2, or c×d is equal to 2, the weights of the c×d sub-images are all the same; if c×d is greater than 2 and c and d are not both equal to 2, the weight of a sub-image located in the center area of the first commodity image among the c×d sub-images is higher than the weight of a sub-image located in the edge area of the first commodity image among the c×d sub-images.
15. The apparatus of claim 11, wherein the operations further comprise:
when the first similarity between the first commodity image and the second commodity image is higher than a first threshold, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities.
16. A computer readable medium having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the method of any of claims 1 to 5.
CN202011059920.2A 2020-09-30 Commodity image processing method and device Active CN112200783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011059920.2A CN112200783B (en) 2020-09-30 Commodity image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011059920.2A CN112200783B (en) 2020-09-30 Commodity image processing method and device

Publications (2)

Publication Number Publication Date
CN112200783A CN112200783A (en) 2021-01-08
CN112200783B true CN112200783B (en) 2024-06-28


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615042A (en) * 2016-12-09 2018-10-02 炬芯(珠海)科技有限公司 The method and apparatus and player of video format identification
CN110135517A (en) * 2019-05-24 2019-08-16 北京百度网讯科技有限公司 For obtaining the method and device of vehicle similarity

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615042A (en) * 2016-12-09 2018-10-02 炬芯(珠海)科技有限公司 The method and apparatus and player of video format identification
CN110135517A (en) * 2019-05-24 2019-08-16 北京百度网讯科技有限公司 For obtaining the method and device of vehicle similarity

Similar Documents

Publication Publication Date Title
CN110688951B (en) Image processing method and device, electronic equipment and storage medium
CN106651955B (en) Method and device for positioning target object in picture
JP6134446B2 (en) Image division method, image division apparatus, image division device, program, and recording medium
US10452890B2 (en) Fingerprint template input method, device and medium
EP3188094A1 (en) Method and device for classification model training
EP3057304B1 (en) Method and apparatus for generating image filter
CN110647834A (en) Human face and human hand correlation detection method and device, electronic equipment and storage medium
US9924226B2 (en) Method and device for processing identification of video file
US20210224592A1 (en) Method and device for training image recognition model, and storage medium
CN107944367B (en) Face key point detection method and device
CN107464253B (en) Eyebrow positioning method and device
CN106485567B (en) Article recommendation method and device
US11749273B2 (en) Speech control method, terminal device, and storage medium
US10248855B2 (en) Method and apparatus for identifying gesture
US10083346B2 (en) Method and apparatus for providing contact card
CN106909893B (en) Fingerprint identification method and device
CN108009563B (en) Image processing method and device and terminal
US10297015B2 (en) Method, device and computer-readable medium for identifying feature of image
US20150317800A1 (en) Method and device for image segmentation
US10229165B2 (en) Method and device for presenting tasks
US9665925B2 (en) Method and terminal device for retargeting images
WO2019105135A1 (en) Method, apparatus, and device for switching user interface
CN111753539B (en) Method and device for identifying sensitive text
CN112200783B (en) Commodity image processing method and device
CN112200783A (en) Commodity image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant