CN109740459B - Image difference comparison method and system and unmanned vending device


Info

Publication number
CN109740459B
CN109740459B (application CN201811570474.4A)
Authority
CN
China
Prior art keywords: image, difference, position information, area, region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811570474.4A
Other languages
Chinese (zh)
Other versions
CN109740459A (en)
Inventor
张发恩
秦永强
吴佳洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ainnovation Hefei Technology Co ltd
Original Assignee
Ainnovation Hefei Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ainnovation Hefei Technology Co ltd
Priority to CN201811570474.4A
Publication of CN109740459A
Application granted
Publication of CN109740459B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an image difference comparison method and system and an unmanned vending device. The method comprises the following steps: providing a first image and a second image to be compared for differences; obtaining the object position information and category information in the first image and the second image by using a detection and classification algorithm; obtaining the difference region between the first image and the second image by using an image processing method; intersecting the object position information in the first image and the second image with the difference region to obtain a key attention area; and performing fine-grained classification on the objects contained in the key attention area, combining it with the object category information to obtain the object categories in the key attention area. The comparison method improves the accuracy of recognizing object changes, reduces the image area the neural network must process, and lightens the training burden of the neural network.

Description

Image difference comparison method and system and unmanned vending device
[ technical field ]
The invention belongs to the field of image recognition, and relates to an image difference comparison method and system and an unmanned vending device.
[ background of the invention ]
In the field of unmanned vending, image recognition technology is commonly used to compare images taken before and after items move; the resulting difference region reveals the sale situation of the items.
In existing image difference comparison methods, when the two images contain both identical and differing objects, the difference is computed by directly processing the entire content of both images. This whole-image scheme requires a large amount of computation, places a heavy burden on the processing system, and reduces the efficiency of the comparison.
A common remedy is therefore to locate the difference region between the two images with an image processing method, narrowing the search space before identifying the object differences. However, because the object differences include those caused by object movement, object absence and object addition, differences not actually caused by missing objects are often recognized as such, producing false results.
[ summary of the invention ]
To overcome the inaccurate difference comparison results of conventional image difference comparison methods, the invention provides an image difference comparison method, an image difference comparison system and an unmanned vending device.
The technical scheme for solving the technical problem of the invention is to provide an image difference comparison method, which comprises the following steps:
step S1: providing a first image and a second image to be subjected to difference comparison;
step S2: calculating object position information and category information in the first image and the second image by using a detection and classification algorithm;
step S3: obtaining the difference region between the first image and the second image by using an image processing method, wherein the difference region is the region in which object changes and missing objects are located by means of the object position information;
step S4: intersecting the object position information in the first image and the second image with the difference region to obtain a key attention area; the differences caused by missing objects appear in this intersection, so the key attention area is the difference caused by missing objects; and
step S5: performing fine-grained classification on the objects contained in the key attention area to further combine the class information of the objects to obtain the class of the objects in the key attention area;
step S6: checking the result of the step S5 with the overall classification, and carrying out confidence test on the result;
the step S4 specifically includes the following steps:
step S41: superposing the position information of the first image and the difference area of the first image to obtain the position information of an object in the difference area of the first image; superposing the position information of the second image with the difference area of the second image to obtain the position information of the object in the difference area of the second image;
step S42: intersecting the position information of the object within the difference region of the first image obtained in step S41 with the position information of the object within the difference region of the second image to obtain the key region of interest.
Preferably, the step S3 specifically includes the following steps: dividing the first image into an R1 map, a G1 map and a B1 map, and the second image into an R2 map, a G2 map and a B2 map, by using RGB channels; and obtaining the difference region between the first image and the second image by comparing the R1 map with the R2 map, the G1 map with the G2 map, and the B1 map with the B2 map, respectively.
Preferably, after the step S4 is performed, the two images are subjected to denoising processing to reduce the image difference due to noise before the step S5 is performed.
Preferably, after the object category in the key attention area is obtained, the object category in the key attention area is verified to detect the accuracy of the result;
the step S6 specifically includes the following steps:
step S61: calculating the difference value between the weight of all objects in the first image and the weight of all objects in the second image;
step S62: judging whether the difference value is within a preset weight value range corresponding to the variable object, if so, outputting to step S63, and if not, outputting to step S64;
step S63: outputting a final result of the difference comparison;
step S64: returning to step S1, the difference comparison is performed again.
The present invention further provides an image difference comparison system for solving the above technical problem, comprising: an image acquisition unit for acquiring a first image and a second image to be compared for differences; an image processing unit for obtaining the object position information and category information in the first image and the second image by using a detection and classification algorithm; a first difference obtaining unit for obtaining the difference region between the first image and the second image by using an image processing method, the difference region being the region in which object changes and missing objects are located by means of the object position information; a second difference obtaining unit for intersecting the object position information in the first image and the second image with the difference region to obtain a key attention area, the differences caused by missing objects appearing in this intersection, so that the key attention area is the difference caused by missing objects; and an image recognition unit for performing fine-grained classification on the objects contained in the key attention area and combining the object category information to obtain the object categories in the key attention area. The second difference obtaining unit obtains the key attention area as follows: superposing the position information of the first image on the difference region of the first image to obtain the position information of the objects within the difference region of the first image; superposing the position information of the second image on the difference region of the second image to obtain the position information of the objects within the difference region of the second image; and intersecting the position information of the objects within the difference region of the first image with the position information of the objects within the difference region of the second image to obtain the key attention area.
Preferably, the first difference obtaining unit includes: a color separation unit for acquiring images of the first and second images obtained through the R, G, and B channels, respectively, dividing the first image into an R1 diagram, a G1 diagram, and a B1 diagram, and dividing the second image into an R2 diagram, a G2 diagram, and a B2 diagram; and the image comparison unit is used for comparing the results of the color separation unit to obtain a difference area in the first image and the second image.
Preferably, the image processing unit further includes: and the denoising unit is used for denoising the two images after the key attention area is obtained for the first image and the second image so as to reduce the image difference caused by noise.
In order to solve the above technical problem, the present invention further provides an unmanned vending apparatus, comprising: a plurality of cameras capable of photographing objects in the unmanned vending apparatus; an image acquisition module for acquiring a plurality of images of the articles in the unmanned vending apparatus, the images corresponding to the first image and the second image before and after the articles change; and an image processing module for executing the image difference comparison method.
Preferably, the unmanned vending apparatus further comprises: the weight checking module is used for solving the difference value between the weight of the object in the first image and the weight of the object in the second image, judging whether the difference value is within the preset weight range of the object of the type, and if so, outputting the object type of the key attention area; if not, difference comparison is carried out.
In summary, conventional image difference comparison methods use image processing to find the difference region between two images and then derive the object differences, but that difference region may mix differences caused by object stacking, movement and absence, which leads to errors. The method of the present invention finds the difference region between the two images and intersects it with the object position information, thereby accurately locating the key attention area of missing objects, i.e., the number and positions of the objects taken. Difference regions produced by object movement or stacking are filtered out, improving the accuracy of recognizing object changes. The object categories in the key attention area are then obtained by neural-network fine-grained classification, which reduces the computation the fine-grained classification requires and improves the speed and accuracy of recognizing and classifying the objects in the key attention area.
[ description of the drawings ]
Fig. 1 is an overall flowchart of an image difference comparison method according to a first embodiment of the present invention;
fig. 2 is a detailed flowchart of step S2 of an image difference comparison method according to a first embodiment of the present invention;
fig. 3 is a detailed flowchart of step S3 of an image difference comparison method according to a first embodiment of the present invention;
fig. 4 is a detailed flowchart of step S4 of an image difference comparison method according to a first embodiment of the present invention;
fig. 5 is a detailed flowchart of step S5 of an image difference comparison method according to a first embodiment of the present invention;
fig. 6 is a detailed flowchart of step S6 of an image difference comparison method according to a first embodiment of the present invention;
FIG. 7 is a detailed flowchart of step S33 in FIG. 3;
FIG. 8 is a block diagram of an image difference comparison system according to a second embodiment of the present invention;
FIG. 9a is a schematic diagram of a first image in the image difference comparison system according to the present invention;
FIG. 9b is a schematic diagram of a second image in the image difference comparison system according to the present invention;
FIG. 10 is a block diagram illustrating the image difference comparison method applied to an unmanned vending apparatus according to a third embodiment of the present invention.
The attached drawings indicate the following:
1. an image acquisition unit; 2. an image processing unit; 3. a first difference obtaining unit; 4. a second difference obtaining unit; 5. an image recognition unit;
10. an image acquisition module; 20. an image processing module; 30. a weight check module;
100. a first difference region; 200. a second difference region; 300. a third difference region.
[ detailed description of the embodiments ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, an image difference comparing method can be divided into the following steps:
step S1: a first image and a second image to be differentially compared are provided.
Step S2: and solving the position information and the class information of the object in the first image and the second image by using a detection and classification algorithm.
Step S3: and solving the difference area in the first image and the second image by using an image processing method.
Step S4: and intersecting the position information of the object in the first image and the second image with the difference region to obtain a key attention region.
Step S5: performing fine-grained classification on the objects contained in the key attention area, combining it with the object category information to obtain the object categories in the key attention area. Specifically, before the key attention area is classified at fine granularity, model data of multiple objects in images can be acquired and used to train the neural network, yielding multiple trained models. The key attention area is then analyzed with these trained models to obtain the exact object categories in the key attention area, i.e., the difference comparison result of the two images.
Step S6: verifying the result of step S5 against the overall classification and testing its confidence. In this embodiment, the confidence check may use weight information as auxiliary verification, or other verification methods.
Taking various kinds of beverages as an example, in step S1 the first image and the second image contain carbonated beverages and fruit beverages; the position information and category information of the carbonated beverages and fruit beverages in the two images are then obtained by the detection and classification algorithm.
Referring to fig. 2, step S2, in which the object position information and category information in the first image and the second image are obtained by a detection and classification algorithm, specifically includes steps S21 to S24. It is understood that steps S21-S24 are only one embodiment of this example, and the embodiment is not limited to steps S21-S24.
Step S21: converting the two images to grayscale and binarizing them to obtain binary images.
Step S22: denoising the binary images to reduce recognition errors caused by noise.
Step S23: obtaining, through a preset mathematical model of the object edge contour, the position information of the objects in the binary images that conform to the model. And
step S24: matching the objects to be identified in the binary images against preset object templates by using a template matching algorithm to obtain a coarse classification of the objects.
To explain further, the template matching algorithm selects the best match as the result, yielding the position information of the portions of the image belonging to carbonated beverages and of the portions belonging to fruit juice beverages. At this stage, however, it is not yet known which brand each carbonated beverage or fruit juice beverage belongs to.
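By way of illustration only, steps S21-S24 could be sketched as follows with OpenCV; the template dictionary, the area threshold and the function names are assumptions made for this example, not details taken from the embodiment.

```python
# Hypothetical sketch of steps S21-S24; thresholds and template set are assumed.
import cv2

def detect_and_coarse_classify(image_bgr, templates):
    """Return (bounding box, coarse label) pairs; `templates` maps a coarse
    label such as 'carbonated' or 'juice' to a grayscale template image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)              # S21: graying
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)  # S21: binarize
    binary = cv2.medianBlur(binary, 5)                              # S22: denoise
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,       # S23: edge contours
                                   cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                   # skip tiny regions (assumed threshold)
            continue
        roi = gray[y:y + h, x:x + w]
        best_label, best_score = None, -1.0
        for label, tmpl in templates.items():                       # S24: template match
            resized = cv2.resize(tmpl, (w, h))
            score = cv2.matchTemplate(roi, resized,
                                      cv2.TM_CCOEFF_NORMED)[0][0]
            if score > best_score:
                best_label, best_score = label, score
        results.append(((x, y, w, h), best_label))
    return results
```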
Referring to fig. 3, step S3, in which the difference region between the first image and the second image is obtained by an image processing method, specifically includes steps S31 to S33. It is understood that steps S31-S33 are only one embodiment of this example, and the embodiment is not limited to steps S31-S33.
Step S31: using RGB channels, dividing each image into three single-channel maps. Specifically, the first image is divided into an R1 map, a G1 map and a B1 map, and the second image into an R2 map, a G2 map and a B2 map. After passing through the R channel, the first image becomes the R1 map with a red background color and the second image becomes the R2 map with a red background color; after passing through the G channel, the first image becomes the G1 map with a green background color and the second image becomes the G2 map with a green background color; after passing through the B channel, the first image becomes the B1 map with a blue background color and the second image becomes the B2 map with a blue background color.
Step S32: obtaining the pixel difference regions of the three comparisons by comparing the R1 map with the R2 map, the G1 map with the G2 map, and the B1 map with the B2 map. And
step S33: denoising the pixel difference regions. The denoising may include filter denoising, morphological denoising and empirical noise removal.
Removing the pixel differences caused by noise reduces recognition error and yields the difference region between the first image and the second image. This region, however, still mixes changes such as object movement, object absence and object addition, so further processing of the difference region is required.
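As an illustration of steps S31-S32, the per-channel comparison could look like the following minimal sketch, assuming OpenCV images in BGR channel order; the difference threshold of 30 is an assumed value.

```python
# Sketch of steps S31-S32; `thresh` is an assumed, tunable value.
import cv2
import numpy as np

def channel_difference_region(img1_bgr, img2_bgr, thresh=30):
    b1, g1, r1 = cv2.split(img1_bgr)   # S31: first image  -> B1, G1, R1 maps
    b2, g2, r2 = cv2.split(img2_bgr)   # S31: second image -> B2, G2, R2 maps
    # S32: compare R1 with R2, G1 with G2 and B1 with B2; a pixel belongs to
    # the difference region when any channel differs by more than `thresh`.
    diff = np.maximum(np.maximum(cv2.absdiff(r1, r2), cv2.absdiff(g1, g2)),
                      cv2.absdiff(b1, b2))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask  # binary pixel difference region, to be denoised in step S33
```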
Referring to fig. 4, step S4: intersecting the object position information in the first image and the second image with the difference region to obtain a key attention region, wherein the step S4 specifically includes steps S41 to S42. It is understood that steps S41-S42 are only one embodiment of this example, and the embodiment is not limited to steps S41-S42.
Step S41: superposing the position information of the first image and the difference area of the first image to obtain the position information of an object in the difference area of the first image;
and superposing the position information of the second image and the difference area of the second image to obtain the position information of the object in the difference area of the second image.
Step S42: intersecting the position information of the objects within the difference region of the first image obtained in step S41 with the position information of the objects within the difference region of the second image to obtain the key attention area, which is the difference caused by missing objects. Since the object position information obtained in step S2 corresponds to the object category information, that is, each carbonated beverage and each fruit beverage has its own position information, taking the intersection yields the key attention area.
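One possible reading of steps S41-S42 is sketched below, assuming object positions are axis-aligned bounding boxes and the difference region is a binary mask; the overlap ratio and IoU threshold are illustrative assumptions, and a missing object is taken to be a box that lies in the first image's difference region with no counterpart in the second image's.

```python
# Hypothetical sketch of steps S41-S42; thresholds are assumed values.
def boxes_in_diff_region(boxes, diff_mask, min_overlap=0.3):
    """S41: keep the boxes whose area sufficiently overlaps the binary mask."""
    kept = []
    for (x, y, w, h) in boxes:
        patch = diff_mask[y:y + h, x:x + w]
        if patch.size and (patch > 0).mean() >= min_overlap:
            kept.append((x, y, w, h))
    return kept

def key_attention_areas(boxes1, boxes2, diff_mask, iou_thresh=0.5):
    """S42: boxes in image 1's difference region without a counterpart in
    image 2's, i.e. candidate regions of objects that went missing."""
    in_diff1 = boxes_in_diff_region(boxes1, diff_mask)
    in_diff2 = boxes_in_diff_region(boxes2, diff_mask)

    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        return inter / float(aw * ah + bw * bh - inter)

    return [a for a in in_diff1
            if all(iou(a, b) < iou_thresh for b in in_diff2)]
```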
Referring to fig. 5, in step S5 the objects contained in the key attention area are classified at fine granularity, combining the object category information to obtain the object categories in the key attention area. In this embodiment, step S5 is exemplified by a recurrent attention convolutional neural network (RA-CNN), but is not limited to this model. Step S5 specifically includes steps S51 to S55.
Step S51: and inputting the key attention area into a feature extraction network to obtain a feature map corresponding to the key attention area.
Step S52: inputting the feature maps into the three convolutional layers in sequence to obtain attention area information. And
step S53: and matching the obtained attention area information with a preset object model to obtain the category of the object in the key attention area, for example, the carbonated beverage in the key attention area can be confirmed to belong to brand A or brand B after being classified in a fine-grained manner.
Optionally, this embodiment may further retrain the trained model data of the multiple objects to improve the efficiency of object recognition.
Optionally, the trained model data of the multiple objects may be replaced or extended, so that the categories of objects to be recognized can be adjusted as needed.
Performing fine-grained classification only on the key attention area, rather than on the whole image, reduces the area to be classified and lightens the training of the neural network model.
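For illustration, the crop-then-classify flow of steps S51-S53 can be sketched with a generic torchvision backbone standing in for the RA-CNN; the ResNet-18 model, the class count of 10 and the preprocessing are assumptions, and a real system would first train the network on product images as described above.

```python
# Simplified stand-in for steps S51-S53; the backbone and class list are assumed.
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

model = resnet18(num_classes=10)   # assumed: trained beforehand on product images
model.eval()
preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def classify_regions(image_rgb, regions, class_names):
    """Crop each key attention area and classify it at fine granularity."""
    labels = []
    with torch.no_grad():
        for (x, y, w, h) in regions:
            crop = image_rgb[y:y + h, x:x + w]          # key attention area only
            logits = model(preprocess(crop).unsqueeze(0))
            labels.append(class_names[int(logits.argmax(1))])
    return labels
```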
Referring to fig. 6, the result of step S5 is checked against the overall classification, and the confidence level of the result is checked, wherein step S6 specifically includes steps S61 to S64. It is understood that steps S61-S64 are only one embodiment of this example, and the embodiment is not limited to steps S61-S64.
Step S61: and calculating the difference value between the weight of all the objects in the first image and the weight of all the objects in the second image.
Step S62: and judging whether the difference value is within the preset weight value range corresponding to the variable object, if so, outputting to step S63, and if not, outputting to step S64.
Step S63: and outputting the final result of the difference comparison. And
step S64: returning to step S1, the difference comparison is performed again.
Referring to fig. 7, step S33 performs the denoising process on the pixel difference region. In the present embodiment, step S33 is exemplified by median filter denoising, but is not limited thereto. Step S33 specifically includes steps S331 to S332.
Step S331: the pixel difference region is input to a median filter. And
step S332: the median filter takes the data of the difference region and analyzes each point; a pixel whose value deviates strongly from its neighbours is replaced by a value close to the surrounding pixel values (the neighbourhood median), so that isolated noise points are eliminated.
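A minimal sketch of steps S331-S332, assuming the difference region is a single-channel image; the 5x5 window is an assumed kernel size.

```python
# Sketch of steps S331-S332; kernel size 5 is an assumption.
import cv2

def denoise_difference_region(diff_mask):
    # Each output pixel becomes the median of its 5x5 neighbourhood, so
    # isolated noise points are replaced by values close to their surroundings.
    return cv2.medianBlur(diff_mask, 5)
```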
A second embodiment of the present invention provides an image difference comparison system for implementing the above-described image difference comparison method. FIG. 8 is a schematic diagram of the image difference comparison system according to this embodiment. As shown in fig. 8, the image difference comparison system may include: an image acquisition unit 1, an image processing unit 2, a first difference obtaining unit 3, a second difference obtaining unit 4, and an image recognition unit 5.
The image acquiring unit 1 is used for acquiring a first image and a second image to be subjected to difference comparison.
And the image processing unit 2 is used for solving the position information and the class information of the object in the first image and the second image by using a detection and classification algorithm.
A first difference obtaining unit 3, configured to obtain a difference region in the first image and the second image by using an image processing method.
The second difference obtaining unit 4 is used for intersecting the object position information in the first image and the second image with the difference region to obtain the key attention area. And
the image recognition unit 5 is used for performing fine-grained classification on the objects contained in the key attention area and combining the object category information to obtain the object categories in the key attention area.
Specifically, the first difference obtaining unit 3 includes a color separation unit and an image comparison unit. The color separation unit is used for respectively acquiring images of a first image and a second image obtained through an R channel, a G channel and a B channel, namely acquiring an R1 diagram, a G1 diagram and a B1 diagram of the first image, and acquiring an R2 diagram, a G2 diagram and a B2 diagram of the second image.
And the image comparison unit is used for comparing the results of the color separation unit to obtain a difference area in the first image and the second image.
Specifically, referring to fig. 9a and 9b together and taking the changed patterns in the figures as an example, comparing the results of the color separation unit yields the three difference regions shown in fig. 9a (the first image) and fig. 9b (the second image): a first difference region 100, a second difference region 200 and a third difference region 300. Comparing fig. 9a and 9b shows that the patterns in the first difference region 100 and the third difference region 300 have merely shifted, while the pattern in the second difference region 200 is missing. Intersecting the object position information in the first and second images with the difference regions, i.e., locating the changed regions by means of the object position information, the missing pattern in the second difference region 200 produces a difference in the intersection; the second difference region 200 is therefore the key attention area, so the difference region caused by a missing object is accurately located.
Optionally, the image processing unit 2 further includes a denoising unit, configured to perform denoising processing on the two images after the key attention region is found for the first image and the second image, so as to reduce an image difference caused by noise.
A third embodiment of the present invention provides an unmanned vending apparatus for implementing the above-described image difference comparison method.
Fig. 10 is a schematic view of the unmanned vending apparatus. As shown in fig. 10, the unmanned vending apparatus may include a plurality of cameras that may photograph the contents of the unmanned vending apparatus, an image acquisition module 10, an image processing module 20, and a weight verification module 30.
The image acquisition module 10 is configured to acquire a plurality of images of an article in the automatic vending apparatus, where the images correspond to the first image and the second image before and after the article changes.
The image processing module 20 is configured to execute the steps of the image difference comparison method and obtain the exact position of the missing object in the image and its object category.
A weight check module 30 for calculating a difference between the weight of the object in the first image and the weight of the object in the second image.
When the difference is within the preset weight range for objects of that category, the result is output as true and the object category of the key attention area is output; when the difference is outside the preset weight range for objects of that category, the object category of the key attention area is judged false and the difference comparison is performed again.
Specifically, the weight verification module 30 includes a weight sensor. The sensor records first weight data of the whole device in the state corresponding to the first image and second weight data in the state corresponding to the second image, and the difference between the two is computed. For example, if a bottle of fruit juice weighs 500 g and its preset weight range is 400-600 g, then when that bottle is taken out, the difference between the first weight data and the second weight data falls within 400-600 g and the object category of the key attention area is output as true. Otherwise, the object category of the key attention area is false and the method returns to perform the difference comparison again.
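A small sketch of this weight check; the 500 g bottle and the 400-600 g range come from the example above, while the per-class table and function name are assumptions for illustration.

```python
# Illustrative weight verification; the range table structure is assumed.
WEIGHT_RANGES = {"fruit_juice_500g": (400, 600)}   # grams, per object category

def weight_check(first_weight, second_weight, taken_category):
    """True if the measured weight change matches the reported category."""
    low, high = WEIGHT_RANGES[taken_category]
    delta = abs(first_weight - second_weight)
    return low <= delta <= high

# Example: one 500 g juice bottle removed -> delta of 500 g lies in [400, 600].
assert weight_check(12500, 12000, "fruit_juice_500g")
```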
In summary, conventional image difference comparison methods use image processing to find the difference region between two images and then derive the object differences, but that difference region may mix differences caused by object stacking, movement and absence, which leads to errors. The method of the present invention finds the difference region between the two images and intersects it with the object position information, thereby accurately locating the key attention area of missing objects, i.e., the number and positions of the objects taken. Difference regions produced by object movement or stacking are filtered out, improving the accuracy of recognizing object changes. The object categories in the key attention area are then obtained by neural-network fine-grained classification, which reduces the computation the fine-grained classification requires and improves the speed and accuracy of recognizing and classifying the objects in the key attention area.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. An image difference comparison method is characterized by comprising the following steps:
step S1: providing a first image and a second image to be subjected to difference comparison;
step S2: calculating object position information and category information in the first image and the second image by using a detection and classification algorithm;
step S3: obtaining the difference region between the first image and the second image by using an image processing method, wherein the difference region is the region in which object changes and missing objects are located by means of the object position information;
step S4: intersecting the object position information in the first image and the second image with the difference region to obtain a key attention area; the differences caused by missing objects appear in this intersection, so the key attention area is the difference caused by missing objects; and
step S5: performing fine-grained classification on the objects contained in the key attention area to further combine the class information of the objects to obtain the class of the objects in the key attention area;
step S6: checking the result of the step S5 with the overall classification, and carrying out confidence test on the result;
the step S4 specifically includes the following steps:
step S41: superposing the position information of the first image and the difference area of the first image to obtain the position information of an object in the difference area of the first image; superposing the position information of the second image with the difference area of the second image to obtain the position information of the object in the difference area of the second image;
step S42: intersecting the position information of the object within the difference region of the first image obtained in step S41 with the position information of the object within the difference region of the second image to obtain the key region of interest.
2. An image difference comparison method as claimed in claim 1, characterized in that: the step S3 specifically includes the following steps:
dividing the first image into an R1 map, a G1 map and a B1 map, and the second image into an R2 map, a G2 map and a B2 map by using RGB channels; and
the difference region between the first image and the second image is obtained by comparing the R1 map with the R2 map, the G1 map with the G2 map, and the B1 map with the B2 map, respectively.
3. An image difference comparison method as claimed in claim 1, characterized in that: after step S4 is performed, before step S5 is performed, denoising processing is performed on the two images to reduce image differences due to noise.
4. An image difference comparison method as claimed in claim 1, characterized in that: after the object types in the key attention area are obtained, carrying out weight verification on the object types in the key attention area so as to detect the accuracy of a result;
the step S6 specifically includes the following steps:
step S61: calculating the difference value between the weight of all objects in the first image and the weight of all objects in the second image;
step S62: judging whether the difference value is within a preset weight value range corresponding to the variable object, if so, outputting to step S63, and if not, outputting to step S64;
step S63: outputting a final result of the difference comparison;
step S64: returning to step S1, the difference comparison is performed again.
5. An image difference comparison system, comprising:
the image acquisition unit is used for acquiring a first image and a second image to be subjected to difference comparison;
the image processing unit is used for solving the position information and the category information of the object in the first image and the second image by using a detection and classification algorithm;
a first difference obtaining unit for obtaining the difference region between the first image and the second image by using an image processing method, the difference region being the region in which object changes and missing objects are located by means of the object position information;
a second difference obtaining unit for intersecting the object position information in the first image and the second image with the difference region to obtain a key attention area, the differences caused by missing objects appearing in this intersection, so that the key attention area is the difference caused by missing objects;
the image identification unit is used for performing fine-grained classification on the objects contained in the key attention area so as to further obtain the object category in the key attention area by combining the category information of the objects;
the manner of obtaining the key attention area by the second difference obtaining unit is specifically as follows: superposing the position information of the first image and the difference area of the first image to obtain the position information of an object in the difference area of the first image; superposing the position information of the second image with the difference area of the second image to obtain the position information of the object in the difference area of the second image; and solving the intersection of the position information of the object in the difference area of the first image and the position information of the object in the difference area of the second image to obtain the key attention area.
6. The image difference comparison system as claimed in claim 5, wherein said first difference obtaining unit comprises:
a color separation unit for acquiring images of the first and second images obtained through the R, G, and B channels, respectively, dividing the first image into an R1 diagram, a G1 diagram, and a B1 diagram, and dividing the second image into an R2 diagram, a G2 diagram, and a B2 diagram;
and the image comparison unit is used for comparing the results of the color separation unit to obtain a difference area in the first image and the second image.
7. The image difference comparison system as claimed in claim 6, wherein the image processing unit further comprises:
and the denoising unit is used for denoising the two images after the key attention area is obtained for the first image and the second image so as to reduce the image difference caused by noise.
8. An unmanned vending apparatus, the unmanned vending apparatus comprising:
a plurality of cameras capable of photographing objects in the unmanned vending apparatus;
the image acquisition module is used for acquiring a plurality of images of the articles in the unmanned vending device, wherein the images correspond to the first image and the second image before and after the articles change;
image processing module for performing the image difference comparison method according to any one of claims 1 to 4.
9. The vending apparatus of claim 8, wherein the vending apparatus further comprises:
the weight checking module is used for solving the difference value between the weight of the object in the first image and the weight of the object in the second image, judging whether the difference value is within the preset weight range of the object of the type, and if so, outputting the object type of the key attention area; if not, difference comparison is carried out.
CN201811570474.4A 2018-12-19 2018-12-19 Image difference comparison method and system and unmanned vending device Active CN109740459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811570474.4A CN109740459B (en) 2018-12-19 2018-12-19 Image difference comparison method and system and unmanned vending device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811570474.4A CN109740459B (en) 2018-12-19 2018-12-19 Image difference comparison method and system and unmanned vending device

Publications (2)

Publication Number Publication Date
CN109740459A CN109740459A (en) 2019-05-10
CN109740459B 2021-04-16

Family

ID=66360881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811570474.4A Active CN109740459B (en) 2018-12-19 2018-12-19 Image difference comparison method and system and unmanned vending device

Country Status (1)

Country Link
CN (1) CN109740459B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706204B (en) * 2019-09-07 2022-05-17 创新奇智(合肥)科技有限公司 Commodity invalid detection and judgment scheme based on container door position
CN110991458B (en) * 2019-11-25 2023-05-23 创新奇智(北京)科技有限公司 Image feature-based artificial intelligent recognition result sampling system and sampling method
CN111126264A (en) * 2019-12-24 2020-05-08 北京每日优鲜电子商务有限公司 Image processing method, device, equipment and storage medium
CN111144871B (en) * 2019-12-25 2022-10-14 创新奇智(合肥)科技有限公司 Method for correcting image recognition result based on weight information
CN111210842B (en) * 2019-12-27 2023-04-28 中移(杭州)信息技术有限公司 Voice quality inspection method, device, terminal and computer readable storage medium
CN111369317B (en) * 2020-02-27 2023-08-18 创新奇智(上海)科技有限公司 Order generation method, order generation device, electronic equipment and storage medium
CN113515971A (en) * 2020-04-09 2021-10-19 阿里巴巴集团控股有限公司 Data processing method and system, network system and training method and device thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106557767A (en) * 2016-11-15 2017-04-05 北京唯迈医疗设备有限公司 A kind of method of ROI region in determination interventional imaging
CN108171257A (en) * 2017-12-01 2018-06-15 百度在线网络技术(北京)有限公司 The training of fine granularity image identification model and recognition methods, device and storage medium
CN108416902A (en) * 2018-02-28 2018-08-17 成都果小美网络科技有限公司 Real-time object identification method based on difference identification and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7110590B2 (en) * 1996-07-12 2006-09-19 Tomra Systems Asa Method and return vending machine device for handling empty beverage containers
CN107134053B (en) * 2017-04-19 2019-08-06 石道松 Intelligence is sold goods shops

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106557767A (en) * 2016-11-15 2017-04-05 北京唯迈医疗设备有限公司 A kind of method of ROI region in determination interventional imaging
CN108171257A (en) * 2017-12-01 2018-06-15 百度在线网络技术(北京)有限公司 The training of fine granularity image identification model and recognition methods, device and storage medium
CN108416902A (en) * 2018-02-28 2018-08-17 成都果小美网络科技有限公司 Real-time object identification method based on difference identification and device

Also Published As

Publication number Publication date
CN109740459A (en) 2019-05-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant