CN112001289A - Article detection method and apparatus, storage medium, and electronic apparatus - Google Patents

Article detection method and apparatus, storage medium, and electronic apparatus

Info

Publication number
CN112001289A
CN112001289A (Application CN202010828027.5A / CN202010828027A)
Authority
CN
China
Prior art keywords
image
sub
similarity
target
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010828027.5A
Other languages
Chinese (zh)
Inventor
刘彦甲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haier Uplus Intelligent Technology Beijing Co Ltd
Original Assignee
Haier Uplus Intelligent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haier Uplus Intelligent Technology Beijing Co Ltd filed Critical Haier Uplus Intelligent Technology Beijing Co Ltd
Priority to CN202010828027.5A priority Critical patent/CN112001289A/en
Publication of CN112001289A publication Critical patent/CN112001289A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/36 Indoor scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/68 Food, e.g. fruit or vegetables

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an article detection method and apparatus, a storage medium, and an electronic apparatus, wherein the method comprises the following steps: determining a group of structural similarities between a first image and a second image, each structural similarity in the group being the structural similarity between a sub-image in the first image and the corresponding sub-image in the second image; determining a group of target similarities between the first image and the second image; determining the similarity between each sub-image in the first image and the corresponding sub-image in the second image according to the group of structural similarities and the group of target similarities; and determining a target image area in the second image according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image, wherein the target image area is an area of the second image that has changed relative to the first image. The invention solves the technical problem in the related art that detecting articles in a target device is time-consuming.

Description

Article detection method and apparatus, storage medium, and electronic apparatus
Technical Field
The present invention relates to the field of communications, and in particular, to a method and an apparatus for detecting an article, a storage medium, and an electronic apparatus.
Background
In the related art, for a target device (e.g., a refrigerator) that stores articles, a user frequently puts articles into and takes articles out of the device. Because of the long time interval since the operation, or for other reasons, the user often forgets which articles (e.g., food materials) were put in or taken out, and therefore wishes to conveniently learn which articles were newly put in or taken out, i.e., which articles in the target device have changed. In the related art, a deep learning algorithm is used to detect changed articles in the target device: a large number of samples must be collected, labeled, and used for training, so the processing takes a long time, the workload is large, and the processing is complex. Moreover, a server with a dedicated Graphics Processing Unit (GPU) is required and the computation is performed in the cloud. As a result, the overall detection efficiency of the algorithm is low, the required time is long, the processing workload is large, and the cost is high.
For the technical problem in the related art that detecting articles in a target device is time-consuming, no effective technical solution has yet been proposed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for detecting an article, a storage medium and an electronic device, which are used for at least solving the technical problem that the time consumption is long when the article in a target device is detected in the related art.
According to an embodiment of the present invention, there is provided an article detection method, including: determining a group of structural similarities between a first image and a second image, wherein each structural similarity in the group of structural similarities is the structural similarity between one sub-image in the first image and the corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first moment, the second image is an image obtained by shooting the storage space at a second moment, and the first image and the second image are each divided into a plurality of sub-images; determining a group of target similarities between the first image and the second image, wherein each target similarity in the group of target similarities is the similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding sub-image in the second image; determining the similarity between each sub-image in the first image and the corresponding sub-image in the second image according to the group of structural similarities and the group of target similarities; and determining a target image area in the second image according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image, wherein the target image area is an area where the second image has changed relative to the first image, and the target image area is used for indicating that the articles placed in the target image area have changed.
Optionally, the determining a group of structural similarities between the first image and the second image comprises: dividing the first image and the second image into m × n sub-images respectively, to obtain sub-images S_ij in the first image and sub-images S'_ij in the second image, where m and n are positive integers, 1 ≤ i ≤ m, and 1 ≤ j ≤ n; and determining the structural similarity F_ij between the sub-image S_ij in the first image and the sub-image S'_ij in the second image, to obtain the group of structural similarities.
Optionally, the determining a group of target similarities between the first image and the second image comprises: determining a saliency map K_ij of the sub-image S_ij in the first image and a saliency map K'_ij of the sub-image S'_ij in the second image; and determining the similarity D_ij between the saliency map K_ij and the saliency map K'_ij, to obtain the group of target similarities.
Optionally, the determining a saliency map K_ij of the sub-image S_ij in the first image and a saliency map K'_ij of the sub-image S'_ij in the second image comprises: performing an orthogonal wavelet transform on the sub-image S_ij in the first image to obtain a low-frequency component I_LL and high-frequency components I_HL, I_HH, I_LH of the sub-image S_ij; determining the difference c between the low-frequency component I_LL and the average value of the low-frequency component I_LL; performing an inverse wavelet transform based on the high-frequency components I_HL, I_HH, I_LH and the difference c, to obtain the saliency map K_ij of the sub-image S_ij in the first image; performing an orthogonal wavelet transform on the sub-image S'_ij in the second image to obtain a low-frequency component I'_LL and high-frequency components I'_HL, I'_HH, I'_LH of the sub-image S'_ij; determining the difference c' between the low-frequency component I'_LL and the average value of the low-frequency component I'_LL; and performing an inverse wavelet transform based on the high-frequency components I'_HL, I'_HH, I'_LH and the difference c', to obtain the saliency map K'_ij of the sub-image S'_ij in the second image.
Optionally, the determining the similarity between each sub-image in the first image and the corresponding sub-image in the second image according to the group of structural similarities and the group of target similarities includes: determining the similarity R_ij between the sub-image S_ij in the first image and the corresponding sub-image S'_ij in the second image according to the structural similarity F_ij in the group of structural similarities and the target similarity D_ij in the group of target similarities, where the first image and the second image are each divided into m × n sub-images, m and n are positive integers, 1 ≤ i ≤ m, and 1 ≤ j ≤ n.
Optionally, the determining the similarity R_ij between the sub-image S_ij in the first image and the corresponding sub-image S'_ij in the second image according to the structural similarity F_ij in the group of structural similarities and the target similarity D_ij in the group of target similarities comprises: determining the similarity R_ij according to the following formula: R_ij = w1 × F_ij + w2 × D_ij, where w1 and w2 are the weights corresponding to the structural similarity F_ij and the target similarity D_ij, respectively.
Optionally, the determining a target image area in the second image according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image includes: in the case that the similarity R_ij is smaller than a similarity threshold, adding a flag value to the sub-image S'_ij in the second image corresponding to the similarity R_ij; and determining the connected region formed in the second image by all the sub-images S'_ij to which the flag value has been added as the target image region.
According to another embodiment of the present invention, there is provided an article detection apparatus, including: a first determining module, configured to determine a group of structural similarities between a first image and a second image, where each structural similarity in the group of structural similarities is the structural similarity between one sub-image in the first image and the corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first moment, the second image is an image obtained by shooting the storage space at a second moment, and the first image and the second image are each divided into a plurality of sub-images; a second determining module, configured to determine a group of target similarities between the first image and the second image, where each target similarity in the group of target similarities is the similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding sub-image in the second image; a third determining module, configured to determine, according to the group of structural similarities and the group of target similarities, the similarity between each sub-image in the first image and the corresponding sub-image in the second image; and a fourth determining module, configured to determine a target image area in the second image according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image, where the target image area is an area where the second image has changed relative to the first image, and the target image area is used to indicate that the articles placed in the target image area have changed.
Optionally, the first determining module is further configured to: divide the first image and the second image into m × n sub-images respectively, to obtain sub-images S_ij in the first image and sub-images S'_ij in the second image, where m and n are positive integers, 1 ≤ i ≤ m, and 1 ≤ j ≤ n; and determine the structural similarity F_ij between the sub-image S_ij in the first image and the sub-image S'_ij in the second image, to obtain the group of structural similarities.
Optionally, the second determining module is further configured to: determine a saliency map K_ij of the sub-image S_ij in the first image and a saliency map K'_ij of the sub-image S'_ij in the second image; and determine the similarity D_ij between the saliency map K_ij and the saliency map K'_ij, to obtain the group of target similarities.
Optionally, the second determining module is further configured to: perform an orthogonal wavelet transform on the sub-image S_ij in the first image to obtain a low-frequency component I_LL and high-frequency components I_HL, I_HH, I_LH of the sub-image S_ij; determine the difference c between the low-frequency component I_LL and the average value of the low-frequency component I_LL; perform an inverse wavelet transform based on the high-frequency components I_HL, I_HH, I_LH and the difference c, to obtain the saliency map K_ij of the sub-image S_ij in the first image; perform an orthogonal wavelet transform on the sub-image S'_ij in the second image to obtain a low-frequency component I'_LL and high-frequency components I'_HL, I'_HH, I'_LH of the sub-image S'_ij; determine the difference c' between the low-frequency component I'_LL and the average value of the low-frequency component I'_LL; and perform an inverse wavelet transform based on the high-frequency components I'_HL, I'_HH, I'_LH and the difference c', to obtain the saliency map K'_ij of the sub-image S'_ij in the second image.
Optionally, the third determining module is further configured to: determine the similarity R_ij between the sub-image S_ij in the first image and the corresponding sub-image S'_ij in the second image according to the structural similarity F_ij in the group of structural similarities and the target similarity D_ij in the group of target similarities, where the first image and the second image are each divided into m × n sub-images, m and n are positive integers, 1 ≤ i ≤ m, and 1 ≤ j ≤ n.
Optionally, the third determining module is further configured to: determine the similarity R_ij according to the following formula: R_ij = w1 × F_ij + w2 × D_ij, where w1 and w2 are the weights corresponding to the structural similarity F_ij and the target similarity D_ij, respectively.
Optionally, the fourth determining module is further configured to: in the case that the similarity R_ij is smaller than a similarity threshold, add a flag value to the sub-image S'_ij in the second image corresponding to the similarity R_ij; and determine the connected region formed in the second image by all the sub-images S'_ij to which the flag value has been added as the target image region.
Alternatively, according to another embodiment of the present invention, a storage medium is provided, in which a computer program is stored, wherein the computer program is arranged to perform the above-mentioned method when executed.
Alternatively, according to another embodiment of the present invention, there is provided an electronic apparatus, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the above method.
According to the invention, a group of structural similarities between a first image and a second image is determined, wherein each structural similarity in the group is the structural similarity between a sub-image in the first image and the corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first moment, the second image is an image obtained by shooting the storage space at a second moment, and the first image and the second image are each divided into a plurality of sub-images; a group of target similarities between the first image and the second image is determined, wherein each target similarity in the group is the similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding sub-image in the second image; the similarity between each sub-image in the first image and the corresponding sub-image in the second image is determined according to the group of structural similarities and the group of target similarities; and a target image area in the second image is determined according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image, wherein the target image area is an area where the second image has changed relative to the first image, and the target image area is used for indicating that the articles placed in the target image area have changed.
Because there is no need to label a large amount of training data, nor to train a deep learning algorithm on a large number of samples, and the region in which an article has changed is detected according to the processing steps performed on the first image and the second image, the technical problem in the related art that detecting articles in a target device is time-consuming can be solved, the time required for detecting the articles in the target device is shortened, and the efficiency of detecting the articles in the target device is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method of detecting an item according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a sub-image in a first image according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a sub-image in a second image according to an embodiment of the invention;
FIG. 4 is a schematic diagram of labeling a sub-image in a second image according to an embodiment of the invention;
FIG. 5 is a schematic illustration of all marked sub-images in a second image according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a target image area according to an embodiment of the invention;
FIG. 7 is a flow chart of a method of detecting an item according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of a detection signature according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a target image area according to another embodiment of the present invention;
FIG. 10 is a block diagram of an apparatus for detecting an article according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
An embodiment of the present invention provides a method for detecting an article, and fig. 1 is a flowchart of the method for detecting an article according to the embodiment of the present invention, as shown in fig. 1, including:
step S102, determining a group of structural similarities between a first image and a second image, wherein each structural similarity in the group of structural similarities is the structural similarity between one sub-image in the first image and the corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first moment, the second image is an image obtained by shooting the storage space at a second moment, and the first image and the second image are each divided into a plurality of sub-images;
step S104, determining a group of target similarities between the first image and the second image, wherein each target similarity in the group of target similarities is a similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding one sub-image in the second image;
step S106, determining the similarity between each sub-image in the first image and the corresponding sub-image in the second image according to the group of structural similarities and the group of target similarities;
step S108, determining a target image area in the second image according to the similarity between each sub-image in the first image and a corresponding sub-image in the second image, where the target image area is an area where the second image changes relative to the first image, and the target image area is used to indicate that the article placed in the target image area has changed.
According to the invention, a group of structural similarities between a first image and a second image is determined, wherein each structural similarity in the group is the structural similarity between a sub-image in the first image and the corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first moment, the second image is an image obtained by shooting the storage space at a second moment, and the first image and the second image are each divided into a plurality of sub-images; a group of target similarities between the first image and the second image is determined, wherein each target similarity in the group is the similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding sub-image in the second image; the similarity between each sub-image in the first image and the corresponding sub-image in the second image is determined according to the group of structural similarities and the group of target similarities; and a target image area in the second image is determined according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image, wherein the target image area is an area where the second image has changed relative to the first image, and the target image area is used for indicating that the articles placed in the target image area have changed.
Because there is no need to label a large amount of training data, nor to train a deep learning algorithm on a large number of samples, and the region in which an article has changed is detected according to the processing steps performed on the first image and the second image, the technical problem in the related art that detecting articles in a target device is time-consuming can be solved, the time required for detecting the articles in the target device is shortened, and the efficiency of detecting the articles in the target device is improved.
Based on the above embodiment, the determining a group of structural similarities between the first image and the second image includes: dividing the first image and the second image into m × n sub-images respectively, to obtain sub-images S_ij in the first image and sub-images S'_ij in the second image, where m and n are positive integers, 1 ≤ i ≤ m, and 1 ≤ j ≤ n; and determining the structural similarity F_ij between the sub-image S_ij in the first image and the sub-image S'_ij in the second image, to obtain the group of structural similarities.
In the above embodiment, the structural similarity F_ij between the sub-image S_ij in the first image and the sub-image S'_ij in the second image may be determined in the following manner:
Determining the structural similarity feature T_ij between the sub-image S_ij in the first image and the sub-image S'_ij in the second image as

T_ij = [(2·μ1·μ2 + c1) · (2·σ12 + c2)] / [(μ1² + μ2² + c1) · (σ1² + σ2² + c2)]

and taking the structural similarity feature T_ij as the structural similarity F_ij, where μ1 denotes the mean of the pixel values in sub-image S_ij, μ2 denotes the mean of the pixel values in sub-image S'_ij, σ1² denotes the variance of the pixel values in sub-image S_ij, σ2² denotes the variance of the pixel values in sub-image S'_ij, and σ12 denotes the covariance between the pixel values of sub-image S_ij and sub-image S'_ij. Optionally, c1 = (p1·L)² and c2 = (p2·L)², with p1 = 0.01 and p2 = 0.03, where L denotes the pixel value range (i.e., the difference between the maximum pixel value and the minimum pixel value) supported by the camera built into the target device for capturing the first image and the second image. In another alternative embodiment, c1 and c2 may be preset values.
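The structural similarity feature T_ij above can be computed directly from these definitions. The following is a minimal sketch (the function name and NumPy usage are illustrative, not from the patent), using the optional constants p1 = 0.01 and p2 = 0.03 and a default pixel range L = 255:

```python
import numpy as np

def structural_similarity(s, s2, L=255.0):
    """SSIM-style structural similarity F_ij between two corresponding
    sub-images, following the T_ij formula above (p1 = 0.01, p2 = 0.03)."""
    c1 = (0.01 * L) ** 2
    c2 = (0.03 * L) ** 2
    mu1, mu2 = s.mean(), s2.mean()        # means of pixel values
    var1, var2 = s.var(), s2.var()        # variances sigma1^2, sigma2^2
    cov = ((s - mu1) * (s2 - mu2)).mean() # covariance sigma12
    return ((2 * mu1 * mu2 + c1) * (2 * cov + c2)) / (
        (mu1 ** 2 + mu2 ** 2 + c1) * (var1 + var2 + c2))
```

For identical sub-images the value is 1, and it decreases as the two tiles diverge in brightness, contrast, or structure.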
As shown in fig. 2 and fig. 3, each of the first image and the second image is divided into m × n sub-images, where m = 3 and n = 3 in fig. 2 and fig. 3, and two sub-images at the same position in the first image and the second image are corresponding sub-images; for example, the sub-image S_11 in the first image and the sub-image S'_11 in the second image are corresponding sub-images.
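The division into an m × n grid of sub-images can be sketched as follows (an illustrative helper, not part of the patent; it assumes the image height and width are divisible by m and n, whereas a real implementation might pad or crop):

```python
import numpy as np

def split_into_tiles(img, m, n):
    """Divide an image into an m x n grid of sub-images S_ij.
    Returns a nested list so tiles[i][j] corresponds to S_(i+1)(j+1)."""
    h, w = img.shape[:2]
    th, tw = h // m, w // n  # tile height and width
    return [[img[i * th:(i + 1) * th, j * tw:(j + 1) * tw] for j in range(n)]
            for i in range(m)]

# Example: a 6x6 image split into the 3x3 grid used in figs. 2 and 3.
image = np.arange(36).reshape(6, 6)
tiles = split_into_tiles(image, 3, 3)
```

Applying the same split to both the first and the second image yields the pairs (S_ij, S'_ij) compared in the following steps.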
Wherein the determining a group of target similarities between the first image and the second image comprises: determining a saliency map K_ij of the sub-image S_ij in the first image and a saliency map K'_ij of the sub-image S'_ij in the second image; and determining the similarity D_ij between the saliency map K_ij and the saliency map K'_ij, to obtain the group of target similarities.
A saliency map is a map used for indicating the saliency of an image. For example, the saliency map K_ij of the sub-image S_ij is used for representing the saliency of the sub-image S_ij, and the saliency map K'_ij of the sub-image S'_ij is used for representing the saliency of the sub-image S'_ij.
As an alternative embodiment, the similarity D_ij between the saliency map K_ij and the saliency map K'_ij is the cosine similarity between the saliency map K_ij and the saliency map K'_ij.
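A minimal sketch of the cosine similarity between two saliency maps, flattening each map to a vector (the helper name is illustrative, not from the patent):

```python
import numpy as np

def cosine_similarity(k, k2):
    """Cosine similarity D_ij between two saliency maps K_ij and K'_ij,
    treated as flattened vectors."""
    a, b = k.ravel().astype(float), k2.ravel().astype(float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

Proportional maps score 1, orthogonal maps score 0, so D_ij measures whether the salient structure occupies the same positions in both tiles regardless of overall magnitude.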
Based on the above embodiment, the determining the saliency map K_ij of the sub-image S_ij in the first image and the saliency map K'_ij of the sub-image S'_ij in the second image comprises: performing an orthogonal wavelet transform on the sub-image S_ij in the first image to obtain a low-frequency component I_LL and high-frequency components I_HL, I_HH, I_LH of the sub-image S_ij; determining the difference c between the low-frequency component I_LL and the average value of the low-frequency component I_LL; performing an inverse wavelet transform based on the high-frequency components I_HL, I_HH, I_LH and the difference c, to obtain the saliency map K_ij of the sub-image S_ij in the first image; performing an orthogonal wavelet transform on the sub-image S'_ij in the second image to obtain a low-frequency component I'_LL and high-frequency components I'_HL, I'_HH, I'_LH of the sub-image S'_ij; determining the difference c' between the low-frequency component I'_LL and the average value of the low-frequency component I'_LL; and performing an inverse wavelet transform based on the high-frequency components I'_HL, I'_HH, I'_LH and the difference c', to obtain the saliency map K'_ij of the sub-image S'_ij in the second image.
Wherein the high-frequency components I_HL, I_HH, I_LH respectively denote the horizontal component, diagonal component, and vertical component of the sub-image S_ij, and the high-frequency components I'_HL, I'_HH, I'_LH respectively denote the horizontal component, diagonal component, and vertical component of the sub-image S'_ij.
Note that the low-frequency component I_LL can be expressed in the form I_LL(x, y), where (x, y) denotes the coordinates, within the sub-image S_ij, of the pixel corresponding to the low-frequency component I_LL, i.e., x ∈ [x_0, x_e] and y ∈ [y_0, y_e], where x_0, x_e, y_0, y_e respectively denote the minimum abscissa, maximum abscissa, minimum ordinate, and maximum ordinate of the pixels in the sub-image S_ij; similarly, the low-frequency component I'_LL = I'_LL(x', y'), where x' ∈ [x'_0, x'_e] and y' ∈ [y'_0, y'_e], and x'_0, x'_e, y'_0, y'_e respectively denote the minimum abscissa, maximum abscissa, minimum ordinate, and maximum ordinate of the pixels in the sub-image S'_ij.
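The saliency-map construction above can be sketched with a single-level orthonormal Haar transform. The patent specifies only an orthogonal wavelet transform, so the choice of the Haar basis, the function names, and the even-sized-tile assumption are all illustrative:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D orthonormal Haar transform (one example of an
    orthogonal wavelet transform). Assumes even height and width."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency component I_LL
    hl = (a - b + c - d) / 2   # high-frequency detail
    lh = (a + b - c - d) / 2   # high-frequency detail
    hh = (a - b - c + d) / 2   # high-frequency detail
    return ll, hl, lh, hh

def haar_idwt2(ll, hl, lh, hh):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + hl + lh + hh) / 2
    out[0::2, 1::2] = (ll - hl + lh - hh) / 2
    out[1::2, 0::2] = (ll + hl - lh - hh) / 2
    out[1::2, 1::2] = (ll - hl - lh + hh) / 2
    return out

def saliency_map(sub_img):
    """Saliency map K_ij: replace the low-frequency band by its deviation
    from its own mean (the difference c in the text), then invert."""
    ll, hl, lh, hh = haar_dwt2(np.asarray(sub_img, dtype=float))
    c = ll - ll.mean()
    return haar_idwt2(c, hl, lh, hh)
```

Subtracting the mean from I_LL suppresses the uniform background, so the reconstructed map emphasizes edges and locally distinctive regions; a constant tile yields an all-zero saliency map.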
In the above embodiment, the determining the similarity between each sub-image in the first image and the corresponding sub-image in the second image according to the group of structural similarities and the group of target similarities includes: determining the similarity R_ij between the sub-image S_ij in the first image and the corresponding sub-image S'_ij in the second image according to the structural similarity F_ij in the group of structural similarities and the target similarity D_ij in the group of target similarities, where the first image and the second image are each divided into m × n sub-images, m and n are positive integers, 1 ≤ i ≤ m, and 1 ≤ j ≤ n.
Based on this embodiment, the similarity between a sub-image of the first image and the corresponding sub-image of the second image is determined by combining their structural similarity with their target similarity, which improves the accuracy of the computed similarity. Determining the target image area in the second image from sub-image similarities computed in this way therefore improves the accuracy of the determined target image area.
Determining the similarity R_ij between the sub-image S_ij in the first image and the corresponding sub-image S'_ij in the second image according to the structural similarity F_ij in the set of structural similarities and the target similarity D_ij in the set of target similarities comprises: determining the similarity R_ij according to the formula R_ij = w1 × F_ij + w2 × D_ij, where w1 and w2 are the weights corresponding to the structural similarity F_ij and the target similarity D_ij respectively. Optionally, w1 and w2 are both 0.5.
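A minimal sketch of the weighted combination above (the function name combined_similarity is illustrative, not from the patent):

```python
def combined_similarity(f_ij, d_ij, w1=0.5, w2=0.5):
    """R_ij = w1 * F_ij + w2 * D_ij; with the optional w1 = w2 = 0.5,
    this reduces to the plain average of the two similarities."""
    return w1 * f_ij + w2 * d_ij
```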
Determining the target image area in the second image according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image comprises: in the case where the similarity R_ij is smaller than a similarity threshold, adding a tag value to the sub-image S'_ij in the second image corresponding to the similarity R_ij; and determining the connected region formed in the second image by all the sub-images S'_ij to which the tag value has been added as the target image region.
As shown in fig. 4, if the similarity R_11 between the sub-image S_11 and the sub-image S'_11 is determined to be smaller than the similarity threshold, adding the tag to the sub-image S'_11 in the second image comprises: adding a tag (e.g., "N" or "255") to the sub-image S'_11 in the second image, or adding an identifier to each pixel value in the sub-image S'_11 in the second image. All sub-images of the second image to which the tag value has been added are shown in fig. 5. Fig. 6 shows the connected region formed in the second image by all the sub-images to which the tag value has been added, namely the grid region in fig. 6 (i.e., the target image region in the above embodiment).
In the above embodiment, after the target image area in the second image is determined, the following technical solution is further performed: displaying a reminder message on a display screen of the target device, where the reminder message is used to remind the user that the articles placed in the target image area have changed.
The following explains the article detection method of the above embodiment by taking the target device as a refrigerator and the articles placed in the storage space of the target device as food materials; this example does not limit the technical solution of the embodiment of the present invention. As shown in fig. 7, the scheme of the embodiment of the present invention comprises the following steps:
step 1, capture an image I before food material is put into the refrigerator and an image I' after the food material is put into the refrigerator (corresponding to the first image and the second image in the above embodiment, respectively), and perform block processing on each of the two images to divide it into m × n blocks, where an image block in image I (i.e., a sub-image of the first image in the above embodiment) is S_ij and an image block in image I' (i.e., a sub-image of the second image in the above embodiment) is S'_ij, m and n being positive integers with 1 ≤ i ≤ m and 1 ≤ j ≤ n;
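The block partitioning of step 1 might be sketched as follows; split_blocks is an illustrative name, and the image height and width are assumed divisible by m and n respectively:

```python
def split_blocks(img, m, n):
    """Split an image (a list of pixel rows) into an m x n grid of blocks;
    blocks[i][j] corresponds to the sub-image S_(i+1)(j+1)."""
    h, w = len(img), len(img[0])
    bh, bw = h // m, w // n  # block height and width
    return [[[row[j * bw:(j + 1) * bw] for row in img[i * bh:(i + 1) * bh]]
             for j in range(n)]
            for i in range(m)]
```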
step 2, calculate the similarity between each pair of image blocks S_ij and S'_ij.
Step 2 comprises the following sub-steps:
step 2.1, calculate the structural similarity feature F_ij (i.e., the structural similarity in the above embodiment) between each pair of image blocks S_ij and S'_ij, where F_ij is calculated as:
F_ij = ((2 × μ1 × μ2 + c1) × (2 × σ12 + c2)) / ((μ1² + μ2² + c1) × (σ1 + σ2 + c2))
μ1 denotes the mean of the pixel values in sub-image S_ij, μ2 the mean of the pixel values in sub-image S'_ij, σ1 the variance of the pixel values in S_ij, σ2 the variance of the pixel values in S'_ij, and σ12 the covariance between the pixel values of S_ij and S'_ij. c1 and c2 are constants used to maintain stability. Optionally, c1 = (p1 × L)², c2 = (p2 × L)², p1 = 0.01, p2 = 0.03, and L denotes the range of pixel values supported by the camera used to capture the first image and the second image (i.e., the difference between the maximum and the minimum pixel value). The value of F_ij lies in the range 0 to 1, and F_ij equals 1 when the two sub-images are identical.
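Step 2.1 can be sketched directly from these definitions. The sketch assumes population variance and covariance, 8-bit pixels (L = 255), and blocks given as flat lists of pixel values; ssim_block is an illustrative name, not from the patent:

```python
def ssim_block(s, s2, L=255.0, p1=0.01, p2=0.03):
    """F_ij for two equal-sized blocks given as flat lists of pixel values."""
    n = len(s)
    mu1, mu2 = sum(s) / n, sum(s2) / n                       # means
    var1 = sum((v - mu1) ** 2 for v in s) / n                # variances
    var2 = sum((v - mu2) ** 2 for v in s2) / n
    cov = sum((a - mu1) * (b - mu2) for a, b in zip(s, s2)) / n
    c1, c2 = (p1 * L) ** 2, (p2 * L) ** 2                    # stability constants
    return ((2 * mu1 * mu2 + c1) * (2 * cov + c2)) / \
           ((mu1 ** 2 + mu2 ** 2 + c1) * (var1 + var2 + c2))
```

Identical blocks give exactly 1, since numerator and denominator then coincide term by term.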
Step 2.2, calculate the contrast saliency image K_ij (i.e., the saliency map in the above embodiment) of each image block S_ij and the contrast saliency image K'_ij (i.e., the saliency map in the above embodiment) of each image block S'_ij, where the calculation of K_ij comprises:
step 2.2.1, perform an orthogonal wavelet transform on the image S_ij to obtain the four wavelet components of the image, namely the low-frequency component I_LL(x,y) (i.e., the low-frequency component I_LL in the above embodiment) and the three high-frequency components I_HL(x,y), I_HH(x,y), I_LH(x,y) (i.e., the high-frequency components I_HL, I_HH, I_LH in the above embodiment), where (x, y) in each component denotes the coordinates of a pixel point in the sub-image to which the component corresponds; for example, x and y in I_LL(x,y) denote the coordinates of a pixel point of image S_ij in the image S_ij, with x ∈ [x_0, x_e] and y ∈ [y_0, y_e], where x_0, x_e, y_0, y_e respectively denote the minimum abscissa, maximum abscissa, minimum ordinate, and maximum ordinate of the pixel points in the sub-image S_ij;
step 2.2.2, calculate the difference between the low-frequency component I_LL(x,y) and its average value:
C(x,y) = (I_LL(x,y) − I_μ)², where I_μ is the average value of the low-frequency component I_LL(x,y);
step 2.2.3, perform an inverse wavelet transform based on C(x,y), I_HL(x,y), I_HH(x,y), and I_LH(x,y) to obtain the contrast saliency image K_ij.
Following the same calculation process, calculate the contrast saliency image K'_ij of each image block S'_ij.
Step 2.3, calculate the cosine similarity distance D_ij between the contrast saliency maps K_ij and K'_ij (i.e., the target similarity D_ij in the above embodiment);
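The cosine similarity of step 2.3, with the two saliency maps flattened to vectors, might look like this; cosine_similarity is an illustrative name, and treating two all-zero (flat) maps as identical is an assumption of this sketch:

```python
import math

def cosine_similarity(k, k2):
    """D_ij: cosine of the angle between two saliency maps (lists of rows)."""
    a = [v for row in k for v in row]    # flatten map K_ij
    b = [v for row in k2 for v in row]   # flatten map K'_ij
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0.0 or nb == 0.0:
        return 1.0 if na == nb else 0.0  # convention for flat (all-zero) maps
    return dot / (na * nb)
```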
Step 2.4, weight and sum the structural similarity F_ij and the contrast saliency similarity D_ij to obtain the comprehensive similarity R_ij, the weighting coefficients being 0.5 and 0.5. When R_ij is greater than the similarity threshold, label each pixel in block S'_ij as 255; when R_ij is less than or equal to the similarity threshold, label each pixel in block S'_ij as 0. The similarity threshold can be a value selected from the range 0 to 1 according to experience or the actual scene;
step 3, after all image blocks have been labeled, denote the image blocks labeled 0 as S''_ij, and recombine the S''_ij blocks into a new image according to their block order in the second image; this new image is the detection feature map;
optionally, as shown in fig. 8, the image obtained by recombining the S''_ij blocks according to their order in the second image is the image shown in fig. 8;
step 4, merge the connected domains of the detection feature map and mark the circumscribed rectangular frame of each connected domain (the region represented by a circumscribed rectangular frame is the target image region in the above embodiment). The connected domains are the detected food-material change positions; the region enclosed by the solid line in fig. 9 is a detected food-material change position.
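Step 4's connected-domain merging can be sketched as a flood fill over the m × n grid of changed blocks. changed_regions is an illustrative name; 4-connectivity is assumed, the input is a boolean grid where True marks a block labeled 0 (changed), and each region is returned as its circumscribed rectangle (i_min, j_min, i_max, j_max) in block coordinates:

```python
def changed_regions(changed):
    """Group 4-connected changed blocks; return one bounding box per region."""
    m, n = len(changed), len(changed[0])
    seen = [[False] * n for _ in range(m)]
    boxes = []
    for i in range(m):
        for j in range(n):
            if not changed[i][j] or seen[i][j]:
                continue
            stack, cells = [(i, j)], []
            seen[i][j] = True
            while stack:  # iterative flood fill from the seed block
                ci, cj = stack.pop()
                cells.append((ci, cj))
                for ni, nj in ((ci - 1, cj), (ci + 1, cj),
                               (ci, cj - 1), (ci, cj + 1)):
                    if 0 <= ni < m and 0 <= nj < n \
                            and changed[ni][nj] and not seen[ni][nj]:
                        seen[ni][nj] = True
                        stack.append((ni, nj))
            rows = [c[0] for c in cells]
            cols = [c[1] for c in cells]
            # circumscribed rectangle of this connected domain
            boxes.append((min(rows), min(cols), max(rows), max(cols)))
    return boxes
```

Mapping each box back to pixels (multiplying by the block height and width) would give the rectangle drawn in fig. 9.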
This embodiment achieves the technical effect of reminding the user that the food materials have changed when food materials are put into or taken out of the refrigerator, and of indicating the position of the newly placed food materials. It also avoids the technical problem that detecting food-material position changes with a deep learning algorithm requires collecting and calibrating a large amount of data, which is time-consuming and labor-intensive.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to another embodiment of the present invention, there is provided an article detection apparatus, which is used for implementing the above embodiments and preferred embodiments, and the description of the apparatus is omitted for brevity. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 10 is a block diagram of a structure of an apparatus for detecting an article according to an embodiment of the present invention, the apparatus including:
a first determining module 62, configured to determine a set of structural similarities between a first image and a second image, where each structural similarity in the set of structural similarities is a structural similarity between one sub-image in the first image and a corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first time, the second image is an image obtained by shooting the storage space at a second time, and the first image and the second image are both divided into multiple sub-images;
a second determining module 64, configured to determine a set of target similarities between the first image and the second image, where each target similarity in the set of target similarities is a similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding one sub-image in the second image;
a third determining module 66, configured to determine a similarity between each sub-image in the first image and a corresponding sub-image in the second image according to the set of structural similarities and the set of target similarities;
a fourth determining module 68, configured to determine a target image area in the second image according to a similarity between each sub-image in the first image and a corresponding sub-image in the second image, where the target image area is an area where the second image changes relative to the first image, and the target image area is used to indicate that the placed article in the target image area has changed.
According to the invention, a group of structural similarities between a first image and a second image is determined, wherein each structural similarity in the group is the structural similarity between a sub-image in the first image and the corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first moment, the second image is an image obtained by shooting the storage space at a second moment, and the first image and the second image are both divided into a plurality of sub-images; a set of target similarities between the first image and the second image is determined, wherein each target similarity in the set is the similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding sub-image in the second image; the similarity between each sub-image in the first image and the corresponding sub-image in the second image is determined according to the set of structural similarities and the set of target similarities; and a target image area in the second image is determined according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image, wherein the target image area is the area where the second image has changed relative to the first image, and the target image area is used to indicate that the articles placed in the target image area have changed.
Because there is no need to label a large amount of training data or to train a deep learning algorithm on large amounts of data, and the region where articles have changed is detected directly from the processing steps performed on the first image and the second image, the technical problem in the related art that detecting articles in the target device is time-consuming can be solved, the time required to detect articles in the target device is shortened, and the efficiency of detecting articles in the target device is improved.
Optionally, the first determining module 62 is further configured to: divide the first image and the second image into m × n sub-images respectively to obtain the sub-images S_ij of the first image and the sub-images S'_ij of the second image, where m and n are positive integers, 1 ≤ i ≤ m, and 1 ≤ j ≤ n; and determine the structural similarity F_ij between the sub-image S_ij in the first image and the sub-image S'_ij in the second image to obtain the set of structural similarities.
Optionally, the second determining module 64 is further configured to: determine the saliency map K_ij of the sub-image S_ij in the first image and the saliency map K'_ij of the sub-image S'_ij in the second image; and determine the similarity D_ij between the saliency map K_ij and the saliency map K'_ij to obtain the set of target similarities.
Optionally, the second determining module 64 is further configured to: perform an orthogonal wavelet transform on the sub-image S_ij in the first image to obtain the low-frequency component I_LL and the high-frequency components I_HL, I_HH, I_LH of the sub-image S_ij; determine the difference c between the low-frequency component I_LL and the average value of the low-frequency component I_LL; perform an inverse wavelet transform based on the high-frequency components I_HL, I_HH, I_LH and the difference c to obtain the saliency map K_ij of the sub-image S_ij in the first image; perform an orthogonal wavelet transform on the sub-image S'_ij in the second image to obtain the low-frequency component I'_LL and the high-frequency components I'_HL, I'_HH, I'_LH of the sub-image S'_ij; determine the difference c' between the low-frequency component I'_LL and the average value of the low-frequency component I'_LL; and perform an inverse wavelet transform based on the high-frequency components I'_HL, I'_HH, I'_LH and the difference c' to obtain the saliency map K'_ij of the sub-image S'_ij in the second image.
Optionally, the third determining module 66 is further configured to: determine the similarity R_ij between the sub-image S_ij in the first image and the corresponding sub-image S'_ij in the second image according to the structural similarity F_ij in the set of structural similarities and the target similarity D_ij in the set of target similarities, where the first image and the second image are each divided into m × n sub-images, m and n are positive integers, 1 ≤ i ≤ m, and 1 ≤ j ≤ n.
Optionally, the third determining module 66 is further configured to: determine the similarity R_ij according to the formula R_ij = w1 × F_ij + w2 × D_ij, where w1 and w2 are the weights corresponding to the structural similarity F_ij and the target similarity D_ij respectively.
Optionally, the fourth determining module 68 is further configured to: in the case where the similarity R_ij is smaller than the similarity threshold, add a tag value to the sub-image S'_ij in the second image corresponding to the similarity R_ij; and determine the connected region formed in the second image by all the sub-images S'_ij to which the tag value has been added as the target image region.
An embodiment of the present invention further provides a storage medium including a stored program, wherein the program executes any one of the methods described above.
Alternatively, in the present embodiment, the storage medium may be configured to store program codes for performing the following steps:
s1, determining a set of structural similarities between a first image and a second image, where each structural similarity in the set of structural similarities is a structural similarity between a sub-image in the first image and a corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first time, the second image is an image obtained by shooting the storage space at a second time, and the first image and the second image are both divided into a plurality of sub-images;
s2, determining a set of target similarities between the first image and the second image, wherein each target similarity in the set of target similarities is a similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding one sub-image in the second image;
s3, determining the similarity between each sub-image in the first image and the corresponding sub-image in the second image according to the set of structural similarity and the set of target similarity;
and S4, determining a target image area in the second image according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image, wherein the target image area is an area where the second image changes relative to the first image, and the target image area is used for indicating that the placed article in the target image area is changed.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, determining a set of structural similarities between a first image and a second image, where each structural similarity in the set of structural similarities is a structural similarity between a sub-image in the first image and a corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first time, the second image is an image obtained by shooting the storage space at a second time, and the first image and the second image are both divided into a plurality of sub-images;
s2, determining a set of target similarities between the first image and the second image, wherein each target similarity in the set of target similarities is a similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding one sub-image in the second image;
s3, determining the similarity between each sub-image in the first image and the corresponding sub-image in the second image according to the set of structural similarity and the set of target similarity;
and S4, determining a target image area in the second image according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image, wherein the target image area is an area where the second image changes relative to the first image, and the target image area is used for indicating that the placed article in the target image area is changed.
Fig. 11 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention. Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 11 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 11 is a diagram illustrating a structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in FIG. 11, or have a different configuration than shown in FIG. 11.
The memory 1002 may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for detecting an article in the embodiment of the present invention, and the processor 1004 executes various functional applications and data processing by executing the software programs and modules stored in the memory 1002, that is, the method for detecting an article described above is implemented. The memory 1002 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1002 may further include memory located remotely from the processor 1004, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. As an example, the memory 1002 may include, but is not limited to, the first determination module 62, the second determination module 64, the third determination module 66, and the fourth determination module 68 of the apparatus for detection of the item. In addition, other module units in the device for detecting an article may also be included, but are not limited to these, and are not described in detail in this example.
Optionally, the transmission device 1006 is used for receiving or transmitting data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transport device 1006 includes a Network adapter (NIC) that can be connected to a router via a Network cable to communicate with the internet or a local area Network. In one example, the transmission device 1006 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 1008 for displaying a screen; and a connection bus 1010 for connecting the respective module parts in the above-described electronic apparatus.
In other embodiments, the terminal or the server may be a node in a distributed system, wherein the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting a plurality of nodes through a network communication form. Nodes can form a Peer-To-Peer (P2P, Peer To Peer) network, and any type of computing device, such as a server, a terminal, and other electronic devices, can become a node in the blockchain system by joining the Peer-To-Peer network.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of detecting an article, comprising:
determining a group of structural similarities between a first image and a second image, wherein each structural similarity in the group of structural similarities is a structural similarity between one sub-image in the first image and a corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first moment, the second image is an image obtained by shooting the storage space at a second moment, and the first image and the second image are each divided into a plurality of sub-images;
determining a set of target similarities between the first image and the second image, wherein each target similarity in the set of target similarities is a similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding one sub-image in the second image;
determining the similarity between each sub-image in the first image and the corresponding sub-image in the second image according to the set of structural similarities and the set of target similarities;
and determining a target image area in the second image according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image, wherein the target image area is an area where the second image changes relative to the first image, and the target image area is used for indicating that the placed article in the target image area is changed.
2. The method of claim 1, wherein determining a set of structural similarities between the first image and the second image comprises:
dividing the first image and the second image into m × n sub-images respectively to obtain the sub-images S_ij of the first image and the sub-images S'_ij of the second image, wherein m and n are positive integers, 1 ≤ i ≤ m, and 1 ≤ j ≤ n;
determining the structural similarity F_ij between the sub-image S_ij in the first image and the sub-image S'_ij in the second image, and obtaining the group of structural similarities.
3. The method of claim 2, wherein determining a set of target similarities between the first image and the second image comprises:
determining the saliency map K_ij of the sub-image S_ij in the first image and the saliency map K'_ij of the sub-image S'_ij in the second image;
determining the similarity D_ij between the saliency map K_ij and the saliency map K'_ij, and obtaining the set of target similarities.
4. The method according to claim 3, wherein the determining the saliency map K_ij of the sub-image S_ij in the first image and the saliency map K'_ij of the sub-image S'_ij in the second image comprises:
performing an orthogonal wavelet transform on the sub-image S_ij in the first image to obtain the low-frequency component I_LL and the high-frequency components I_HL, I_HH, I_LH of the sub-image S_ij;
determining the difference c between the low-frequency component I_LL and the average value of the low-frequency component I_LL;
performing an inverse wavelet transform based on the high-frequency components I_HL, I_HH, I_LH and the difference c to obtain the saliency map K_ij of the sub-image S_ij in the first image;
performing an orthogonal wavelet transform on the sub-image S'_ij in the second image to obtain the low-frequency component I'_LL and the high-frequency components I'_HL, I'_HH, I'_LH of the sub-image S'_ij;
determining the difference c' between the low-frequency component I'_LL and the average value of the low-frequency component I'_LL;
performing an inverse wavelet transform based on the high-frequency components I'_HL, I'_HH, I'_LH and the difference c' to obtain the saliency map K'_ij of the sub-image S'_ij in the second image.
5. The method of claim 1, wherein determining the similarity between each sub-image in the first image and the corresponding sub-image in the second image according to the set of structural similarities and the set of target similarities comprises:
determining the similarity R_ij between the sub-image S_ij in the first image and the corresponding sub-image S'_ij in the second image according to the structural similarity F_ij in the set of structural similarities and the target similarity D_ij in the set of target similarities, wherein the first image and the second image are each divided into m × n sub-images, m and n are positive integers, 1 ≤ i ≤ m, and 1 ≤ j ≤ n.
6. The method of claim 5, wherein the determining the similarity R_ij between the sub-image S_ij in the first image and the corresponding sub-image S'_ij in the second image according to the structural similarity F_ij in the set of structural similarities and the target similarity D_ij in the set of target similarities comprises:
determining the similarity R_ij according to the following formula:
R_ij = w1 × F_ij + w2 × D_ij, wherein w1 and w2 are the weights corresponding to the structural similarity F_ij and the target similarity D_ij respectively.
7. The method of claim 5, wherein determining the target image area in the second image according to the similarity between each sub-image in the first image and the corresponding sub-image in the second image comprises:
in the case where the similarity R_ij is smaller than a similarity threshold, adding a tag value to the sub-image S'_ij in the second image corresponding to the similarity R_ij;
determining the connected region formed in the second image by all the sub-images S'_ij to which the tag value has been added as the target image region.
8. An apparatus for detecting an article, comprising:
a first determining module, configured to determine a group of structural similarities between a first image and a second image, where each structural similarity in the group of structural similarities is a structural similarity between one sub-image in the first image and a corresponding sub-image in the second image, the first image is an image obtained by shooting a storage space in a target device at a first time, the second image is an image obtained by shooting the storage space at a second time, and the first image and the second image are both divided into a plurality of sub-images;
a second determining module, configured to determine a set of target similarities between the first image and the second image, where each target similarity in the set of target similarities is a similarity between the saliency map of the one sub-image in the first image and the saliency map of the corresponding one sub-image in the second image;
a third determining module, configured to determine, according to the set of structural similarities and the set of target similarities, a similarity between each sub-image in the first image and a corresponding sub-image in the second image;
a fourth determining module, configured to determine a target image area in the second image according to a similarity between each sub-image in the first image and a corresponding sub-image in the second image, where the target image area is an area where the second image changes with respect to the first image, and the target image area is used to indicate that a placed article in the target image area has changed.
9. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 7 when executed.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 7 by means of the computer program.
CN202010828027.5A 2020-08-17 2020-08-17 Article detection method and apparatus, storage medium, and electronic apparatus Pending CN112001289A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010828027.5A CN112001289A (en) 2020-08-17 2020-08-17 Article detection method and apparatus, storage medium, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010828027.5A CN112001289A (en) 2020-08-17 2020-08-17 Article detection method and apparatus, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
CN112001289A true CN112001289A (en) 2020-11-27

Family

ID=73474063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010828027.5A Pending CN112001289A (en) 2020-08-17 2020-08-17 Article detection method and apparatus, storage medium, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN112001289A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361509A (en) * 2021-08-11 2021-09-07 西安交通大学医学院第一附属医院 Image processing method for facial paralysis detection
CN113989696A (en) * 2021-09-18 2022-01-28 北京远度互联科技有限公司 Target tracking method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060212704A1 (en) * 2005-03-15 2006-09-21 Microsoft Corporation Forensic for fingerprint detection in multimedia
CN104574399A (en) * 2015-01-06 2015-04-29 天津大学 Image quality evaluation method based on multi-scale vision significance and gradient magnitude
CN106575366A (en) * 2014-07-04 2017-04-19 光实验室股份有限公司 Methods and apparatus relating to detection and/or indicating a dirty lens condition
CN107346409A (en) * 2016-05-05 2017-11-14 华为技术有限公司 Pedestrian recognition methods and device again
CN107657608A (en) * 2017-09-25 2018-02-02 北京小米移动软件有限公司 Picture quality determines method, apparatus and electronic equipment
CN108615042A (en) * 2016-12-09 2018-10-02 炬芯(珠海)科技有限公司 The method and apparatus and player of video format identification
CN109300096A (en) * 2018-08-07 2019-02-01 北京智脉识别科技有限公司 A kind of multi-focus image fusing method and device
CN109918291A (en) * 2019-01-17 2019-06-21 深圳壹账通智能科技有限公司 Software interface detection method, device, computer equipment and storage medium
CN110149553A (en) * 2019-05-10 2019-08-20 腾讯科技(深圳)有限公司 Treating method and apparatus, storage medium and the electronic device of image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘彦甲: "Research on Visual Attention Models and Their Application to SAR Images", China Master's Theses Full-text Database, Information Science and Technology series, no. 01, pages 136-1199 *
宋明煜; 张靖; 俞一彪: "Image Super-Resolution Reconstruction Combining Multi-directional Filtering with Inverse Wavelet Transform", Modern Information Technology, no. 15, pages 5-8 *


Similar Documents

Publication Publication Date Title
CN108229591A (en) Neural network adaptive training method and apparatus, equipment, program and storage medium
CN112001289A (en) Article detection method and apparatus, storage medium, and electronic apparatus
CN111459269B (en) Augmented reality display method, system and computer readable storage medium
CN112434715B (en) Target identification method and device based on artificial intelligence and storage medium
CN108647264A (en) A kind of image automatic annotation method and device based on support vector machines
CN110599554A (en) Method and device for identifying face skin color, storage medium and electronic device
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
CN112001430A (en) Refrigerator food material detection method and device, storage medium and electronic device
CN112115292A (en) Picture searching method and device, storage medium and electronic device
CN114299230A (en) Data generation method and device, electronic equipment and storage medium
CN107181767A (en) Information sharing method, system and server
CN108764206B (en) Target image identification method and system and computer equipment
CN113435515B (en) Picture identification method and device, storage medium and electronic equipment
CN116188805A (en) Image content analysis method and device for massive images and image information network
CN115550645A (en) Method and device for determining intra-frame prediction mode, storage medium and electronic equipment
CN113781541B (en) Three-dimensional image processing method and device based on neural network and electronic equipment
CN114612531A (en) Image processing method and device, electronic equipment and storage medium
CN112750146B (en) Target object tracking method and device, storage medium and electronic equipment
CN111127529B (en) Image registration method and device, storage medium and electronic device
CN111127310B (en) Image processing method and device, electronic equipment and storage medium
CN104424635A (en) Information processing method, system and equipment
CN114926666A (en) Image data processing method and device
CN114638846A (en) Pickup pose information determination method, pickup pose information determination device, pickup pose information determination equipment and computer readable medium
CN108062741B (en) Binocular image processing method, imaging device and electronic equipment
CN111768443A (en) Image processing method and device based on mobile camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination