CN112529959A - Article movement detection method and device - Google Patents

Article movement detection method and device

Info

Publication number
CN112529959A
CN112529959A
Authority
CN
China
Prior art keywords
image
item
occluded
article
target
Prior art date
Legal status
Pending
Application number
CN202011471372.4A
Other languages
Chinese (zh)
Inventor
程鉴张
Current Assignee
Beijing Aibee Technology Co Ltd
Original Assignee
Beijing Aibee Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Aibee Technology Co Ltd filed Critical Beijing Aibee Technology Co Ltd
Priority to CN202011471372.4A priority Critical patent/CN112529959A/en
Publication of CN112529959A publication Critical patent/CN112529959A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an article movement detection method and device, which can identify an occluded target item; obtain a first item image of the occluded target item in a first image, wherein the first image is an image acquired before the occluded target item became occluded, and the first item image is located in a first area in the first image; obtain a second item image of the occluded target item in a second image, wherein the second image is an image acquired after the occluded target item is no longer occluded, and the second item image is located in the first area in the second image; and determine, from the first item image and the second item image, whether the occluded target item was moved during occlusion. By performing movement detection only on the target items that are occluded in the image, the embodiment of the invention avoids the heavy system load and waste of computer resources caused by performing movement detection on all items in the image.

Description

Article movement detection method and device
Technical Field
The invention relates to the technical field of computers, in particular to an article movement detection method and device.
Background
Article management is required in many fields, and one of its core functions is target tracking of articles. In practice, target tracking is generally performed only on articles that move; it is therefore necessary to detect whether an article has moved before tracking it.
However, in the prior art, movement detection is performed continuously on all articles, which occupies considerable computer resources; since not every article is worth tracking, detecting movement for all of them is undoubtedly a waste of computer resources.
Disclosure of Invention
In view of the above problems, the present invention provides an article movement detection method and apparatus that overcome, or at least partially solve, the above problems. The technical solution is as follows:
an article movement detection method comprising:
identifying an occluded target item;
obtaining a first item image of the occluded target item in a first image, wherein the first image is an image acquired before the occluded target item became occluded, and the first item image is located in a first area in the first image;
obtaining a second item image of the occluded target item in a second image, wherein the second image is an image acquired after the occluded target item is no longer occluded, and the second item image is located in the first area in the second image;
determining from the first item image and the second item image whether the occluded target item is moved during occlusion.
Optionally, the identifying the occluded target item includes:
and determining the position relation between a first position mark of the target object in the acquired frame image and a second position mark of the person in the frame image, and determining that the target object in the frame image is blocked by the person when the position relation is a preset position relation.
Optionally, the first position mark and the second position mark are both circumscribed rectangle frames, and the preset position relationship is as follows: the first position marker and the second position marker intersect.
Optionally, the image used for identifying the occluded target item and the first image are located in the same image sequence, and the images in the image sequence are arranged in order of acquisition time; when the occluded target item is identified from consecutive multi-frame images, the first image is the frame image immediately preceding the first of the consecutive multi-frame images.
Optionally, the image used for identifying the occluded target item and the second image are located in the same image sequence, and the images in the image sequence are arranged in order of acquisition time; when the occluded target item is identified from consecutive multi-frame images, the second image is the frame image immediately following the last of the consecutive multi-frame images.
Optionally, the determining whether the occluded target item is moved during occlusion according to the first item image and the second item image includes:
obtaining a difference image of the first item image and the second item image, and obtaining a first detection result according to the difference image, wherein the first detection result is used for indicating whether the occluded target item is moved or not during occlusion;
and/or inputting the first item image and the second item image into a preset movement prediction model to obtain a second detection result, wherein the second detection result is used for indicating whether the occluded target item is moved or not during occlusion;
and/or determining a first moving direction of the occluded target item in a preset coordinate system according to the first item image and the second item image, determining moving directions of a plurality of other items existing in the first image and the second image in the preset coordinate system, and obtaining a third detection result according to the first moving direction and the moving directions of the plurality of other items, wherein the third detection result is used for indicating whether the occluded target item is moved during occlusion.
Optionally, the determining, according to the first item image and the second item image, whether the occluded target item is moved during occlusion further includes:
when the first detection result indicates that the occluded target item is moved during occlusion, inputting the first item image and the second item image to a preset movement prediction model to obtain a second detection result, wherein the second detection result is used for indicating whether the occluded target item is moved during occlusion.
Optionally, the determining, according to the first item image and the second item image, whether the occluded target item is moved during occlusion further includes:
when the second detection result indicates that the occluded target item is moved during occlusion, determining a first moving direction of the occluded target item in the preset coordinate system according to the first item image and the second item image, determining moving directions of a plurality of other items in the preset coordinate system, wherein the plurality of other items are both present in the first image and the second image, and obtaining a third detection result according to the first moving direction and the moving directions of the plurality of other items, wherein the third detection result is used for indicating whether the occluded target item is moved during occlusion.
An article movement detection device comprising: a target article identification unit, a first article image obtaining unit, a second article image obtaining unit and an article movement determining unit,
the target object identification unit is used for identifying the shielded target object;
the first item image obtaining unit is used for obtaining a first item image of the occluded target item in a first image, wherein the first image is an image acquired before the occluded target item became occluded, and the first item image is located in a first area in the first image;
the second object image obtaining unit is configured to obtain a second object image of the occluded target object in a second image, where the second image is an image acquired after the occluded target object is no longer occluded, and the second object image is located in the first area in the second image;
the article movement determining unit is used for determining whether the occluded target article is moved during occlusion according to the first article image and the second article image.
Optionally, the target object identifying unit is specifically configured to determine a positional relationship between a first position mark of the target object in the acquired one frame of image and a second position mark of a person in the one frame of image, and when the positional relationship is a preset positional relationship, determine that the target object in the one frame of image is occluded by the person.
By means of the above technical solution, the article movement detection method and device provided by the invention can identify an occluded target item; obtain a first item image of the occluded target item in a first image, wherein the first image is an image acquired before the occluded target item became occluded, and the first item image is located in a first area in the first image; obtain a second item image of the occluded target item in a second image, wherein the second image is an image acquired after the occluded target item is no longer occluded, and the second item image is located in the first area in the second image; and determine, from the first item image and the second item image, whether the occluded target item was moved during occlusion. By performing movement detection only on the target items that are occluded in the image, the embodiment of the invention avoids the heavy system load and waste of computer resources caused by performing movement detection on all items in the image.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flow chart illustrating an article movement detection method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating pixel point pairs provided by an embodiment of the invention;
FIG. 3 shows a schematic diagram of a difference image provided by an embodiment of the invention;
fig. 4 is a schematic structural diagram illustrating an article movement detection apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, an article movement detection method provided in an embodiment of the present invention may include:
and S100, identifying the shielded target object.
The item may be an item that the user needs to identify in the image, for example, an apple, boxed milk, hand cleanser, or edible oil. The user may preset one or more items to be identified. When an item set by the user is determined to be occluded in the image, that item is determined to be the target item.
According to the embodiment of the invention, the object on the image can be identified through the pre-trained object identification model. The item identification model may be a convolutional neural network model. The embodiment of the invention can perform machine learning on the image characteristics of the article in the image to obtain the article identification model.
Optionally, the embodiment of the invention can identify a target item occluded by a person.
Optionally, in the embodiment of the present invention, a position relationship between a first position marker of a target item in a captured frame image and a second position marker of a person in the captured frame image may be determined, and when the position relationship is a preset position relationship, it is determined that the target item in the captured frame image is blocked by the person.
Alternatively, the first position marker and the second position marker may be both position marker points. The preset positional relationship may be: the distance between the first position mark and the second position mark is smaller than the preset position distance.
Optionally, the first position mark and the second position mark are both circumscribed rectangle frames, and the preset position relationship is as follows: the first position marker and the second position marker intersect.
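For illustration, the following is a minimal sketch of the bounding-box intersection test described above, assuming axis-aligned rectangles given as (x1, y1, x2, y2) tuples in pixel coordinates; the data layout and function names are assumptions for illustration, not part of the disclosure:

```python
def boxes_intersect(first_mark, second_mark):
    """Return True if two axis-aligned rectangles (x1, y1, x2, y2) overlap."""
    ix1 = max(first_mark[0], second_mark[0])
    iy1 = max(first_mark[1], second_mark[1])
    ix2 = min(first_mark[2], second_mark[2])
    iy2 = min(first_mark[3], second_mark[3])
    return ix1 < ix2 and iy1 < iy2


def is_occluded_by_person(item_box, person_box):
    """Preset positional relationship: the item is treated as occluded by the
    person when the two circumscribed rectangle frames intersect."""
    return boxes_intersect(item_box, person_box)
```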
The second position mark of the person can be a human target part mark and/or a human eye gazing area mark.
According to the embodiment of the invention, the human target part can be identified by using the human target part detection model obtained by convolutional neural network training, and the position of the human target part mark is determined according to the identified human target part. Alternatively, the human target site may include at least one of an ear, a nose, an eye, a neck, a shoulder, an elbow, a wrist, a waist, a knee, and an ankle.
Optionally, in the embodiment of the present invention, a three-dimensional model of a human head may be established through a three-dimensional coordinate system, and an existing face recognition technology is combined to determine an orientation of human eyes in the three-dimensional model of the human head, so as to determine a human eye gazing area, thereby determining a position of a human eye gazing area mark.
S200, obtaining a first item image of the occluded target item in a first image, wherein the first image is an image acquired before the occluded target item became occluded, and the first item image is located in a first area in the first image.
Optionally, the image used for identifying the occluded target item and the first image are located in the same image sequence, and the images in the image sequence are arranged in order of acquisition time. When the occluded target item is identified from consecutive multi-frame images, the first image is the frame image immediately preceding the first of the consecutive multi-frame images.
S300, obtaining a second item image of the occluded target item in a second image, wherein the second image is an image acquired after the occluded target item is no longer occluded, and the second item image is located in the first area in the second image.
Optionally, the image used for identifying the occluded target item and the second image are located in the same image sequence, and the images in the image sequence are arranged in order of acquisition time. When the occluded target item is identified from consecutive multi-frame images, the second image is the frame image immediately following the last of the consecutive multi-frame images.
According to the embodiment of the invention, when the positional relationship between the first position mark of the target item in a frame image and the second position mark of the person in the frame image is no longer the preset positional relationship, it is determined that the target item in the frame image is no longer occluded.
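As a rough sketch of the frame selection in S200 and S300, assuming the image sequence is available as an ordered list together with a per-frame occlusion flag for the target item (this data layout is an assumption for illustration):

```python
def pick_reference_frames(frames, occluded_flags):
    """Return the frame immediately preceding the first occluded frame (the
    first image) and the frame immediately following the last occluded frame
    (the second image); None if the sequence has no such frame."""
    occluded_indices = [i for i, flag in enumerate(occluded_flags) if flag]
    if not occluded_indices:
        return None, None  # the target item was never occluded
    first, last = occluded_indices[0], occluded_indices[-1]
    first_image = frames[first - 1] if first > 0 else None
    second_image = frames[last + 1] if last + 1 < len(frames) else None
    return first_image, second_image
```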
S400, determining whether the occluded target item is moved during occlusion according to the first item image and the second item image.
Optionally, the embodiment of the present invention may obtain a difference image of the first item image and the second item image, and obtain a first detection result according to the difference image, where the first detection result is used to indicate whether the occluded target item is moved during the occlusion.
Specifically, in the embodiment of the present invention, a plurality of pixel point pairs can be obtained from the first item image and the second item image, each pixel point pair includes one pixel point located in the first item image and one pixel point located in the second item image, and the positions of the pixel points included in the same pixel point pair are matched between the two images. For example, as shown in fig. 2, the first and second item images are each composed of four pixels, and A1 and A2 are a pixel point pair, B1 and B2 are a pixel point pair, C1 and C2 are a pixel point pair, and D1 and D2 are a pixel point pair.
In one pixel point pair, the positions of the pixel points of the first article image in the first image may be the same as the positions of the pixel points of the second article image in the second image.
Optionally, in the embodiment of the present invention, the pixel values of two pixels in each pixel pair may be subtracted to obtain a difference image.
For ease of understanding, the following description is made with reference to fig. 3 on the basis of fig. 2: assuming that the pixel value of pixel point A1 is 111, the pixel value of B1 is 99, the pixel value of C1 is 189, the pixel value of D1 is 237, the pixel value of A2 is 78, the pixel value of B2 is 65, the pixel value of C2 is 136, and the pixel value of D2 is 192, the obtained difference image, as shown in fig. 3, is composed of pixel points having pixel values 33, 34, 53, and 45.
Optionally, in the embodiment of the present invention, a first number of pixel points in the difference image whose values are greater than a preset pixel threshold may be determined, and whether the first number is greater than a preset number threshold is determined; if yes, it is determined that the target item was moved during the occlusion, and if not, it is determined that the target item was not moved during the occlusion.
The preset pixel threshold value can be set according to actual needs. For example: the preset pixel threshold may be 10.
The preset number threshold value can be set according to actual needs. For example: the preset number threshold may be set to 5% of the total number of pixels in the pixel difference image.
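The difference-image check can be sketched as follows, assuming the two item images are single-channel arrays of identical shape cropped from the first area of both frames; the default thresholds mirror the examples given above (10 and 5%) but remain configurable, and the function name is an assumption:

```python
import numpy as np

def first_detection(first_item_image, second_item_image,
                    pixel_threshold=10, count_ratio=0.05):
    """Subtract matched pixel pairs to form the difference image and judge the
    item as moved when enough pixels exceed the preset pixel threshold."""
    diff = np.abs(first_item_image.astype(np.int16)
                  - second_item_image.astype(np.int16))
    first_number = int((diff > pixel_threshold).sum())
    count_threshold = count_ratio * diff.size  # e.g. 5% of all pixels
    return first_number > count_threshold      # True: moved during occlusion
```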
Optionally, in the embodiment of the present invention, the first item image and the second item image may be input to a preset movement prediction model, and a second detection result is obtained, where the second detection result is used to indicate whether the occluded target item is moved during occlusion.
The preset movement prediction model may be a deep convolutional network model designed based on the twin network (Siamese network) principle.
Optionally, in the embodiment of the present invention, when the first detection result indicates that the occluded target item is moved during occlusion, the first item image and the second item image are input to the preset movement prediction model, so as to obtain a second detection result, where the second detection result is used to indicate whether the occluded target item is moved during occlusion.
Specifically, when the first detection result indicates that the target item is moved during the occlusion period, if the movement prediction result output by the preset movement prediction model is greater than a preset movement threshold, the second detection result indicates that the target item is moved during the occlusion period, and if the movement prediction result is not greater than the preset movement threshold, the second detection result indicates that the target item is not moved during the occlusion period. According to the embodiment of the invention, the second detection result is determined by combining the first detection result and the movement prediction result output by the preset movement prediction model, so that the problem of inaccurate judgment on whether the target object is moved or not due to the influence factors such as illumination, foreign matter shielding and noise can be solved.
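A minimal PyTorch sketch of such a Siamese-style movement prediction model and of the cascaded second detection is given below; the layer sizes and the 0.5 movement threshold are placeholder assumptions, not values from the disclosure:

```python
import torch
import torch.nn as nn

class MovementPredictor(nn.Module):
    """Shared CNN encoder for both item images; the head maps the absolute
    difference of the two embeddings to a movement score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, first_item_image, second_item_image):
        a = self.encoder(first_item_image)
        b = self.encoder(second_item_image)
        return self.head(torch.abs(a - b))  # movement prediction result

def second_detection(model, first_item_image, second_item_image,
                     first_result, move_threshold=0.5):
    """Only run the model when the first detection result already says
    'moved'; otherwise keep the 'not moved' verdict.
    Inputs are (1, 3, H, W) float tensors."""
    if not first_result:
        return False
    with torch.no_grad():
        score = model(first_item_image, second_item_image).item()
    return score > move_threshold
```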
Optionally, in the embodiment of the present invention, a first moving direction of the occluded target item in a preset coordinate system may be determined according to the first item image and the second item image, a moving direction of a plurality of other items existing in both the first image and the second image in the preset coordinate system may be determined, and a third detection result may be obtained according to the first moving direction and the moving directions of the plurality of other items, where the third detection result is used to indicate whether the occluded target item is moved during occlusion.
Optionally, when the second detection result indicates that the occluded target item is moved during occlusion, the embodiment of the present invention may determine, according to the first item image and the second item image, a first moving direction of the occluded target item in the preset coordinate system, determine moving directions of a plurality of other items existing in both the first image and the second image in the preset coordinate system, and obtain a third detection result according to the first moving direction and the moving directions of the plurality of other items, where the third detection result is used to indicate whether the occluded target item is moved during occlusion.
Specifically, the embodiment of the present invention may count, for each moving direction in the preset coordinate system, the number of other items moving in that direction, and determine whether the first moving direction of the target item is the same as the moving direction shared by the maximum number of other items. If so, then in the case that this maximum number is greater than a preset number threshold, the third detection result indicates that the target item was not moved during the occlusion, and in the case that this maximum number is not greater than the preset number threshold, the third detection result indicates that the target item was moved during the occlusion.
According to the embodiment of the invention, comparing the first moving direction of the target item with the moving directions of the other items in the same coordinate system can avoid misjudging item movement caused by the image acquisition device shaking while acquiring images.
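A sketch of this direction-consistency check, assuming each moving direction has been quantized to a discrete label (for example, one of eight compass bins) and that the preset number threshold of 3 is an example value:

```python
from collections import Counter

def third_detection(target_direction, other_directions, count_threshold=3):
    """Compare the target item's moving direction with the direction shared by
    the largest group of other items; a large consistent group is attributed
    to camera shake, so the target is judged not moved."""
    if not other_directions:
        return True  # no reference items: keep the 'moved' verdict
    majority_direction, count = Counter(other_directions).most_common(1)[0]
    if target_direction == majority_direction and count > count_threshold:
        return False  # consistent global shift, likely camera shake
    return True       # True: moved during occlusion
```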
It can be understood that, in the embodiment of the present invention, any one of the first detection result, the second detection result, and the third detection result may be selected as a final detection result of whether the target object is moved during the occlusion.
Optionally, the embodiment of the invention may track the target item. According to the embodiment of the invention, the target object can be tracked by using the tracking mode corresponding to the detection result according to the detection result of whether the shielded target object is moved during the shielding period.
Optionally, in the embodiment of the present invention, when it is determined that the occluded target item is moved during occlusion, the target item may be tracked between different images according to the image features of the target item.
Optionally, in the embodiment of the present invention, when it is determined that the occluded target item is not moved during occlusion, the target item may be tracked according to the position mark of the target item in different images.
Optionally, when the position mark is a circumscribed rectangle, the embodiment of the present invention may track the target item according to the intersection ratio (intersection over union) between the position mark of the target item in one frame image and each position mark in another frame image. For example, the item in the other frame image whose position mark has the largest intersection ratio with the position mark of the target item in the one frame image can be determined as the target item, thereby tracking the target item.
Optionally, when the position mark is a mark point, the embodiment of the present invention may track the target item according to the image distance between the position mark of the target item in one frame image and each position mark in another frame image. For example, the item in the other frame image whose position mark is at the minimum distance from the position mark of the target item can be determined as the target item, thereby tracking the target item.
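Both tracking rules can be sketched with a few helper functions; the box and point formats, the function names, and returning the index of the best candidate are assumptions for illustration:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) rectangles."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def track_by_intersection_ratio(target_box, candidate_boxes):
    """Pick the candidate in the other frame with the largest IoU to the target."""
    return max(range(len(candidate_boxes)),
               key=lambda i: iou(target_box, candidate_boxes[i]))

def track_by_distance(target_point, candidate_points):
    """Pick the candidate marker point closest to the target's marker point."""
    return min(range(len(candidate_points)),
               key=lambda i: (candidate_points[i][0] - target_point[0]) ** 2
                           + (candidate_points[i][1] - target_point[1]) ** 2)
```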
The article movement detection method provided by the invention can identify an occluded target item; obtain a first item image of the occluded target item in a first image, wherein the first image is an image acquired before the occluded target item became occluded, and the first item image is located in a first area in the first image; obtain a second item image of the occluded target item in a second image, wherein the second image is an image acquired after the occluded target item is no longer occluded, and the second item image is located in the first area in the second image; and determine, from the first item image and the second item image, whether the occluded target item was moved during occlusion. By performing movement detection only on the target items that are occluded in the image, the embodiment of the invention avoids the heavy system load and waste of computer resources caused by performing movement detection on all items in the image.
Optionally, the embodiment of the present invention may maintain an item library for storing item information of each item. According to the embodiment of the invention, after the target item is identified, the item information of the target item is inquired in the item library, and the item information is added into the item tracking list, so that the target item and the item information of the target item are stored in the item tracking list together.
Alternatively, the item library may be managed manually. For example, the user may enter item information for an item into the item library. Optionally, in the embodiment of the present invention, whether the item information of an item is already stored in the item library may be determined according to the image features of the item; if it is already stored, a prompt indicates that the item information is already stored, and if it is not stored, the item information of the item may be newly created in the item library, so that the same item is prevented from having multiple pieces of item information in the item library.
Optionally, the embodiment of the present invention may identify the first item as a lost item when the first item does not exist in the items identified from the second image, where the first item is one item identified from the first image.
Optionally, in the embodiment of the present invention, when there is no first article in each article identified from the second image and the preset number of images acquired after the second image, the first article is identified as a lost article, where the first article is an article identified from the first image.
When an item is identified as a lost item, the item and item information for the item are deleted from the item tracking list.
Optionally, in the embodiment of the present invention, when a second item does not exist in the items identified from the first image, the second item may be identified as a new item, where the second item is an item identified from the second image.
Optionally, in the embodiment of the present invention, when there is no second item in each item identified from the first image and the preset number of images acquired before the first image, the second item is identified as a new item, where the second item is an item identified from the second image.
When an item is identified as a newly added item, the item and the item information of the item are added to the item tracking list.
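The lost-item and newly-added-item rules above amount to a set difference over the item identities recognized in the two images; a sketch, assuming the items are represented by hashable identifiers:

```python
def detect_lost_and_new_items(items_in_first, items_in_second):
    """Items seen in the first image but not in the second are lost items;
    items seen in the second image but not in the first are newly added."""
    lost_items = set(items_in_first) - set(items_in_second)
    new_items = set(items_in_second) - set(items_in_first)
    return lost_items, new_items
```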
Optionally, in the embodiment of the present invention, when the position relationship between the first position mark of the target item in the frame of image and the second position mark of the person in the frame of image is a preset position relationship, the person-goods related event may be established.
Specifically, the embodiment of the present invention may calculate the overlap duration of the first position mark and the second position mark, and if the overlap duration exceeds a preset overlap duration threshold, establish a person-goods related event according to the human body information corresponding to the person and the item information corresponding to the target item.
If the overlap duration exceeds the preset overlap duration threshold and the target item was not moved during the occlusion, the person-goods related event is determined to be an item browsing event. If the overlap duration exceeds the preset overlap duration threshold and the target item was moved during the occlusion, the person-goods related event is determined to be an item picking event.
Optionally, the embodiment of the present invention may store the established person-goods related event, the user identifier corresponding to the human body, and the identifier of the item, so that the person-goods related event can be queried subsequently.
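A sketch of the event classification, where the 2.0-second overlap duration threshold and the event labels as strings are assumed example choices:

```python
def classify_person_goods_event(overlap_duration, moved_during_occlusion,
                                overlap_threshold=2.0):
    """Establish a person-goods related event only when the position marks
    overlap long enough, then label it from the movement detection result."""
    if overlap_duration <= overlap_threshold:
        return None  # overlap too short: no person-goods related event
    return "item picking event" if moved_during_occlusion else "item browsing event"
```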
Corresponding to the above method embodiment, an article movement detection apparatus provided by the embodiment of the present invention is configured as shown in fig. 4, and may include: a target item identification unit 100, a first item image obtaining unit 200, a second item image obtaining unit 300, and an item movement determination unit 400.
And a target item identification unit 100 for identifying the occluded target item.
Wherein the item may be an item that the user needs to identify in the image.
According to the embodiment of the invention, the object on the image can be identified through the pre-trained object identification model. The item identification model may be a convolutional neural network model. The embodiment of the invention can perform machine learning on the image characteristics of the article in the image to obtain the article identification model.
Optionally, the embodiment of the invention can identify the target object blocked by the person.
Optionally, the target object identifying unit 100 is specifically configured to determine a position relationship between a first position mark of a target object in a captured frame of image and a second position mark of a person in the frame of image, and when the position relationship is a preset position relationship, determine that the target object in the frame of image is occluded by the person.
Alternatively, the first position marker and the second position marker may be both position marker points. The preset positional relationship may be: the distance between the first position mark and the second position mark is smaller than the preset position distance.
Optionally, the first position mark and the second position mark are both circumscribed rectangle frames, and the preset position relationship is as follows: the first position marker and the second position marker intersect.
The second position mark of the person can be a human target part mark and/or a human eye gazing area mark.
According to the embodiment of the invention, the human target part can be identified by using the human target part detection model obtained by convolutional neural network training, and the position of the human target part mark is determined according to the identified human target part. Alternatively, the human target site may include at least one of an ear, a nose, an eye, a neck, a shoulder, an elbow, a wrist, a waist, a knee, and an ankle.
Optionally, in the embodiment of the present invention, a three-dimensional model of a human head may be established through a three-dimensional coordinate system, and an existing face recognition technology is combined to determine an orientation of human eyes in the three-dimensional model of the human head, so as to determine a human eye gazing area, thereby determining a position of a human eye gazing area mark.
The first item image obtaining unit 200 is configured to obtain a first item image of the occluded target item in a first image, where the first image is an image acquired before the occluded target item became occluded, and the first item image is located in a first area in the first image.
Optionally, the image used for identifying the occluded target item and the first image are located in the same image sequence, and the images in the image sequence are arranged in order of acquisition time; when the occluded target item is identified from consecutive multi-frame images, the first image is the frame image immediately preceding the first of the consecutive multi-frame images.
The second object image obtaining unit 300 is configured to obtain a second object image of the occluded target object in a second image, where the second image is an image acquired after the occluded target object is no longer occluded, and the second object image is located in the first area in the second image.
Optionally, the image used for identifying the occluded target item and the second image are located in the same image sequence, and the images in the image sequence are arranged in order of acquisition time; when the occluded target item is identified from consecutive multi-frame images, the second image is the frame image immediately following the last of the consecutive multi-frame images.
According to the embodiment of the invention, when the positional relationship between the first position mark of the target item in a frame image and the second position mark of the person in the frame image is no longer the preset positional relationship, it is determined that the target item in the frame image is no longer occluded.
An item movement determination unit 400 for determining whether the occluded target item is moved during occlusion from the first item image and the second item image.
Optionally, the article movement determining unit 400 may be specifically configured to obtain a difference image of the first article image and the second article image, and obtain a first detection result according to the difference image, where the first detection result is used to indicate whether the occluded target article is moved during the occlusion.
Specifically, in the embodiment of the present invention, a plurality of pixel point pairs can be obtained from the first article image and the second article image, each pixel point pair includes a pixel point located in the first article image and a pixel point located in the second article image, and positions of the pixel points included in the same pixel point pair in the images are matched.
In one pixel point pair, the positions of the pixel points of the first article image in the first image may be the same as the positions of the pixel points of the second article image in the second image.
Optionally, in the embodiment of the present invention, the pixel values of two pixels in each pixel pair may be subtracted to obtain a difference image.
Optionally, in the embodiment of the present invention, a first number of pixel points in the difference image that are greater than a preset pixel threshold may be determined, and whether the first number is greater than a preset number threshold is determined, if yes, it is determined that the target object is moved during the occlusion period, and if not, it is determined that the target object is not moved during the occlusion period.
The preset pixel threshold value can be set according to actual needs.
The preset number threshold value can be set according to actual needs.
Optionally, the item movement determining unit 400 may be specifically configured to input the first item image and the second item image into a preset movement prediction model, and obtain a second detection result, where the second detection result is used to indicate whether the occluded target item is moved during the occlusion period.
The preset movement prediction model may be a deep convolutional network model designed based on the twin network (Siamese network) principle.
Optionally, the article movement determining unit 400 may be specifically configured to, when the first detection result indicates that the occluded target article is moved during occlusion, input the first article image and the second article image to a preset movement prediction model to obtain a second detection result, where the second detection result is used to indicate whether the occluded target article is moved during occlusion.
Specifically, when the first detection result indicates that the target item is moved during the occlusion period, if the movement prediction result output by the preset movement prediction model is greater than a preset movement threshold, the second detection result indicates that the target item is moved during the occlusion period, and if the movement prediction result is not greater than the preset movement threshold, the second detection result indicates that the target item is not moved during the occlusion period. According to the embodiment of the invention, the second detection result is determined by combining the first detection result and the movement prediction result output by the preset movement prediction model, so that the problem of inaccurate judgment on whether the target object is moved or not due to the influence factors such as illumination, foreign matter shielding and noise can be solved.
Optionally, the article movement determining unit 400 may be specifically configured to determine a first movement direction of the occluded target article in the preset coordinate system according to the first article image and the second article image, determine a movement direction of a plurality of other articles existing in both the first image and the second image in the preset coordinate system, and obtain a third detection result according to the first movement direction and the movement direction of the plurality of other articles, where the third detection result is used to indicate whether the occluded target article is moved during occlusion.
Optionally, the article movement determining unit 400 may be specifically configured to, when the second detection result indicates that the occluded target article is moved during occlusion, determine a first movement direction of the occluded target article in the preset coordinate system according to the first article image and the second article image, determine a movement direction of a plurality of other articles existing in both the first image and the second image in the preset coordinate system, and obtain a third detection result according to the first movement direction and the movement directions of the plurality of other articles, where the third detection result is used to indicate whether the occluded target article is moved during occlusion.
Specifically, the embodiment of the present invention may count, for each moving direction in the preset coordinate system, the number of other items moving in that direction, and determine whether the first moving direction of the target item is the same as the moving direction shared by the maximum number of other items. If so, then in the case that this maximum number is greater than a preset number threshold, the third detection result indicates that the target item was not moved during the occlusion, and in the case that this maximum number is not greater than the preset number threshold, the third detection result indicates that the target item was moved during the occlusion.
According to the embodiment of the invention, comparing the first moving direction of the target item with the moving directions of the other items in the same coordinate system can avoid misjudging item movement caused by the image acquisition device shaking while acquiring images.
It can be understood that, in the embodiment of the present invention, any one of the first detection result, the second detection result, and the third detection result may be selected as a final detection result of whether the target object is moved during the occlusion.
Optionally, another article movement detecting device provided in the embodiment of the present invention further includes a target article tracking unit.
And the target item tracking unit is used for tracking the target item. The target item tracking unit may track the target item using a tracking manner corresponding to the detection result according to the detection result of whether the occluded target item is moved during occlusion determined by the item movement determination unit 400.
Optionally, the target item tracking unit is configured to track the target item between different images according to the image feature of the target item when the item movement determination unit 400 determines that the occluded target item is moved during occlusion.
Optionally, the target item tracking unit is configured to track the target item according to the position mark of the target item in the different images when the item movement determination unit 400 determines that the occluded target item is not moved during the occlusion period.
Optionally, when the position mark is a circumscribed rectangle, the target item tracking unit may track the target item according to an intersection ratio of the position mark of the target item on one frame of image and each position mark in another frame of image.
Optionally, when the position mark is a mark point, the target item tracking unit may track the target item according to an image distance between the position mark of the target item on one frame of image and each position mark in another frame of image.
The article movement detection device provided by the invention can identify an occluded target item; obtain a first item image of the occluded target item in a first image, wherein the first image is an image acquired before the occluded target item became occluded, and the first item image is located in a first area in the first image; obtain a second item image of the occluded target item in a second image, wherein the second image is an image acquired after the occluded target item is no longer occluded, and the second item image is located in the first area in the second image; and determine, from the first item image and the second item image, whether the occluded target item was moved during occlusion. By performing movement detection only on the target items that are occluded in the image, the embodiment of the invention avoids the heavy system load and waste of computer resources caused by performing movement detection on all items in the image.
Optionally, another article movement detecting device provided in the embodiment of the present invention may further include: a lost article identification unit.
Optionally, the lost article identification unit may be configured to identify the first article as a lost article when no first article exists in the articles identified from the second image, where the first article is an article identified from the first image.
Optionally, the lost article identification unit may be configured to identify the first article as a lost article when no first article exists in each article identified from the second image and the preset number of images acquired after the second image, where the first article is an article identified from the first image.
Optionally, another article movement detecting device provided in the embodiment of the present invention may further include: and a newly added article identification unit.
Optionally, the newly added article identifying unit may be configured to identify a second article as the newly added article when the second article does not exist in the articles identified from the first image, where the second article is an article identified from the second image.
Optionally, the newly added article identification unit may be configured to identify the second article as the newly added article when there is no second article in each article identified from the first image and the preset number of images acquired before the first image, where the second article is an article identified from the second image.
Optionally, another article movement detecting device provided in the embodiment of the present invention may further include: and a person-goods related event establishing unit.
And the person-goods related event establishing unit is used for establishing a person-goods related event when the position relation between the first position mark of the target object in the frame of image and the second position mark of the person in the frame of image is a preset position relation.
Specifically, the embodiment of the present invention may calculate the overlap duration of the first position mark and the second position mark, and if the overlap duration exceeds a preset overlap duration threshold, establish a person-goods related event according to the human body information corresponding to the person and the item information corresponding to the target item.
If the overlap duration exceeds the preset overlap duration threshold and the target item was not moved during the occlusion, the person-goods related event is determined to be an item browsing event. If the overlap duration exceeds the preset overlap duration threshold and the target item was moved during the occlusion, the person-goods related event is determined to be an item picking event.
Optionally, the embodiment of the present invention may store the established person-goods related event, the user identifier corresponding to the human body, and the identifier of the item, so that the person-goods related event can be queried subsequently.
The article movement detection device comprises a processor and a memory, wherein the target article identification unit 100, the first article image obtaining unit 200, the second article image obtaining unit 300, the article movement determination unit 400 and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels can be provided, and movement detection is performed on the occluded target item in the image by adjusting kernel parameters, thereby solving the problems of heavy system load and wasted computer resources caused by performing movement detection on all items in the image.
An embodiment of the present invention provides a storage medium on which a program is stored, the program implementing the article movement detection method when executed by a processor.
The embodiment of the invention provides a processor, which is used for running a program, wherein the program executes the article movement detection method during running.
The embodiment of the invention provides electronic equipment, which comprises at least one processor, at least one memory and a bus, wherein the memory and the bus are connected with the processor; the processor and the memory complete mutual communication through a bus; the processor is used for calling the program instructions in the memory so as to execute the article movement detection method. The electronic device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application also provides a computer program product which, when executed on an electronic device, is adapted to execute a program initialized with the steps of the above-described article movement detection method.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, electronic devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, an electronic device includes one or more processors (CPUs), memory, and a bus. The electronic device may also include input/output interfaces, network interfaces, and the like.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip. The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An article movement detection method, comprising:
identifying an occluded target item;
obtaining a first item image of the occluded target item in a first image, wherein the first image is an image acquired before the occluded target item became occluded, and the first item image is located in a first area in the first image;
obtaining a second item image of the occluded target item in a second image, wherein the second image is an image acquired after the occluded target item is no longer occluded, and the second item image is located in the first area in the second image;
determining, from the first item image and the second item image, whether the occluded target item was moved during occlusion.
2. The method of claim 1, wherein the identifying the occluded target item comprises:
determining a positional relationship between a first position marker of the target item in an acquired frame of image and a second position marker of a person in the frame of image, and determining that the target item in the frame of image is occluded by the person when the positional relationship is a preset positional relationship.
3. The method of claim 2, wherein the first position marker and the second position marker are both circumscribed rectangular frames, and the preset positional relationship is that the first position marker and the second position marker intersect.
4. The method according to claim 1, wherein the image used for identifying the occluded target item and the first image are located in the same image sequence, and the images in the image sequence are arranged according to the order of their acquisition times; when the occluded target item is identified from consecutive multi-frame images, the first image is the frame image immediately preceding the first frame image of the consecutive multi-frame images.
5. The method according to claim 1, wherein the image used for identifying the occluded target item and the second image are located in the same image sequence, and the images in the image sequence are arranged according to the order of their acquisition times; when the occluded target item is identified from consecutive multi-frame images, the second image is the frame image immediately following the last frame image of the consecutive multi-frame images.
6. The method of claim 1, wherein said determining from said first item image and said second item image whether said occluded target item was moved during occlusion comprises:
obtaining a difference image of the first item image and the second item image, and obtaining a first detection result according to the difference image, wherein the first detection result is used for indicating whether the occluded target item was moved during occlusion;
and/or inputting the first item image and the second item image into a preset movement prediction model to obtain a second detection result, wherein the second detection result is used for indicating whether the occluded target item was moved during occlusion;
and/or determining, according to the first item image and the second item image, a first moving direction of the occluded target item in a preset coordinate system, determining moving directions, in the preset coordinate system, of a plurality of other items present in both the first image and the second image, and obtaining a third detection result according to the first moving direction and the moving directions of the plurality of other items, wherein the third detection result is used for indicating whether the occluded target item was moved during occlusion.
7. The method of claim 6, wherein said determining from said first item image and said second item image whether said occluded target item was moved during occlusion further comprises:
when the first detection result indicates that the occluded target item was moved during occlusion, inputting the first item image and the second item image into a preset movement prediction model to obtain a second detection result, wherein the second detection result is used for indicating whether the occluded target item was moved during occlusion.
8. The method of claim 7, wherein said determining from said first item image and said second item image whether said occluded target item was moved during occlusion further comprises:
when the second detection result indicates that the occluded target item was moved during occlusion, determining, according to the first item image and the second item image, a first moving direction of the occluded target item in the preset coordinate system, determining moving directions of a plurality of other items in the preset coordinate system, wherein the plurality of other items are present in both the first image and the second image, and obtaining a third detection result according to the first moving direction and the moving directions of the plurality of other items, wherein the third detection result is used for indicating whether the occluded target item was moved during occlusion.
9. An article movement detection device, comprising: a target item identification unit, a first item image obtaining unit, a second item image obtaining unit, and an item movement determining unit, wherein
the target item identification unit is used for identifying an occluded target item;
the first item image obtaining unit is used for obtaining a first item image of the occluded target item in a first image, wherein the first image is an image acquired before the occluded target item was occluded, and the first item image is located in a first area in the first image;
the second item image obtaining unit is used for obtaining a second item image of the occluded target item in a second image, wherein the second image is an image acquired after the occluded target item is no longer occluded, and the second item image is located in the first area in the second image;
and the item movement determining unit is used for determining, according to the first item image and the second item image, whether the occluded target item was moved during occlusion.
10. The device according to claim 9, wherein the target item identification unit is specifically configured to determine a positional relationship between a first position marker of the target item in an acquired frame of image and a second position marker of a person in the frame of image, and to determine that the target item in the frame of image is occluded by the person when the positional relationship is a preset positional relationship.
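
The sketches below illustrate, in Python, one way the operations recited in claims 1 to 10 could be realized; they are informal interpretations under stated assumptions, not the patented implementation. This first sketch covers the positional-relationship check of claims 2, 3 and 10, assuming each position marker is an axis-aligned box given as an (x_min, y_min, x_max, y_max) tuple and that occlusion is declared when the two circumscribed rectangular frames intersect:

def boxes_intersect(box_a, box_b):
    """Return True when two axis-aligned boxes (x0, y0, x1, y1) overlap."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1


def item_is_occluded(item_box, person_boxes):
    """Claims 2 and 3: the preset positional relationship is box intersection."""
    return any(boxes_intersect(item_box, person_box) for person_box in person_boxes)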
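
A sketch of how the first and second item images of claims 1, 4 and 5 might be taken from an image sequence: the first image is the frame immediately preceding the first occluded frame, the second image is the frame immediately following the last occluded frame, and both are cropped to the same first area. The names frames (numpy arrays ordered by acquisition time), occluded_flags (per-frame booleans produced by the check above) and first_area are illustrative assumptions:

def crop(frame, box):
    """Cut the region of a frame covered by an (x0, y0, x1, y1) box."""
    x0, y0, x1, y1 = box
    return frame[y0:y1, x0:x1]


def first_and_second_item_images(frames, occluded_flags, first_area):
    """Pick the frames just before and just after the occlusion interval
    and crop both to the same first area (claims 1, 4 and 5)."""
    occluded_indices = [i for i, flag in enumerate(occluded_flags) if flag]
    if not occluded_indices:
        return None, None                  # the target item was never occluded
    first_idx = occluded_indices[0] - 1    # frame preceding the first occluded frame
    second_idx = occluded_indices[-1] + 1  # frame following the last occluded frame
    if first_idx < 0 or second_idx >= len(frames):
        return None, None                  # occlusion starts or ends outside the sequence
    return crop(frames[first_idx], first_area), crop(frames[second_idx], first_area)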
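
A sketch of the first branch of claim 6, deriving the first detection result from a difference image of the two item images; the per-pixel threshold and the changed-pixel ratio are assumptions, since the claim does not specify how the difference image is turned into a result:

import numpy as np


def moved_by_difference(first_item_image, second_item_image,
                        pixel_threshold=30, ratio_threshold=0.05):
    """First branch of claim 6: obtain the first detection result
    from a difference image of the two item images."""
    diff = np.abs(first_item_image.astype(np.int16) -
                  second_item_image.astype(np.int16))  # difference image
    if diff.ndim == 3:                                  # collapse colour channels
        diff = diff.max(axis=2)
    changed_ratio = (diff > pixel_threshold).mean()     # share of changed pixels
    return bool(changed_ratio > ratio_threshold)        # True means "moved"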
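
A sketch of the cascade described in claims 7 and 8, reusing moved_by_difference from the previous sketch: the inexpensive difference check runs first, the preset movement prediction model is consulted only when that check reports movement, and the moving direction of the target item is compared with those of other items only when the model also reports movement. The callables predict_moved_by_model and estimate_moving_direction stand in for the movement prediction model and the coordinate-system computation, and reading the direction comparison as a filter for global motion such as a camera shift is an assumption rather than part of the disclosure:

import numpy as np


def directions_disagree(target_direction, other_directions, angle_threshold_deg=30.0):
    """True when the target item's moving direction differs from the typical
    direction of the other items (an assumed way to build the third result)."""
    def angle(v):
        return np.degrees(np.arctan2(v[1], v[0]))
    if not other_directions:
        return True
    diffs = [abs((angle(target_direction) - angle(d) + 180.0) % 360.0 - 180.0)
             for d in other_directions]
    return float(np.median(diffs)) > angle_threshold_deg


def item_was_moved(first_item_image, second_item_image,
                   predict_moved_by_model, estimate_moving_direction,
                   other_item_pairs):
    # First detection result: difference image (claim 6, first branch).
    if not moved_by_difference(first_item_image, second_item_image):
        return False
    # Second detection result: preset movement prediction model (claim 7).
    if not predict_moved_by_model(first_item_image, second_item_image):
        return False
    # Third detection result: compare the target item's moving direction with
    # those of other items present in both images (claim 8).
    target_dir = estimate_moving_direction(first_item_image, second_item_image)
    other_dirs = [estimate_moving_direction(a, b) for a, b in other_item_pairs]
    return directions_disagree(target_dir, other_dirs)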
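
Finally, a sketch of the device of claims 9 and 10 as four cooperating units; wiring the units as callables on a plain Python class is an illustrative assumption:

class ItemMovementDetector:
    """Claims 9 and 10: the detection device as four cooperating units."""

    def __init__(self, identify_occluded_item, get_first_item_image,
                 get_second_item_image, determine_movement):
        self.identify_occluded_item = identify_occluded_item  # target item identification unit
        self.get_first_item_image = get_first_item_image      # first item image obtaining unit
        self.get_second_item_image = get_second_item_image    # second item image obtaining unit
        self.determine_movement = determine_movement          # item movement determining unit

    def run(self, image_sequence):
        target = self.identify_occluded_item(image_sequence)
        if target is None:
            return False                   # no occluded target item in the sequence
        first_image = self.get_first_item_image(image_sequence, target)
        second_image = self.get_second_item_image(image_sequence, target)
        return self.determine_movement(first_image, second_image)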
CN202011471372.4A 2020-12-14 2020-12-14 Article movement detection method and device Pending CN112529959A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011471372.4A CN112529959A (en) 2020-12-14 2020-12-14 Article movement detection method and device

Publications (1)

Publication Number Publication Date
CN112529959A (en) 2021-03-19

Family

ID=74999671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011471372.4A Pending CN112529959A (en) 2020-12-14 2020-12-14 Article movement detection method and device

Country Status (1)

Country Link
CN (1) CN112529959A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10438277B1 (en) * 2014-12-23 2019-10-08 Amazon Technologies, Inc. Determining an item involved in an event
CN111145430A (en) * 2019-12-27 2020-05-12 北京每日优鲜电子商务有限公司 Method and device for detecting commodity placing state and computer storage medium
CN111223260A (en) * 2020-01-19 2020-06-02 上海智勘科技有限公司 Method and system for intelligently monitoring goods theft prevention in warehousing management
CN111507315A (en) * 2020-06-15 2020-08-07 杭州海康威视数字技术股份有限公司 Article picking and placing event detection method, device and equipment

Similar Documents

Publication Publication Date Title
CN107358149B (en) Human body posture detection method and device
KR20200066371A (en) Event camera-based deformable object tracking
CN105894464B (en) A kind of medium filtering image processing method and device
CN110443210A (en) A kind of pedestrian tracting method, device and terminal
CN107330386A (en) A kind of people flow rate statistical method and terminal device
CN105556539A (en) Detection devices and methods for detecting regions of interest
CN111104925B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN111091025B (en) Image processing method, device and equipment
CN110826610A (en) Method and system for intelligently detecting whether dressed clothes of personnel are standard
CN103761505A (en) Object tracking embodiments
CN111914665A (en) Face shielding detection method, device, equipment and storage medium
Goudelis et al. Fall detection using history triple features
Gomes et al. Visual attention guided features selection with foveated images
CN112380946B (en) Fall detection method and device based on end-side AI chip
CN111563492B (en) Fall detection method, fall detection device and storage device
CN112598703A (en) Article tracking method and device
CN108109175A (en) The tracking and device of a kind of image characteristic point
CN112529959A (en) Article movement detection method and device
CN112183284B (en) Safety information verification and designated driving order receiving control method and device
CN114627542A (en) Eye movement position determination method and device, storage medium and electronic equipment
CN114360015A (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN114333067B (en) Behavior activity detection method, behavior activity detection device and computer readable storage medium
CN112598745A (en) Method and device for determining person-goods associated events
CN108573230A (en) Face tracking method and face tracking device
CN109389089A (en) More people's Activity recognition method and devices based on intelligent algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination