CN113627323B - Image processing method, robot and medium - Google Patents

Image processing method, robot and medium

Info

Publication number
CN113627323B
CN113627323B (application CN202110909107.8A)
Authority
CN
China
Prior art keywords
image
target
sub
state
time point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110909107.8A
Other languages
Chinese (zh)
Other versions
CN113627323A (en)
Inventor
徐卓立
杨亚运
刘玉豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Keenlon Intelligent Technology Co Ltd
Priority to CN202110909107.8A
Publication of CN113627323A
Application granted
Publication of CN113627323B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an image processing method, a robot and a medium. The method comprises the following steps: obtaining and updating a reference background image of a target area; acquiring a first state image of the target area at the current time point, and identifying each target object and its region position in the target area, so as to extract a first sub-image according to the region position; acquiring a second state image of the target area at the next time point, and extracting a second sub-image according to the region position; and determining first state change information of each target object according to the first sub-image and the second sub-image. The technical scheme provided by the embodiment of the invention solves the problem that differing illumination conditions at different acquisition times may affect the accuracy of subsequent image processing, and achieves the effect of improving the accuracy of identifying state changes of target objects in the area.

Description

Image processing method, robot and medium
Technical Field
The embodiment of the invention relates to an image processing technology, in particular to an image processing method, a robot and a medium.
Background
With the development of computer technology, changes in the state of a target object within a fixed area are often determined by means of image recognition, for example, determining whether a target person in the fixed area is walking.
In the prior art, the similarity between different images of the same area is often compared directly to determine whether the state of a target object in the image has changed. However, because illumination conditions differ between acquisitions, the accuracy of the similarity determination may be affected, which reduces the accuracy of identifying state changes of the target object.
Disclosure of Invention
The embodiment of the invention provides an image processing method, a robot and a medium, so as to improve the accuracy of identifying state changes of target objects in an area.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring, through an image acquisition device and every first preset duration, images of a target area that does not include a target object, so as to obtain and update a reference background image of the target area according to a first preset rule;
acquiring an image of the target area through the image acquisition device to obtain a first state image of the target area at the current time point, and identifying each target object according to the first state image and the reference background image;
obtaining the region position of each target object in the target area according to the identification result, and extracting a first sub-image of the corresponding target object from the first state image according to the region position;
acquiring an image of the target area through the image acquisition device to obtain a second state image of the target area at the next time point, and extracting a second sub-image of the corresponding target object from the second state image according to the region position of each target object in the target area;
and determining first state change information of each target object at the region position according to the first sub-image and the second sub-image.
Optionally, updating the reference background image of the target area according to a first preset rule includes:
acquiring a current background image of the target area, and comparing the current background image with the reference background image;
And if the similarity between the current background image and the reference background image is greater than or equal to a preset threshold value, updating the current background image into a new reference background image.
Optionally, the method is applied to an object-delivering robot including an image collector, the next time point is the arrival time point at which the object-delivering robot arrives at a preset task destination, and after extracting the first sub-image of the corresponding target object, the method further includes:
at the arrival time point, acquiring an image of the target area through the image collector to obtain a third state image of the target area;
and extracting a third sub-image of the corresponding target object from the third state image according to the region position so as to update the first sub-image through the third sub-image.
Optionally, image acquisition is performed on the target area by the image collector to obtain a first state image of the target area at the current time point, including:
Acquiring an image of the target area through the image acquisition device to obtain a current state image of the target area;
Determining whether the target object exists in the target area according to the reference background image and the current state image;
If so, the current state image is determined to be the first state image.
Optionally, extracting a first sub-image of the corresponding target object from the first state image according to the region position includes:
Obtaining a first foreground image of the target area according to the reference background image and the first state image;
extracting the first sub-image from the first foreground image according to the region position;
And extracting a second sub-image from the second state image according to the region position of each target object in the target region, including:
obtaining a second foreground image of the target area according to the reference background image and the second state image;
And extracting the second sub-image from the second foreground image according to the region position of each target object in the target region.
Optionally, after determining the first state change information of each target object at the area position, the method further includes:
Responding to the state confirmation operation of the user on each target object, and acquiring state change confirmation information of each target object;
Judging whether the state change confirmation information is consistent with the first state change information;
If not, acquiring the image of the target area through the image acquisition device to obtain a fourth state image of the target area at the state change confirmation time point.
Optionally, determining, according to the first sub-image and the second sub-image, first state change information of each target object at the area position includes:
Determining a similarity comparison result between the first sub-image and the second sub-image; the similarity comparison result is determined according to the brightness comparison result, the contrast comparison result and the structure comparison result of the image color channel;
And determining whether the state of the target object at the position of the area is changed according to whether the similarity comparison result is larger than a preset threshold value.
Optionally, the method is applied to an object-delivering robot including an image collector, the next time point is an arrival time point when the object-delivering robot arrives at a preset task destination, and the image collector is used for collecting the image of the target area to obtain a second state image of the target area at the next time point, including:
And from the arrival time point, carrying out image acquisition on the target area by the image acquisition device every second preset time, so as to obtain and update the second state image according to a second preset rule.
Optionally, when the method is applied to an object-delivering robot including an image collector, determining first state change information of each target object at the area position according to the first sub-image and the second sub-image, including:
Determining a target task object set corresponding to the current task destination from all target objects according to the association relation between the task destination preset by the robot and the target objects;
determining a first sub-image set corresponding to the target task object from the first sub-image according to the target task object set, and determining a second sub-image set corresponding to the target task object from the second sub-image;
and determining first state change information of each target task object in the target task object set at the region position according to the first sub-image set and the second sub-image set.
Optionally, after determining the first state change information of each target task object in the target task object set at the region position, the method further includes:
if the first state change information does not meet the preset state change condition, determining a non-target task object set from all target objects according to the association relation between the task destination preset by the robot and the target objects;
Determining a third sub-image set corresponding to the non-target task object from the first sub-image according to the non-target task object set, and determining a fourth sub-image set corresponding to the non-target task object from the second sub-image;
And determining second state change information of each non-target task object in the non-target task object set at the region position according to the third sub-image set and the fourth sub-image set.
Optionally, the method is applied to an object-delivering robot including an image collector, the current time point is a starting time point of the object-delivering robot for delivering the target object, and the next time point is an arrival time point of the object-delivering robot for reaching a preset task destination;
Correspondingly, before the image acquisition is carried out on the target area through the image acquisition device to obtain the second state image of the target area at the next time point, the method further comprises the following steps:
if a pause event is detected between the starting time point and the arrival time point, acquiring an image of an object placing area through the image acquisition device, and obtaining a pause state image of the object placing area;
extracting a pause sub-image of a corresponding target object from the pause state image according to the region position of each target object in the target region;
If a post-pause start event is detected between the start time point and the arrival time point, acquiring an image of a target area through the image acquisition device to obtain a post-pause start state image of the target area;
Extracting a post-pause start-up sub-image of a corresponding target object from the post-pause start-up state image according to the region position of each target object in the target region;
determining whether the state of the target object at the area position is changed or not according to the pause sub-image and the start sub-image after pause;
If not, determining to execute the operation of acquiring the image of the target area through the image acquisition device to obtain a second state image of the target area at the next time point.
In a second aspect, an embodiment of the present invention further provides a robot, including:
the image collector is used for collecting images;
One or more processors;
Storage means for storing one or more programs,
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method as described above.
In a third aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method as described above.
According to the embodiment of the invention, the reference background image of the target area is updated so that the latest reference background image is obtained, and each target object is identified according to the updated reference background image and the first state image, thereby avoiding the situation in which the lighting at identification time differs from the lighting at the time the reference background image was acquired and interferes with target object identification. Moreover, region images of the target objects at different time points are compared, so that the compared images relate only to the region position of each target object and not to the image outside that region position, which greatly reduces image noise. This solves the problem that differing illumination conditions between acquisitions of images of the same area may affect the accuracy of subsequent image comparison, and achieves the effect of improving the accuracy of identifying state changes of target objects in the area.
Drawings
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present invention;
fig. 2 is a flowchart of an image processing method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a robot according to a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. The method may be performed by the image processing apparatus provided by an embodiment of the present invention, and the apparatus may be implemented in software and/or hardware. Referring to fig. 1, the image processing method provided in this embodiment includes:
Step 110, image acquisition is carried out on a target area which does not comprise a target object through an image acquisition device every first preset time length to obtain and update a reference background image of the target area according to a first preset rule.
The image collector may be a camera device used to acquire images of the target area; it can be arranged at a fixed position to ensure that the target area images acquired at different time points cover the same range. The target area is a designated fixed area, such as the placement area of a delivery robot or the passenger area of an elevator car, and the target object is the object of interest in the target area, such as an item placed in the placement area or a person in the passenger area, which is not limited in this embodiment.
The first preset duration is a preset duration, for example 1 minute. An image of the target area is acquired once every first preset duration while the area does not include a target object, yielding a background image of the target area. A background image is an image acquired when no target object exists in the target area; among all acquired background images, the reference background image is the one used as the reference for subsequent image processing.
Whether a target object exists in the target area can be identified through an image processing algorithm or confirmed manually, and this embodiment is not limited in this respect.
The reference background image of the target area is updated according to the first preset rule, i.e. the reference background image obtained previously is replaced by the one obtained at the current time, so that the latest reference background image is available.
In this embodiment, optionally, updating the reference background image of the target area according to the first preset rule includes:
acquiring a current background image of the target area, and comparing the current background image with the reference background image;
And if the similarity between the current background image and the reference background image is greater than or equal to a preset threshold value, updating the current background image into a new reference background image.
A current background image of the target area is acquired and compared with the reference background image. If the similarity between the current background image and the reference background image is greater than or equal to the preset threshold, the two images are similar, and the current background image is then promoted to be the new reference background image, which keeps the reference background image valid.
If the similarity between the current background image and the reference background image is smaller than the preset threshold, the current background image has probably changed substantially, for example because of a lighting change. In that case the current background image can be discarded so that it does not corrupt the reference background image update; only a sufficiently similar current background image is promoted, which improves the accuracy of the reference background image update.
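By way of illustration only, the update rule can be sketched as follows in Python with OpenCV. The histogram-correlation similarity measure and the threshold value are assumptions made for this sketch; the embodiment does not prescribe a specific similarity measure.

    import cv2

    def maybe_update_reference(current_bg, reference_bg, threshold=0.9):
        # Compare the current background image with the reference background
        # image; promote the current image only when sufficiently similar.
        hist_cur = cv2.calcHist([current_bg], [0, 1, 2], None,
                                [8, 8, 8], [0, 256] * 3)
        hist_ref = cv2.calcHist([reference_bg], [0, 1, 2], None,
                                [8, 8, 8], [0, 256] * 3)
        similarity = cv2.compareHist(hist_cur, hist_ref, cv2.HISTCMP_CORREL)
        return current_bg if similarity >= threshold else reference_bg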
Step 120, acquiring an image of the target area through the image acquisition device to obtain a first state image of the target area at the current time point, and identifying each target object according to the first state image and the reference background image.
The current time point is the time point when the image collector collects a first state image of the target area, and the first state image is an image with the target object. The identification of each target object from the first state image and the reference background image may be by an algorithm such as a background subtraction method, which is not limited in this embodiment.
In this embodiment, optionally, image acquisition is performed on the target area by the image collector to obtain a first state image of the target area at the current time point, including:
Acquiring an image of the target area through the image acquisition device to obtain a current state image of the target area;
Determining whether the target object exists in the target area according to the reference background image and the current state image;
If so, the current state image is determined to be the first state image.
And acquiring an image of the target area through the image acquisition device to obtain a current state image of the target area, wherein the current state image is the image of the target area acquired by the image acquisition device before or at the current time point.
Determining whether a target object exists in the target area according to the reference background image and the current state image may be done by differencing the current state image against the reference background image to obtain a foreground image, and judging whether a target object is present in that foreground image.
If a target object is present, the current state image is determined to be the first state image; if not, the image collector continues to acquire images of the target area. The first state image is thus determined in a timely way, which improves the efficiency of subsequently determining the region position of each target object.
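For illustration, such a background-subtraction presence check might look like the following sketch; the binarization threshold and the minimum contour area are illustrative assumptions.

    import cv2

    def target_present(current_state, reference_background,
                       diff_threshold=30, min_area=500):
        # Difference the current state image against the reference
        # background, binarize the difference, and look for a contour
        # large enough to be a target object.
        diff = cv2.absdiff(current_state, reference_background)
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, diff_threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return any(cv2.contourArea(c) >= min_area for c in contours)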
Step 130, obtaining the region position of each target object in the target area according to the identification result, and extracting a first sub-image of the corresponding target object from the first state image according to the region position.
Obtaining the region position of each target object in the target area according to the identification result may be done by taking the minimum circumscribed rectangle of the contour of each identified target object; the four corner coordinates of that rectangle serve as the region position of the target object in the target area.
Extracting the first sub-image of the corresponding target object from the first state image according to the region position may mean taking the image at the region position within the first state image as the first sub-image of the target object. For example, if the target object is a meal, the first sub-image is the meal image at the region position of that meal within the first state image.
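A sketch of this step in Python with OpenCV follows; an upright bounding rectangle is used for simplicity (cv2.minAreaRect would give the rotated minimum circumscribed rectangle instead), and the minimum contour area is an illustrative assumption.

    import cv2

    def region_positions(foreground_mask, min_area=500):
        # For each sufficiently large contour in the binary foreground
        # mask, take its bounding rectangle (x, y, w, h) as the region
        # position of one target object.
        contours, _ = cv2.findContours(foreground_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]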
Step 140, performing image acquisition on the target area through the image acquisition device to obtain a second state image of the target area at the next time point, and extracting a second sub-image of the corresponding target object from the second state image according to the region position of each target object in the target area.
An image of the target area is acquired through the same image collector to obtain the second state image of the target area at the next time point. The next time point is a time point after the current time point; it may be a preset time point, for example five minutes after the current time point, or a time point triggered when a preset condition is satisfied, which is not limited in this embodiment.
The second sub-image of the corresponding target object is extracted from the second state image according to the region position of each target object in the target region, and the image at the region position in the second state image can be taken as the second sub-image of the target object.
In this embodiment, optionally, extracting a first sub-image of the corresponding target object from the first state image according to the region position includes:
Obtaining a first foreground image of the target area according to the reference background image and the first state image;
extracting the first sub-image from the first foreground image according to the region position;
And extracting a second sub-image from the second state image according to the region position of each target object in the target region, including:
obtaining a second foreground image of the target area according to the reference background image and the second state image;
And extracting the second sub-image from the second foreground image according to the region position of each target object in the target region.
The first foreground image of the target area is obtained by differencing the reference background image and the first state image, and the image at the region position within the first foreground image is taken as the first sub-image.
Likewise, the second foreground image of the target area is obtained by differencing the reference background image and the second state image, and the image at the region position within the second foreground image is taken as the second sub-image.
Extracting the sub-images from the foreground images makes the first and second sub-images more specific to the target object. Images at the same region position are taken as the first and second sub-images and compared, i.e. only the change of the target object at that region position is considered, so the compared images relate only to the region position of the target object and not to the image outside it. This greatly reduces image noise and improves the accuracy of the subsequent determination of the first state change information.
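A minimal sketch of the extraction follows; the plain absolute-difference foregrounding is an illustrative assumption.

    import cv2

    def extract_sub_image(state_image, reference_background, region):
        # Difference the state image against the reference background to
        # obtain the foreground image, then crop the region position.
        x, y, w, h = region
        foreground = cv2.absdiff(state_image, reference_background)
        return foreground[y:y + h, x:x + w]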
Step 150, determining first state change information of each target object at the region position according to the first sub-image and the second sub-image.
Determining the first state change information of each target object at the region position according to the first sub-image and the second sub-image may be done by comparing, for each target object, its first sub-image with its second sub-image and deriving the first state change information from the comparison result. The first state change information reflects the change of the target object's state within the area, and its meaning can follow the type of target object: for example, if the target object is an item, the first state change information may reflect whether the item was taken away; if the target object is a person, it may reflect whether the person is walking.
In this embodiment, optionally, determining, according to the first sub-image and the second sub-image, first state change information of each target object at the location of the area includes:
Determining a similarity comparison result between the first sub-image and the second sub-image; the similarity comparison result is determined according to the brightness comparison result, the contrast comparison result and the structure comparison result of the image color channel;
And determining whether the state of the target object at the position of the area is changed according to whether the similarity comparison result is larger than a preset threshold value.
The similarity comparison result between the first sub-image and the second sub-image is determined from the luminance comparison result, contrast comparison result and structure comparison result of each image color channel. For a single color channel, the specific determination may be as follows:
The luminance comparison result can be obtained from window-based mean filtering of the gray values via a convolution operation; the contrast comparison result can be obtained by computing the variance (standard deviation) of the discrete pixel signal values; and the structure comparison result can be obtained by computing the covariance of the image signals.
The channel comparison result of a single color channel is then determined from the luminance, contrast and structure comparison results, for example by assigning different weights to the three results and combining them into the channel comparison result.
Channel comparison results are obtained for the different color channels, such as the RGB channels, and the mean of the channel comparison results may be taken as the final similarity comparison result. Whether the similarity comparison result is greater than the preset threshold is then judged; if not, the state of the target object at the region position is determined to have changed.
Since the similarity comparison result is determined from the luminance, contrast and structure comparison results of the image color channels, i.e. from multiple aspects, the accuracy and specificity of the similarity comparison result are improved, which in turn improves the accuracy of determining each target object's first state change information at the region position.
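The per-channel luminance/contrast/structure comparison described above matches the structure of the structural similarity index (SSIM). The following Python/OpenCV sketch is one possible realization; the Gaussian window, the standard SSIM constants and the decision threshold are illustrative assumptions.

    import cv2
    import numpy as np

    def channel_ssim(x, y, c1=6.5025, c2=58.5225):
        # Window-based means via Gaussian filtering (the convolution-based
        # mean filtering described above), then variances and covariance.
        x, y = x.astype(np.float64), y.astype(np.float64)
        mu_x = cv2.GaussianBlur(x, (11, 11), 1.5)
        mu_y = cv2.GaussianBlur(y, (11, 11), 1.5)
        var_x = cv2.GaussianBlur(x * x, (11, 11), 1.5) - mu_x * mu_x
        var_y = cv2.GaussianBlur(y * y, (11, 11), 1.5) - mu_y * mu_y
        cov = cv2.GaussianBlur(x * y, (11, 11), 1.5) - mu_x * mu_y
        # Combined luminance/contrast/structure comparison per window.
        ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
                   ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
        return float(ssim_map.mean())

    def similarity(sub1_bgr, sub2_bgr):
        # Average the channel comparison results over the B, G, R channels.
        return float(np.mean([channel_ssim(sub1_bgr[..., c], sub2_bgr[..., c])
                              for c in range(3)]))

    def state_changed(sub1, sub2, threshold=0.8):
        # The state at the region position is considered changed when the
        # similarity comparison result is not greater than the threshold.
        return similarity(sub1, sub2) <= threshold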
In this embodiment, optionally, after determining the first state change information of each target object at the area position, the method further includes:
Responding to the state confirmation operation of the user on each target object, and acquiring state change confirmation information of each target object;
Judging whether the state change confirmation information is consistent with the first state change information;
If not, acquiring the image of the target area through the image acquisition device to obtain a fourth state image of the target area at the state change confirmation time point.
The user may perform the state confirmation operation on each target object by touching a specific key related to state confirmation, or by sending an instruction through an app related to state confirmation, which is not limited in this embodiment. Acquiring the state change confirmation information of each target object may be acquiring the overall state change of all target objects; for example, when the target objects are articles, the state change confirmation information may confirm that all articles have been removed, i.e. that the state of every target object is "removed".
Whether the state change confirmation information is consistent with the first state change information is judged, i.e. whether the state change confirmation information corresponding to each target object matches its first state change information. For example, if the state change confirmation information indicates that all target objects were removed while the first state change information indicates that some target object was not removed, the two are inconsistent.
If the state change confirmation information is inconsistent with the first state change information, an image of the target area is acquired to obtain the fourth state image of the target area at the state change confirmation time point, i.e. the time point at which the user performed the state confirmation operation. The fourth state image is acquired and stored for subsequent analysis. The number of stored fourth state images may be determined by the size of the storage space; when the amount of stored images exceeds the storage space, images with earlier storage times may be deleted.
By judging whether the state change confirmation information is consistent with the first state change information, and acquiring a fourth state image when the user's confirmation disagrees with the image recognition result, it can later be judged from the fourth state image whether the first state change information obtained by the image processing was correct, which facilitates improving the image processing method.
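The rolling storage policy mentioned above could be sketched as follows in Python; the directory name and capacity are illustrative assumptions.

    import os
    import time
    from collections import deque

    import cv2

    class FourthStateStore:
        """Keeps at most `capacity` fourth state images; when full, the
        image with the earliest storage time is deleted first."""

        def __init__(self, directory="fourth_state", capacity=100):
            self.directory, self.capacity = directory, capacity
            self.paths = deque()
            os.makedirs(directory, exist_ok=True)

        def save(self, image):
            path = os.path.join(self.directory,
                                "state_%d.png" % int(time.time() * 1000))
            cv2.imwrite(path, image)
            self.paths.append(path)
            while len(self.paths) > self.capacity:
                os.remove(self.paths.popleft())  # drop the oldest image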
According to the technical scheme provided by this embodiment, the state change of each target object is determined by comparing sub-images of the same position at different time points, so that the compared images relate only to the region position of the target object and not to the image outside it, which greatly reduces image noise. This solves the problem that occurs when the change among target objects is small, for example when there are ten target objects in total and only one of them changes: if the overall similarity of whole-area images at different times were compared directly to judge whether target object states changed, the state change could not be identified accurately because the changed portion is so small.
Moreover, because images of the same area are acquired at different times, differences in illumination conditions, such as illumination intensity, illumination color, the incident angle of the light source and shadowed regions, produce obvious differences between the acquired images, for example large differences in pixel gray values, differences in the shadow regions cast by surrounding objects, and differences in image color gamut. If the overall similarity of whole-area images were compared directly to judge whether a target object's state changed, illumination-induced image differences could yield a low similarity for a target object whose actual state is unchanged, affecting the accuracy of the image processing. Therefore, the latest reference background image of the target area is obtained, and each target object is identified according to the first state image and that reference background image, which avoids the lighting at identification time differing from the lighting at reference background acquisition time and improves the accuracy of target object identification. Also, although the illumination conditions change, the position information corresponding to a target object does not; sub-images of the same position at different time points are subsequently compared to determine each target object's state change, improving the accuracy of identifying state changes of target objects in the area.
Example two
Fig. 2 is a flowchart of an image processing method according to a second embodiment of the present invention. This embodiment refines the processing after the first sub-image of the corresponding target object has been extracted, and is applied to an object-delivering robot that includes an image collector. Compared with the above scheme, this embodiment specifically provides that the next time point is the arrival time point at which the object-delivering robot arrives at a preset task destination, and that after extracting the first sub-image of the corresponding target object, the method further includes:
at the arrival time point, acquiring an image of the target area through the image collector to obtain a third state image of the target area;
and extracting a third sub-image of the corresponding target object from the third state image according to the region position so as to update the first sub-image through the third sub-image.
In this embodiment, the object-delivering robot may be of a non-enclosed compartment type, for example a single-sided or double-sided restaurant delivery robot. While performing a delivery task it must pass through different areas, and if the ambient lighting conditions of those areas, such as light intensity, light color, the incident angle of the light source and shadowed regions, change substantially, the accuracy of image processing may be affected. The image processing method of the embodiment of the invention handles these problems well.
Specifically, the flow chart of the image processing method is shown in fig. 2:
Step 210, performing image acquisition on a target area which does not include a target object through an image acquisition device every first preset duration, so as to obtain and update a reference background image of the target area according to a first preset rule.
Step 220, acquiring an image of the target area through the image acquisition device to obtain a first state image of the target area at the current time point, and identifying each target object according to the first state image and the reference background image.
Step 230, obtaining the region position of each target object in the target region according to the recognition result, and extracting the first sub-image of the corresponding target object from the first state image according to the region position.
Step 240, at the arrival time point, acquiring an image of the target area through the image collector to obtain a third state image of the target area.
The arrival time point is the time point at which the object-delivering robot arrives at the preset task destination; at that moment, an image of the target area is acquired through the image collector, yielding the third state image of the target area. The third state image is captured earlier than the second state image, i.e. the third state image is captured first and the second state image afterwards.
Step 250, extracting a third sub-image corresponding to the target object from the third state image according to the region position, so as to update the first sub-image through the third sub-image.
The reference background image and the third state image may be differenced to obtain a third foreground image of the target area; the image at the region position within the third foreground image is taken as the third sub-image, and the first sub-image is then updated to the third sub-image.
Step 260, performing image acquisition on the target area by using the image acquisition device to obtain a second state image of the target area at the arrival time point, and extracting a second sub-image of the corresponding target object from the second state image according to the area position of each target object in the target area.
In this embodiment, optionally, image acquisition is performed on the target area by the image collector to obtain a second state image of the target area at a next time point, including:
And from the arrival time point, carrying out image acquisition on the target area by the image acquisition device every second preset time, so as to obtain and update the second state image according to a second preset rule.
The second preset duration is a preset duration, for example 1 second. That is, from the arrival time point onward, an image of the target area is acquired once every second preset duration, and the second state image is updated according to the second preset rule. The second preset rule may be to simply replace the previously obtained second state image with the one obtained at the current time, so that the latest second state image is available, which is not limited in this embodiment.
The benefit of this arrangement is that, in situations where the user does not take the objects immediately after the robot reaches the preset task destination, or where not all target objects are for that destination, the second sub-image of the corresponding target object can still be extracted in time from the latest second state image. The first state change information of each target object at the region position is then determined according to the first sub-image and the second sub-image, establishing whether a single target object in the target area was taken or whether several target objects were all taken, which improves the accuracy and efficiency of determining the first state change information and the effectiveness of delivery.
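One way to sketch this periodic re-acquisition in Python follows; capture_frame and the stop condition are hypothetical stand-ins for the robot's camera interface and task logic, and the interval is an illustrative assumption.

    import time

    def latest_second_state(capture_frame, stop_when, interval_s=1.0):
        # From the arrival time point, acquire the target area once every
        # second preset duration; each acquisition replaces the previous
        # second state image until the stop condition holds.
        second_state = capture_frame()
        while not stop_when(second_state):
            time.sleep(interval_s)
            second_state = capture_frame()
        return second_state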
In this embodiment, optionally, the current time point is a start time point at which the object delivery robot delivers the target object;
Correspondingly, before the image acquisition is carried out on the target area through the image acquisition device to obtain the second state image of the target area at the next time point, the method further comprises the following steps:
if a pause event is detected between the starting time point and the arrival time point, acquiring an image of an object placing area through the image acquisition device, and obtaining a pause state image of the object placing area;
extracting a pause sub-image of a corresponding target object from the pause state image according to the region position of each target object in the target region;
If a post-pause start event is detected between the start time point and the arrival time point, acquiring an image of a target area through the image acquisition device to obtain a post-pause start state image of the target area;
Extracting a post-pause start-up sub-image of a corresponding target object from the post-pause start-up state image according to the region position of each target object in the target region;
determining whether the state of the target object at the area position is changed or not according to the pause sub-image and the start sub-image after pause;
If not, determining to execute the operation of acquiring the image of the target area through the image acquisition device to obtain a second state image of the target area at the next time point.
The current time point is the start time point at which the object-delivering robot begins delivering the target object, i.e. the time point at which the robot sets off to deliver it.
A pause event is an event in which the robot pauses while moving; it can be triggered manually or automatically by the robot. Manual triggering may be a person pressing the emergency stop button on the robot, and the like; automatic triggering may be the robot pausing on its own while avoiding a person or an object, and the like. If a pause event is detected between the start time point and the arrival time point, an image of the target area is acquired through the image collector to obtain the pause state image of the target area.
The extracting of the pause sub-image of the corresponding target object from the pause state image may be a pause sub-image having the image at the region position in the pause state image as the target object, according to the region position of each target object in the target region.
A post-pause start event is an event in which the robot starts again after a pause; it, too, can be triggered manually or automatically. Manual triggering may be a person pressing the start button on the robot after the pause, and the like; automatic triggering may be the robot starting again on its own after it has successfully avoided the person or object. If a post-pause start event is detected, an image of the target area is acquired through the image collector to obtain the post-pause start state image of the target area.
The post-pause start sub-image of the corresponding target object is extracted from the post-pause start state image according to the region position of each target object in the target region, and the image at the region position in the post-pause start state image can be used as the post-pause start sub-image of the target object.
Whether the state of each target object at the region position has changed is determined according to the pause sub-image and the post-pause start sub-image: for each target object, its pause sub-image is compared with its post-pause start sub-image, and whether the target object's state changed is determined from the comparison result.
If a state has changed, the target object was moved, manually or otherwise, between the robot's pause and its restart; the target object's state is then known to have changed in advance, and a light or voice prompt can be given or an alert sent to a manager. The prompt can be made specific to the target area corresponding to the target object whose state changed. For example, if state changes of target objects in the target areas of the X-th, Y-th and Z-th layers of the robot are detected in sequence, the prompt may be to display, at preset time intervals and in sequence, a pop-up window for each of the X-th, Y-th and Z-th layers on the robot's display screen; if the state changes in the X-th, Y-th and Z-th layers are detected at the same time, the prompt may be to display a pop-up window reading "changes to the items on layers X, Y and Z detected; the tasks of these layers require manual handling" on the robot's display screen. This embodiment is not limited thereto; such prompts improve specificity and user experience.
If no state at any region position has changed, the method proceeds to acquire an image of the target area including the target object through the image collector and obtain the second state image of the target area at the next time point. In this way, it is confirmed whether a target object's state changed between the robot's pause and its post-pause start; if the first sub-image were compared directly with the post-pause start sub-image, the differing acquisition environments of the two images could mean changed lighting conditions and thus affect the accuracy of the image processing result, so this check improves the specificity of the image processing. Determining whether a target object's state has changed also determines whether the subsequent delivery task can continue, improving the effectiveness of delivery.
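A compact sketch of this pause/restart check in Python follows; the dict layout and threshold are illustrative assumptions, and similarity stands for a comparison function such as the SSIM sketch above.

    def safe_to_continue(pause_subs, resume_subs, similarity, threshold=0.8):
        # pause_subs / resume_subs map each target object id to its pause
        # sub-image and post-pause start sub-image at the same region
        # position; continue the delivery task only if nothing changed.
        return all(similarity(pause_subs[i], resume_subs[i]) > threshold
                   for i in pause_subs)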
Step 270, determining first state change information of each target object at the region position according to the first sub-image and the second sub-image.
The first state change information of each target object at the region position is determined according to the second sub-image and the first sub-image as updated through the third sub-image.
In this embodiment, optionally, determining, according to the first sub-image and the second sub-image, first state change information of each target object at the location of the area includes:
Determining a target task object set corresponding to the current task destination from all target objects according to the association relation between the task destination preset by the robot and the target objects;
determining a first sub-image set corresponding to the target task object from the first sub-image according to the target task object set, and determining a second sub-image set corresponding to the target task object from the second sub-image;
and determining first state change information of each target task object in the target task object set at the region position according to the first sub-image set and the second sub-image set.
The preset task destination is a destination reached by the object delivery robot for delivering the target object, and if the target object is a meal, the task destination may be a dining table. The association relation between the task destination and the target object can be preset, for example, the task destination I is a dining table A, the target objects with the association relation are a first food and a second food, the task destination II is a dining table B, and the target objects with the association relation are a third food and a fourth food.
And determining a target task object set formed by target task objects corresponding to the current task destination from all target objects according to the task destination to which the robot is currently directed. Illustratively, if all target objects are ABCDE and the target object having an association relationship with the current task destination is ABC, the target task object set includes target task object ABC.
And determining a first sub-image set corresponding to the target task object from the first sub-images according to the target task object set, namely acquiring the first sub-images corresponding to the target task objects in the target task object set. And determining a second sub-image set corresponding to the target task object from the second sub-images, namely acquiring the second sub-images corresponding to the target task objects in the target task object set.
The first sub-image in the first sub-image set and the second sub-image in the corresponding second sub-image set are compared for each target task object to obtain its first state change information. For example, the first and second sub-images corresponding to target task object A are compared to obtain the first state change information of target task object A; in this way, the first state change information of all target task objects is obtained.
By determining the first state change information of each target task object in the target task object set at the region position through the first and second sub-image sets corresponding to the target task objects, only the image regions of the target objects associated with the current task destination are compared. When there are multiple task destinations, this avoids comparing all target objects, which would reduce comparison accuracy. Improving the specificity of the comparison improves the accuracy of the comparison result.
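A sketch of this selection step in Python follows; the dict-based association table and the identifiers are illustrative assumptions.

    def select_task_sub_images(task_destination, destination_to_objects,
                               first_sub_images, second_sub_images):
        # destination_to_objects is the preset association between task
        # destinations and target objects, e.g. {"table_A": ["meal_1"]}.
        target_ids = set(destination_to_objects.get(task_destination, []))
        first_set = {i: first_sub_images[i] for i in target_ids}
        second_set = {i: second_sub_images[i] for i in target_ids}
        return first_set, second_set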
In this embodiment, optionally, after determining the first state change information of each target task object in the target task object set at the region position, the method further includes:
if the first state change information does not meet the preset state change condition, determining a non-target task object set from all target objects according to the association relation between the task destination preset by the robot and the target objects;
Determining a third sub-image set corresponding to the non-target task object from the first sub-image according to the non-target task object set, and determining a fourth sub-image set corresponding to the non-target task object from the second sub-image;
And determining second state change information of each non-target task object in the non-target task object set at the region position according to the third sub-image set and the fourth sub-image set.
The preset state change condition is used for determining the state change condition of the target object, and may be that the first state change information of each target task object in the target task object set is taken away, which is not limited in this embodiment.
If the first state change information does not meet the preset state change condition, the first state change information of at least one target task object indicates that it was not taken away, i.e. the object-taking task failed. A non-target task object set is then determined from all target objects according to the preset association between task destinations and target objects.
And determining a non-target task object set formed by non-target task objects irrelevant to the current task destination from all target objects according to the task destination to which the robot is currently destined. Illustratively, if all target objects are ABCDE and target objects having an association relationship with the current task destination are ABC, the non-target task object set includes the non-target task object DE.
According to the non-target task object set, the third sub-image set corresponding to the non-target task objects is determined from the first sub-images, i.e. the first sub-image corresponding to each non-target task object in the non-target task object set is acquired. The fourth sub-image set corresponding to the non-target task objects is determined from the second sub-images, i.e. the second sub-image corresponding to each non-target task object in the set is acquired.
And comparing the first sub-image in the third sub-image set with the second sub-image in the corresponding fourth sub-image set respectively to obtain second state change information of each non-target task object in the non-target task object set. The first sub-image and the second sub-image corresponding to the non-target task object E are compared to obtain second state change information of the non-target task object E. Thereby obtaining second state change information for all non-target task objects.
When the first state change information does not meet the preset state change condition, the second state change information of the non-target task objects in the non-target task object set at the region position is determined, and the reason why the first state change information does not meet the preset state change condition is derived from the second state change information. For example, if the second state change information of at least one non-target task object indicates that it was taken away, the reason may be that the user at the current task destination took a non-target task object by mistake instead of the target task object.
For example, the current task destination is dining table A with corresponding target objects of a first and a second meal, and the subsequent task destination is dining table B with corresponding target objects of a third and a fourth meal; the target task objects are then the first and second meals, and the non-target task objects are the third and fourth meals. If it is determined that the user at dining table A took away the third or fourth meal, the user has taken a non-target task object by mistake. The robot may then prompt by voice or on-screen display, for example broadcasting "You have taken the wrong item, please put it back" or "Please take the item at position xx", to prompt the user to put back the mistakenly taken non-target task object and take away the target task object.
By comparing not only the image regions of the target objects associated with the current task destination but also those of the target objects unrelated to it, the reason why the first state change information fails the preset state change condition can subsequently be determined, and interference with the tasks of other task destinations can be avoided, thereby improving the comprehensiveness of the image processing and the effectiveness of the delivery robot's task execution.
When the arrival time point is reached, the image collector captures the target area to obtain a third state image of the target area; a third sub-image corresponding to each target object is then extracted from the third state image according to the region position, so that the first sub-image is updated with the third sub-image. Updating the first sub-image to an image acquired at the arrival time point ensures that the sub-images acquired from the arrival time point onwards are compared against a sub-image captured in the same acquisition environment. This avoids the situation where lighting conditions change between the current time point and the arrival time point and degrade the accuracy of the image processing result, and thus improves the pertinence and accuracy of the subsequent image processing.
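A minimal sketch of this refresh step, assuming region positions are axis-aligned boxes (x, y, w, h) and capture_image() is a hypothetical wrapper around the robot's image collector:

def extract_sub_images(state_image, region_positions):
    # Crop one sub-image per target object from a full state image
    # (state_image is a numpy-style array indexed as [row, column]).
    return {obj_id: state_image[y:y + h, x:x + w]
            for obj_id, (x, y, w, h) in region_positions.items()}

def refresh_first_sub_images_on_arrival(capture_image, region_positions):
    # Re-capture the target area on arrival (third state image) and let the
    # resulting sub-images replace the first sub-images.
    third_state_image = capture_image()
    return extract_sub_images(third_state_image, region_positions)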
Example III
Fig. 3 is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present invention. The apparatus can be implemented in hardware and/or software, can execute the image processing method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to the executed method. As shown in Fig. 3, the apparatus includes:
The reference background image updating and acquiring module 310 is configured to acquire, by an image collector, images of a target area that does not include a target object every first preset duration, and to obtain and update, according to a first preset rule, a reference background image of the target area;
The target object identifying module 320 is configured to acquire an image of the target area through the image acquirer, obtain a first state image of the target area at a current time point, and identify each target object according to the first state image and the reference background image;
A first sub-image extraction module 330, configured to obtain, according to the recognition result, a region position of each of the target objects in the target region, and extract, according to the region position, a first sub-image of the corresponding target object from the first state image;
a second sub-image extraction module 340, configured to acquire, by using the image acquirer, an image of the target area, obtain a second status image of the target area at a next time point, and extract, according to an area position of each target object in the target area, a second sub-image of a corresponding target object from the second status image;
The first state change information determining module 350 is configured to determine, according to the first sub-image and the second sub-image, first state change information of each target object at the region position.
In the technical solution provided by this embodiment, the state change of each target object is determined by comparing sub-images of the same position taken at different time points. This solves the problem that, when the change among the target objects is small (for example, only one of ten target objects changes), directly comparing the overall similarity of images of the whole area to judge whether a target object's state has changed cannot accurately identify the change, because the changed region is too small a fraction of the image.
Moreover, because images of the same area are collected at different times, differences in illumination conditions, such as illumination intensity, illumination colour, the incident angle of the light source and shadow occlusion, cause obvious differences between the collected images: large differences in pixel grey values, differences in the shadow regions cast by surrounding objects under the light source, differences in image colour gamut, and so on. If the overall similarity of the images is compared directly to judge whether the state of a target object has changed, such illumination-induced differences may yield a low similarity even for a target object whose actual state is unchanged, harming the accuracy of the image processing. Therefore, the latest reference background image of the target area is obtained, and each target object is identified according to the first state image and the reference background image, which avoids the problem that the lighting at identification time differs from the lighting at the time the reference background image was collected, and improves the accuracy of target object identification. Furthermore, since a change in illumination does not change the position information of a target object, sub-images of the same position at different time points are subsequently compared to determine the state change of each target object, improving the accuracy of state change identification within the area.
On the basis of the above technical solutions, optionally, the reference background image updating and acquiring module includes:
The background image comparison unit is used for acquiring a current background image of the target area and comparing the current background image with the reference background image;
And the background image updating unit is used for updating the current background image into a new reference background image if the similarity between the current background image and the reference background image is greater than or equal to a preset threshold value.
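A minimal sketch of this update rule; the similarity function and the threshold value are assumptions for illustration, not prescribed by the embodiment:

def maybe_update_reference(current_background, reference_background,
                           similarity, threshold=0.9):
    # High similarity suggests the scene is empty and stable, so the fresh
    # capture can safely become the new reference background image.
    if similarity(current_background, reference_background) >= threshold:
        return current_background
    return reference_background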
Based on the above technical solutions, optionally, the apparatus is applied to an object delivery robot including an image collector, where the next time point is an arrival time point when the object delivery robot arrives at a preset task destination, and the apparatus further includes:
The third state image acquisition module is used for, after the first sub-image extraction module and when the arrival time point is reached, acquiring an image of the target area through the image collector to obtain a third state image of the target area;
And the third sub-image extraction module is used for extracting a third sub-image of the corresponding target object from the third state image according to the region position so as to update the first sub-image through the third sub-image.
On the basis of the above technical solutions, optionally, the target object identification module includes:
The current state image acquisition unit is used for acquiring an image of the target area through the image collector to obtain a current state image of the target area;
a target object determining unit configured to determine whether the target object exists in the target area according to the reference background image and the current state image;
and the first state image determining unit is used for determining the current state image as the first state image if the target object determining unit determines that the target object exists.
On the basis of the above technical solutions, optionally, the first sub-image extraction module includes:
A first foreground image obtaining unit, configured to obtain a first foreground image of the target area according to the reference background image and the first state image;
a first sub-image extraction unit, configured to extract the first sub-image from the first foreground image according to the region position;
and the second sub-image extraction module includes:
a second foreground image obtaining unit, configured to obtain a second foreground image of the target area according to the reference background image and the second state image;
and a second sub-image extraction unit, configured to extract the second sub-image from the second foreground image according to the region position of each target object in the target region.
On the basis of the above technical solutions, optionally, the apparatus further includes:
the state change confirmation information acquisition module is used for responding to the state confirmation operation of the user on each target object and acquiring the state change confirmation information of each target object;
the information consistency judging module is used for judging whether the state change confirmation information is consistent with the first state change information or not;
And the fourth state image acquisition module is used for acquiring an image of the target area through the image collector to obtain a fourth state image of the target area at the state change confirmation time point if the information consistency judging module determines that they are not consistent.
On the basis of the above technical solutions, optionally, the first state change information determining module includes:
a similarity comparison result determining unit configured to determine a similarity comparison result between the first sub-image and the second sub-image; the similarity comparison result is determined according to the brightness comparison result, the contrast comparison result and the structure comparison result of the image color channel;
And the target object state change determining unit is used for determining whether the state of the target object at the area position is changed according to whether the similarity comparison result is larger than a preset threshold value.
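The luminance, contrast and structure comparison described above has the same shape as the SSIM index. A minimal per-channel sketch follows; the constants and the threshold are conventional SSIM choices for 8-bit images, assumed here rather than taken from the disclosure:

import numpy as np

def ssim_channel(a, b, c1=6.5025, c2=58.5225):
    # Global SSIM of two equally sized single-channel images.
    a, b = a.astype(np.float64), b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()                 # luminance terms
    var_a, var_b = a.var(), b.var()                 # contrast terms
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()       # structure term
    return ((2 * mu_a * mu_b + c1) * (2 * cov_ab + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def sub_images_similar(sub1, sub2, threshold=0.85):
    # Average SSIM over the colour channels; above the threshold the
    # target object's state is considered unchanged.
    score = np.mean([ssim_channel(sub1[..., c], sub2[..., c])
                     for c in range(sub1.shape[-1])])
    return score > threshold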
On the basis of the above technical solutions, optionally, when the apparatus is applied to an object delivery robot including an image collector and the next time point is the arrival time point at which the object delivery robot reaches a preset task destination, the second sub-image extraction module includes:
The second state image acquisition and updating unit is used for, starting from the arrival time point, acquiring images of the target area through the image collector every second preset duration, to obtain and update the second state image according to a second preset rule.
Based on the above technical solutions, optionally, when applied to an object delivery robot including an image collector, the first state change information determining module includes:
The target task object set determining unit is used for determining a target task object set corresponding to the current task destination from all target objects according to the association relation between the task destination preset by the robot and the target objects;
A first sub-image set determining unit, configured to determine a first sub-image set corresponding to a target task object from the first sub-images according to the target task object set, and determine a second sub-image set corresponding to the target task object from the second sub-images;
and the first state change information determining unit is used for determining first state change information of each target task object in the target task object set at the region position according to the first sub-image set and the second sub-image set.
On the basis of the above technical solutions, optionally, the apparatus further includes:
The non-target task object set determining unit is used for, after the first state change information determining unit, determining a non-target task object set from all target objects according to the preset association relationship between the robot's task destinations and the target objects if the first state change information does not meet the preset state change condition;
A second sub-image set determining unit, configured to determine a third sub-image set corresponding to a non-target task object from the first sub-image according to the non-target task object set, and determine a fourth sub-image set corresponding to a non-target task object from the second sub-image;
And the second state change information determining unit is used for determining second state change information of each non-target task object in the non-target task object set at the position of the area according to the third sub-image set and the fourth sub-image set.
Based on the above technical solutions, optionally, the apparatus is applied to an object delivery robot including an image collector, where the current time point is the start time point at which the object delivery robot sets out to deliver the target objects, and the next time point is the arrival time point at which it reaches a preset task destination;
Correspondingly, the apparatus further comprises:
The pause state image acquisition module is used for, before the second sub-image extraction module, acquiring an image of the object placing area through the image collector if a pause event is detected between the start time point and the arrival time point, so as to obtain a pause state image of the object placing area;
A pause sub-image extraction module, configured to extract a pause sub-image of a corresponding target object from the pause state image according to a region position of each target object in the target region;
The post-pause start state image acquisition module is used for acquiring an image of the target area through the image collector if a post-pause start event is detected between the start time point and the arrival time point, so as to obtain a post-pause start state image of the target area;
the post-pause start sub-image extraction module is used for extracting a post-pause start sub-image of a corresponding target object from the post-pause start state image according to the region position of each target object in the target region;
A target object state change determining module, configured to determine whether the state of the target object at the region position has changed according to the pause sub-image and the post-pause start sub-image;
And the acquisition operation determining and executing module is used for determining to execute the operation of acquiring an image of the target area through the image collector to obtain a second state image of the target area at the next time point, if the target object state change determining module determines that the state has not changed.
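A minimal sketch of the pause/resume check; capture_image and extract_sub_images are as in the earlier sketches, and compare is assumed to return True when two sub-images are similar:

def state_unchanged_during_pause(capture_image, region_positions, compare):
    # Sub-images captured at the pause event ...
    pause_subs = extract_sub_images(capture_image(), region_positions)
    # ... are compared with sub-images captured at the post-pause start event.
    resume_subs = extract_sub_images(capture_image(), region_positions)
    # True means no object changed during the pause, so the normal capture of
    # the second state image at the next time point may proceed.
    return all(compare(pause_subs[k], resume_subs[k]) for k in pause_subs)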
Example IV
Fig. 4 is a schematic structural diagram of a robot according to a fourth embodiment of the present invention. As shown in Fig. 4, the robot includes a processor 40, a memory 41, an input device 42 and an output device 43; the number of processors 40 in the robot may be one or more, one processor 40 being taken as an example in Fig. 4. The processor 40, the memory 41, the input device 42 and the output device 43 in the robot may be connected by a bus or other means; a bus connection is taken as an example in Fig. 4. The robot further comprises an image collector (not shown in the figure) for collecting images.
The memory 41 is a computer-readable storage medium that can be used to store a software program, a computer-executable program, and modules, such as program instructions/modules corresponding to the image processing method in the embodiment of the present invention. The processor 40 executes various functional applications of the robot and data processing, that is, implements the above-described image processing method, by running software programs, instructions, and modules stored in the memory 41.
The memory 41 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the terminal, etc. In addition, memory 41 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 41 may further include memory remotely located relative to processor 40, which may be connected to the robot via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Example V
A fifth embodiment of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform an image processing method, the method comprising:
Image acquisition is carried out on a target area which does not comprise a target object through an image acquisition device every first preset time length to obtain and update a reference background image of the target area according to a first preset rule;
acquiring an image of the target area through the image acquisition device to obtain a first state image of the target area at the current time point, and identifying each target object according to the first state image and the reference background image;
obtaining the region position of each target object in the target region according to the identification result, and extracting a first sub-image of the corresponding target object from the first state image according to the region position;
Acquiring an image of the target area through the image acquisition device to obtain a second state image of the target area at the next time point, and extracting a second sub-image of the corresponding target object from the second state image according to the area position of each target object in the target area;
And determining first state change information of each target object at the region position according to the first sub-image and the second sub-image.
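Read together, these steps form a single pass of the pipeline. A minimal sketch, reusing the hypothetical helpers from the earlier sketches (detect_objects and compare are assumed names, not the patent's API):

def image_processing_pipeline(capture_image, reference_background,
                              detect_objects, compare):
    # First state image at the current time point; the reference background
    # is used to identify the target objects and their region positions.
    first_state = capture_image()
    region_positions = detect_objects(first_state, reference_background)
    first_subs = extract_sub_images(first_state, region_positions)

    # Second state image at the next time point, cropped at the same positions.
    second_state = capture_image()
    second_subs = extract_sub_images(second_state, region_positions)

    # First state change information: True means the object was taken away.
    return {obj_id: not compare(first_subs[obj_id], second_subs[obj_id])
            for obj_id in region_positions}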
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform the related operations in the image processing method provided in any embodiment of the present invention.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general-purpose hardware, or of course by hardware alone, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (Random Access Memory, RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method according to the embodiments of the present invention.
It should be noted that, in the above-described embodiment of the image processing apparatus, each unit and module included is divided according to the functional logic only, but is not limited to the above-described division, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (10)

1. An image processing method, applied to an object delivery robot including an image collector, comprising:
image acquisition is carried out on a target area which does not comprise a target object through the image acquisition device every first preset time length to obtain and update a reference background image of the target area according to a first preset rule;
Acquiring an image of the target area through the image acquisition device to obtain a first state image of the target area at the current time point, and identifying each target object according to the first state image and the reference background image; wherein the first status image is an image in which the target object exists;
obtaining the region position of each target object in the target region according to the identification result, and extracting a first sub-image corresponding to the target object from the first state image according to the region position;
Acquiring an image of the target area through the image acquisition device to obtain a second state image of the target area at the next time point, and extracting a second sub-image corresponding to the target object from the second state image according to the area position of each target object in the target area;
Determining a target task object set corresponding to the current task destination from all target objects according to the association relation between the preset task destination and the target objects;
Determining a first sub-image set corresponding to the target task object from the first sub-image according to the target task object set, and determining a second sub-image set corresponding to the target task object from the second sub-image;
Determining first state change information of each target task object in the target task object set at the region position according to the first sub-image set and the second sub-image set; wherein the first state change information is used for indicating whether the object at the region position has been taken away;
If the first state change information does not meet the preset state change condition, determining a non-target task object set from all the target objects according to the association relation between the preset task destination and the target objects;
Determining a third sub-image set corresponding to the non-target task object from the first sub-image according to the non-target task object set, and determining a fourth sub-image set corresponding to the non-target task object from the second sub-image;
And determining second state change information of each non-target task object in the non-target task object set at the region position according to the third sub-image set and the fourth sub-image set.
2. The method of claim 1, wherein updating the reference background image of the target area according to the first preset rule comprises:
acquiring a current background image of the target area, and comparing the current background image with the reference background image;
And if the similarity between the current background image and the reference background image is greater than or equal to a preset threshold value, updating the current background image into a new reference background image.
3. The method according to claim 1, wherein the next time point is an arrival time point at which the object-delivering robot arrives at a preset task destination, and further comprising, after the extracting the first sub-image corresponding to the target object:
When the arrival time point is reached, performing image acquisition on the target area through the image collector to obtain a third state image of the target area;
And extracting a third sub-image corresponding to the target object from the third state image according to the region position so as to update the first sub-image through the third sub-image.
4. The method according to claim 1, wherein the acquiring, by the image acquirer, the first state image of the target area at the current point in time, includes:
Acquiring an image of the target area through the image acquisition device to obtain a current state image of the target area;
Determining whether the target object exists in the target area according to the reference background image and the current state image;
If so, the current state image is determined to be the first state image.
5. The method of claim 1, wherein the extracting a first sub-image corresponding to the target object from the first status image according to the region position comprises:
Obtaining a first foreground image of the target area according to the reference background image and the first state image;
extracting the first sub-image from the first foreground image according to the region position;
And extracting a second sub-image corresponding to the target object from the second state image according to the region position of each target object in the target region, including:
obtaining a second foreground image of the target area according to the reference background image and the second state image;
And extracting the second sub-image from the second foreground image according to the region position of each target object in the target region.
6. The method of claim 1, further comprising, after said determining the first state change information for each of the target task objects in the set of target task objects at the region location:
Responding to the state confirmation operation of the user on each target object, and acquiring state change confirmation information of each target object;
Judging whether the state change confirmation information is consistent with the first state change information;
If not, acquiring the image of the target area through the image acquisition device to obtain a fourth state image of the target area at the state change confirmation time point.
7. The method according to claim 1, wherein the next time point is an arrival time point at which the object delivery robot arrives at a preset task destination, and the acquiring, by the image collector, an image of the target area to obtain a second state image of the target area at the next time point comprises:
and from the arrival time point, carrying out image acquisition on the target area through the image acquisition device every second preset time, and obtaining and updating the second state image according to a second preset rule.
8. The method according to claim 1, wherein the current time point is a start time point of the object delivery robot delivering the target object, and the next time point is an arrival time point of the object delivery robot reaching a preset task destination;
Correspondingly, before the image acquisition is performed on the target area by the image acquisition device to obtain the second state image of the target area at the next time point, the method further comprises:
if a pause event is detected between the starting time point and the arrival time point, acquiring an image of an object placing area through the image acquisition device, and obtaining a pause state image of the object placing area;
extracting a pause sub-image corresponding to the target object from the pause state image according to the region position of each target object in the target region;
If a post-pause start event is detected between the start time point and the arrival time point, the image acquisition is carried out on the target area through the image acquisition device, and a post-pause start state image of the target area is obtained;
Extracting a post-pause start sub-image corresponding to the target object from the post-pause start state image according to the region position of each target object in the target region;
determining whether the state of the target object at the area position is changed or not according to the pause sub-image and the start sub-image after pause;
If not, determining to execute the operation of acquiring the image of the target area through the image acquisition device to obtain a second state image of the target area at the next time point.
9. A robot, comprising:
the image collector is used for collecting images;
One or more processors;
Storage means for storing one or more programs,
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the image processing method according to any one of claims 1-8.
CN202110909107.8A 2021-08-09 2021-08-09 Image processing method, robot and medium Active CN113627323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110909107.8A CN113627323B (en) 2021-08-09 2021-08-09 Image processing method, robot and medium

Publications (2)

Publication Number Publication Date
CN113627323A CN113627323A (en) 2021-11-09
CN113627323B (en) 2024-05-07

Family

ID=78383660

Country Status (1)

Country Link
CN (1) CN113627323B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114290337B (en) * 2022-01-28 2024-03-26 北京云迹科技股份有限公司 Robot control method and device, electronic equipment and storage medium
CN116228698B (en) * 2023-02-20 2023-10-27 北京鹰之眼智能健康科技有限公司 Filler state detection method based on image processing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6090786B2 (en) * 2013-05-09 2017-03-08 国立大学法人 東京大学 Background difference extraction apparatus and background difference extraction method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106507040A (en) * 2016-10-26 2017-03-15 浙江宇视科技有限公司 The method and device of target monitoring
CN109963114A (en) * 2017-12-14 2019-07-02 湖南餐启科技有限公司 One kind is had dinner detection device, method, server and system
CN111860070A (en) * 2019-04-30 2020-10-30 百度时代网络技术(北京)有限公司 Method and device for identifying changed object
CN110989600A (en) * 2019-12-10 2020-04-10 北京云迹科技有限公司 Delivery method and device
CN112613358A (en) * 2020-12-08 2021-04-06 浙江三维万易联科技有限公司 Article identification method, article identification device, storage medium, and electronic device
CN112613456A (en) * 2020-12-29 2021-04-06 四川中科朗星光电科技有限公司 Small target detection method based on multi-frame differential image accumulation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a visual tracking system for an excavator robot based on long short-term memory networks; Ding Pan et al.; Machinery Manufacturing & Automation (《机械制造与自动化》); 2019-12-31; pp. 1-5 *

Also Published As

Publication number Publication date
CN113627323A (en) 2021-11-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant