CN115091461A - Method, device and system for detecting articles in robot cabin - Google Patents

Method, device and system for detecting articles in robot cabin

Info

Publication number
CN115091461A
Authority
CN
China
Prior art keywords
cabin
image
images
model
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210828487.7A
Other languages
Chinese (zh)
Inventor
兰婷婷 (Lan Tingting)
张瑞琪 (Zhang Ruiqi)
曾祥永 (Zeng Xiangyong)
支涛 (Zhi Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Yunji Intelligent Technology Co Ltd
Original Assignee
Henan Yunji Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Yunji Intelligent Technology Co Ltd
Priority to CN202210828487.7A
Publication of CN115091461A
Legal status: Withdrawn

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 - Manipulators not otherwise provided for
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J9/161 - Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1628 - Programme controls characterised by the control loop
    • B25J9/163 - Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to the technical field of robots, and provides a method, an apparatus, and a system for detecting articles in a robot cabin. The method comprises: acquiring an in-cabin image of the storage compartment of a delivery robot, the in-cabin image being captured by a camera arranged inside the storage compartment; inputting the in-cabin image into a preset empty-cabin determination model to obtain a determination result; and, in response to the determination result indicating that the storage compartment is not empty, inputting the in-cabin image into a preset article detection model to obtain a detection result. This technical solution addresses the prior-art problem that the articles in the storage compartment of a delivery robot cannot be detected and article data cannot be statistically analyzed.

Description

Method, device and system for detecting articles in robot cabin
Technical Field
The disclosure relates to the technical field of robots, in particular to a method, a device and a system for detecting articles in a robot cabin.
Background
In a hotel application scenario, the hotel operator uses a delivery robot to deliver items to guests, but does not know the categories of the items the robot delivers.
The items delivered by the delivery robot are generally purchased by guests from outside the hotel, by courier or food delivery, often because the goods offered by the hotel do not meet their needs. In the related art, the delivery robot cannot automatically identify the objects placed in its storage compartment, so the hotel operator cannot compile or analyze statistics on the delivered items, cannot learn guests' item demands and purchasing tendencies, and therefore cannot perform accurate stocking and accurate selling.
How to detect the articles in the storage compartment of a delivery robot and compile statistics on the article data is a technical problem that urgently needs to be solved.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a method and an apparatus for detecting articles in a robot cabin, an electronic device, and a computer-readable storage medium, so as to solve the prior-art problem that the articles in the storage compartment of a delivery robot cannot be detected and article data cannot be statistically analyzed.
In a first aspect of the embodiments of the disclosure, a method for detecting articles in a robot cabin is provided, including: acquiring an in-cabin image of the storage compartment of a delivery robot, the in-cabin image being captured by a camera arranged inside the storage compartment; inputting the in-cabin image into a preset empty-cabin determination model to obtain a determination result; and, in response to the determination result indicating that the storage compartment is not empty, inputting the in-cabin image into a preset article detection model to obtain a detection result.
In a second aspect of the embodiments of the present disclosure, an apparatus for detecting articles in a robot cabin is provided, including: an image acquisition module configured to acquire an in-cabin image of the storage compartment of a delivery robot, the in-cabin image being captured by a camera arranged inside the storage compartment; a determination module configured to input the in-cabin image into a preset empty-cabin determination model to obtain a determination result; and a detection module configured to, in response to the determination result indicating that the storage compartment is not empty, input the in-cabin image into a preset article detection model to obtain a detection result.
In a third aspect of the embodiments of the disclosure, a system for detecting articles in a robot cabin is provided, including: a camera located inside the storage compartment of a delivery robot and used to capture images inside the compartment; and an article detection device used to acquire the in-cabin image of the storage compartment captured by the camera, input the in-cabin image into a preset empty-cabin determination model to obtain a determination result, and, in response to the determination result indicating that the storage compartment is not empty, input the in-cabin image into a preset article detection model to obtain a detection result.
In a fourth aspect of the embodiments of the present disclosure, there is provided an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In a fifth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor implements the steps of the above method.
Compared with the prior art, the embodiments of the disclosure have the following beneficial effects: the empty-cabin determination model performs empty-cabin determination on in-cabin images of the delivery robot's storage compartment, and the article detection model performs article detection once the compartment is determined not to be empty, yielding the names of the articles the robot delivers; statistical analysis can then be performed on the detection results to learn guests' item demands, supply goods accurately, and improve service quality.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of the present disclosure, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for detecting articles in a robot cabin according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another method for detecting articles in a robot cabin according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an apparatus for detecting articles in a robot cabin according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details such as particular system structures and techniques are set forth in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to those skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
A method and an apparatus for detecting articles in a robot cabin according to embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an application scenario of an embodiment of the present disclosure. The application scenario may include terminal devices 101, 102, and 103, a server 104, and a network 105.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices that have a display screen and support communication with the server 104, including but not limited to smartphones, robots, laptop computers, desktop computers, and the like (for example, terminal device 102 may be a robot); when they are software, they may be installed in the electronic devices above. The terminal devices 101, 102, and 103 may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module, which is not limited by the embodiments of the present disclosure. Further, various applications may be installed on the terminal devices 101, 102, and 103, such as data processing applications, instant messaging tools, social platform software, search applications, and shopping applications.
The server 104 may be a server providing various services, for example, a backend server receiving a request sent by a terminal device establishing a communication connection with the server, and the backend server may receive and analyze the request sent by the terminal device and generate a processing result. The server 104 may be a server, may also be a server cluster composed of a plurality of servers, or may also be a cloud computing service center, which is not limited in this disclosure.
The server 104 may be hardware or software. When the server 104 is hardware, it may be various electronic devices that provide various services to the terminal devices 101, 102, and 103. When the server 104 is software, it may be multiple software or software modules providing various services for the terminal devices 101, 102, and 103, or may be a single software or software module providing various services for the terminal devices 101, 102, and 103, which is not limited by the embodiment of the present disclosure.
The network 105 may be a wired network using coaxial cable, twisted pair, or optical fiber, or a wireless network that interconnects communication devices without wiring, for example Bluetooth, Near Field Communication (NFC), infrared, and the like, which is not limited by the embodiments of the present disclosure.
The target user can establish a communication connection with the server 104 via the network 105 through the terminal devices 101, 102, and 103 to receive or transmit information or the like. It should be noted that the specific types, numbers and combinations of the terminal devices 101, 102 and 103, the server 104 and the network 105 may be adjusted according to the actual requirements of the application scenario, and the embodiment of the present disclosure does not limit this.
Fig. 2 is a schematic flowchart of a method for detecting articles in a robot cabin according to an embodiment of the present disclosure. The method of fig. 2 may be performed by a terminal device or the server of fig. 1. As shown in fig. 2, the method includes the following steps:
step S201, an in-cabin image of a storage cabin of the delivery robot is acquired, and the in-cabin image is shot by a camera arranged in the storage cabin.
Specifically, the camera may be arranged at the center of the ceiling inside the storage compartment, but is not limited to this position.
In step S202, the in-cabin image is input into a preset empty-cabin determination model to obtain a determination result.
Specifically, the determination result may indicate that the storage compartment is empty, that it is not empty, or that the image is abnormal.
In step S203, in response to the determination result indicating that the storage compartment is not empty, the in-cabin image is input into a preset article detection model to obtain a detection result.
Specifically, the detection result may comprise, for each article in the storage compartment, a bounding box and the probability of the article name corresponding to that bounding box.
According to the technical solution of the embodiments of the disclosure, a camera is installed in the storage compartment of the robot; empty-cabin determination is performed on the in-cabin image collected by the camera to obtain a determination result; and, when the determination result indicates that the compartment is not empty, article detection is performed on the in-cabin image. Information about the articles in the storage compartment can thus be obtained and further aggregated, providing data support for the hotel operator and improving hotel service quality.
The empty-cabin determination model is a computer-vision classification model that judges, from the features of the current image, whether an object is present in the robot's storage compartment. In the embodiments of the disclosure, the empty-cabin determination model may be obtained by training a first deep learning model on a sample data set. The first deep learning model may be, but is not limited to, a neural network model. The sample data set used to train the empty-cabin determination model may be a first sample in-cabin image data set.
When training the empty-cabin determination model, the first sample in-cabin image data set can first be constructed by collecting images with the camera inside the storage compartment; the images in the data set are then annotated according to their sample characteristics into three categories: object present, no object, and abnormal image. Finally, model training is performed on the annotated first sample in-cabin image data set in a deep learning framework to obtain the empty-cabin determination model.
Specifically, when training the empty-cabin determination model, the first sample in-cabin image data set is obtained, and the first deep learning model is trained on the first sample in-cabin images and their corresponding empty-cabin identification labels to obtain the trained empty-cabin determination model. The first sample in-cabin image data set comprises the first sample in-cabin images and their corresponding empty-cabin identification labels, where an empty-cabin identification label is a training label marking the storage compartment in a first sample in-cabin image as one of three state categories: empty, not empty, or empty-state unrecognizable.
The empty-cabin determination model can be understood as an empty-cabin recognition model whose task is to classify images according to whether the compartment they show is empty. In the model application stage, the empty-cabin determination model classifies the current in-cabin image to judge whether the storage compartment in the image is empty.
Training the first deep learning model on the sample in-cabin images yields the empty-cabin determination model, so that empty-cabin recognition can be performed on in-cabin images using computer vision, improving the automation and accuracy of empty-cabin determination.
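For concreteness, the following is a minimal training sketch of such a three-class classifier, assuming a PyTorch/torchvision environment; the dataset directory layout, backbone choice, and hyperparameters are illustrative assumptions rather than details given in this disclosure:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Three state categories described above: empty, not empty, abnormal image.
    NUM_CLASSES = 3

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Assumed ImageFolder layout: root/empty, root/not_empty, root/abnormal.
    dataset = datasets.ImageFolder("first_sample_in_cabin_images/", transform=transform)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    model = models.mobilenet_v2(num_classes=NUM_CLASSES)  # lightweight backbone
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    for epoch in range(10):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()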
The article detection model is a computer-vision detection model that can detect the positions and the names of the objects present in an image.
In the embodiments of the present disclosure, the article detection model may be obtained by training a second deep learning model on a sample data set. The second deep learning model may be, but is not limited to, a neural network model. The sample data set used to train the article detection model may be a second sample in-cabin image data set.
When training the article detection model, the camera inside the storage compartment first collects sample images to construct the second sample in-cabin image data set; the images in the data set are then annotated, the annotation method being to mark the positions and names of the objects in each image with labels; finally, the second sample in-cabin image data set is used for training in a deep learning framework to obtain the article detection model.
Specifically, when training the article detection model, the second sample in-cabin image data set is obtained, and the second deep learning model is trained on the second sample in-cabin images and their corresponding article labels to obtain the trained article detection model. The second sample in-cabin image data set comprises the sample in-cabin images and their corresponding article labels, where an article label is a training label marking the position and name of an article in a second sample in-cabin image.
Training the second deep learning model on the sample in-cabin images yields the article detection model, so that article detection can be performed on in-cabin images using computer vision, improving the automation and accuracy of article detection.
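The disclosure does not specify how the position-and-name labels are stored; as one hedged illustration, the widely used YOLO text format records each article as a class index plus a normalized center/size box, and the class ids, names, and file name below are assumptions:

    # One common way to store position-and-name labels (YOLO text format).
    labels = [
        # (class_id, x_center, y_center, width, height), normalized to [0, 1]
        (0, 0.52, 0.47, 0.20, 0.31),  # e.g. class 0 = "bottled water"
        (3, 0.25, 0.60, 0.15, 0.18),  # e.g. class 3 = "takeaway meal"
    ]
    with open("sample_0001.txt", "w") as f:
        for cls, xc, yc, w, h in labels:
            f.write(f"{cls} {xc:.4f} {yc:.4f} {w:.4f} {h:.4f}\n")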
In the embodiments of the disclosure, the first deep learning model may employ the MobileNet framework, although it is not limited thereto. MobileNet is a lightweight deep learning model suitable for embedded and mobile devices. The basic unit of MobileNet is the depthwise separable convolution, which consists mainly of two operations: a depthwise convolution and a pointwise convolution. In step S202, the depthwise separable convolution units of the empty-cabin determination model may be used to perform convolution operations on the in-cabin image.
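As a sketch of that basic unit, the PyTorch module below chains a depthwise convolution (one 3x3 filter per input channel) with a 1x1 pointwise convolution; the channel sizes and the BatchNorm/ReLU6 placement follow common MobileNet practice and are assumptions here:

    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        def __init__(self, in_ch, out_ch, stride=1):
            super().__init__()
            # Depthwise: one 3x3 filter per input channel (groups=in_ch).
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                       stride=stride, padding=1, groups=in_ch)
            # Pointwise: 1x1 convolution mixes information across channels.
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
            self.bn = nn.BatchNorm2d(out_ch)
            self.act = nn.ReLU6(inplace=True)

        def forward(self, x):
            return self.act(self.bn(self.pointwise(self.depthwise(x))))

    block = DepthwiseSeparableConv(32, 64)
    out = block(torch.randn(1, 32, 112, 112))  # -> shape (1, 64, 112, 112)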
In the embodiments of the disclosure, the second deep learning model may be based on the YOLO framework, although it is not limited thereto. The network width and depth of the YOLO framework can be reduced to obtain a lightweight network model. YOLO divides the input image into a grid and applies image classification and localization to each grid cell, obtaining predicted bounding boxes of objects and the class probability corresponding to each bounding box. In step S203, the second deep learning model may be used to divide the in-cabin image into grid cells and apply classification and localization to each cell, obtaining as the detection result the predicted bounding boxes and the probabilities of the article names corresponding to them.
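The grid-based decoding can be sketched as follows; the grid size S, boxes per cell B, class list, and output-tensor layout (B*5 box fields followed by C shared class probabilities, as in YOLOv1) are assumptions for illustration, not the specific network of this disclosure:

    import numpy as np

    S, B, C = 13, 2, 20                            # grid size, boxes per cell, classes
    CLASS_NAMES = [f"item_{i}" for i in range(C)]  # placeholder article names

    def decode(pred, conf_thresh=0.5):
        """pred: array of shape (S, S, B*5 + C) for one in-cabin image."""
        detections = []
        for row in range(S):
            for col in range(S):
                cell = pred[row, col]
                class_probs = cell[B * 5:]
                for b in range(B):
                    x, y, w, h, obj = cell[b * 5:(b + 1) * 5]
                    scores = obj * class_probs          # per-class confidence
                    cls = int(np.argmax(scores))
                    if scores[cls] >= conf_thresh:
                        # (x, y) are offsets within the cell; map to image scale.
                        cx, cy = (col + x) / S, (row + y) / S
                        detections.append((CLASS_NAMES[cls], float(scores[cls]),
                                           (cx, cy, w, h)))
        return detections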
The method for detecting articles in the robot cabin can be executed by the processor of an embedded device. Because the resources of an embedded device are limited, both the empty-cabin determination model and the article detection model in the technical solution of the embodiments of the disclosure use lightweight network models, which saves resources and facilitates model deployment.
During training of the empty-cabin determination model, in order to better distinguish sample in-cabin images of the storage compartment in the empty state from those in the non-empty state, the spatial-feature discrimination of the sample in-cabin images can be enhanced through image enhancement operations. Specifically, preprocessing operations such as data augmentation may be performed on the first sample in-cabin image data set before the model is trained.
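A minimal augmentation pipeline of this kind might look as follows, assuming torchvision; the particular operations (flips, lighting jitter, small rotations) are assumptions, since the disclosure only states that enhancement is used to sharpen the spatial-feature contrast between empty and non-empty samples:

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),  # lighting variation
        transforms.RandomRotation(degrees=10),                 # slight camera tilt
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])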
After step S203, when the determination results of N consecutive in-cabin images indicate that the storage compartment is not empty, the detection results indicate that the names of the articles in the storage compartment were successfully detected, and the detected article names are consistent across those images, the determination results and detection results are sent to an article analysis system for analysis, where N is a natural number and N ≥ 2.
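This N-consecutive-frames rule can be sketched with a small ring buffer; the two model hooks and the upload call below are placeholder names, not interfaces defined by this disclosure:

    from collections import deque

    N = 3                                     # N is a natural number, N >= 2

    def judge_empty(image): ...               # placeholder: empty-cabin determination model
    def detect_articles(image): ...           # placeholder: detector -> [(name, prob, box)]
    def send_to_analysis_system(names): ...   # placeholder: upload to the analysis system

    recent = deque(maxlen=N)                  # article names from the last N frames

    def on_frame(image):
        if judge_empty(image) != "not_empty": # compartment empty or image abnormal
            recent.clear()
            return
        names = tuple(sorted(n for n, _, _ in detect_articles(image)))
        recent.append(names)
        # Forward only when N consecutive non-empty frames agree on the names.
        if len(recent) == N and len(set(recent)) == 1 and names:
            send_to_analysis_system(names)
            recent.clear()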
As shown in fig. 3, another method for detecting articles in a robot cabin according to an embodiment of the present disclosure includes the steps below; a schematic code sketch of this flow follows the step list.
step S321 starts.
Step S322, inputting the internal image of the storage compartment collected by the image collecting device 311 into the in-compartment emptying model to obtain an emptying result.
And step S323, judging whether an object exists in the storage compartment according to the air judgment result. If so, step S324 is performed, otherwise, step S321 is performed.
And step S324, inputting the internal image of the storage compartment into the article detection model to obtain a detection result.
Step S325, the detection result is sent to the data analysis system.
Step S326 ends.
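Putting steps S321 to S326 together, the control flow can be sketched as follows; the camera, model, and analysis-client interfaces are assumed names for illustration:

    def run_detection_cycle(camera, empty_model, article_model, analysis_client):
        """One dispatch cycle of the in-cabin detection flow (steps S321-S326)."""
        while True:                                     # S321: start
            image = camera.capture()                    # image from device 311
            result = empty_model.predict(image)         # S322: empty-cabin determination
            if result != "not_empty":                   # S323: object present?
                continue                                # no: back to S321
            detection = article_model.predict(image)    # S324: article detection
            analysis_client.send(detection)             # S325: to data analysis system
            break                                       # S326: end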
All the optional technical solutions above may be combined arbitrarily to form optional embodiments of the present disclosure, which are not described again here.
According to the method for detecting articles in a robot cabin of the embodiments of the disclosure, the empty-cabin determination model performs empty-cabin determination on the in-cabin images of the delivery robot's storage compartment, and the article detection model performs article detection once the compartment is determined not to be empty, so that the names of the articles delivered by the robot are obtained; statistical analysis can then be performed on the detection results to learn guests' item demands, supply goods accurately, and improve service quality.
The following are apparatus embodiments of the disclosure, which can be used to execute the method embodiments of the disclosure. The description of the apparatus for detecting articles in a robot cabin below and the description of the method above correspond to each other. For details not disclosed in the apparatus embodiments, refer to the method embodiments of the disclosure.
Fig. 4 is a schematic diagram of an apparatus for detecting articles in a robot cabin according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus includes:
the image acquisition module 401 may be configured to acquire an in-cabin image of the object storage cabin of the object delivery robot, where the in-cabin image is captured by a camera disposed inside the object storage cabin.
The judging module 402 may be configured to input the cabin image into a preset cabin interior judging model to obtain a judging result.
The detection module 403 may be configured to, in response to the indication that the storage compartment is not an empty compartment as the result of the emptying, input the images in the storage compartment into a preset article detection model to obtain a detection result.
According to the technical solution of the embodiments of the disclosure, a camera is installed in the storage compartment of the robot; empty-cabin determination is performed on the in-cabin images collected by the camera to obtain a determination result; and article detection is performed on the in-cabin images when the determination result indicates that the compartment is not empty. Information about the articles in the storage compartment can thus be obtained and further aggregated, providing supply data support for the hotel operator and improving hotel service quality.
In an embodiment of the present disclosure, the apparatus for detecting articles in a robot cabin may further include a first training module configured to: acquire the first sample in-cabin image data set, wherein the first sample in-cabin image data set comprises the first sample in-cabin images and their corresponding empty-cabin identification labels, an empty-cabin identification label being a training label marking the storage compartment in a first sample in-cabin image as one of three state categories: empty, not empty, or empty-state unrecognizable; and train the first deep learning model on the first sample in-cabin images and their corresponding empty-cabin identification labels to obtain the trained empty-cabin determination model.
In an embodiment of the present disclosure, the apparatus may further include a data augmentation module configured to perform data augmentation on the first sample in-cabin image data set.
In an embodiment of the present disclosure, the apparatus may further include a second training module configured to: acquire the second sample in-cabin image data set, wherein the second sample in-cabin image data set comprises the sample in-cabin images and their corresponding article labels, an article label being a training label marking the positions and names of the articles in a second sample in-cabin image; and train the second deep learning model on the second sample in-cabin images and their corresponding article labels to obtain the trained article detection model.
In an embodiment of the present disclosure, the apparatus may further include an analysis module configured to send the determination results and detection results to an article analysis system for analysis when the determination results of N consecutive in-cabin images indicate that the storage compartment is not empty, the detection results indicate that the names of the articles in the storage compartment were successfully detected, and the detected article names are consistent.
In the embodiments of the disclosure, the determination module may further be configured to perform convolution operations on the in-cabin image using the depthwise separable convolution units of the empty-cabin determination model; the detection module may further be configured to divide the in-cabin image into grid cells using the second deep learning model and apply image classification and localization to each cell, obtaining as the detection result the predicted bounding boxes of the articles and the probabilities of the article names corresponding to the bounding boxes.
For details not disclosed in the apparatus embodiments of the disclosure, please refer to the embodiments of the method for detecting articles in a robot cabin of the disclosure.
According to the apparatus for detecting articles in a robot cabin of the embodiments of the disclosure, the empty-cabin determination model performs empty-cabin determination on the in-cabin images of the delivery robot's storage compartment, and the article detection model performs article detection once the compartment is determined not to be empty, so that the names of the articles delivered by the robot are obtained; statistical analysis can then be performed on the detection results to learn guests' item demands, supply goods accurately, and improve service quality.
An embodiment of the disclosure also provides a system for detecting articles in a robot cabin, comprising a camera and an article detection device. The camera is located inside the storage compartment of the delivery robot and is used to capture images inside the compartment. The article detection device comprises the apparatus for detecting articles in a robot cabin of the above embodiment, and may be configured to acquire the in-cabin image of the storage compartment captured by the camera, input the in-cabin image into a preset empty-cabin determination model to obtain a determination result, and, in response to the determination result indicating that the storage compartment is not empty, input the in-cabin image into a preset article detection model to obtain a detection result.
In the embodiments of the disclosure, the camera may be installed at the center of the ceiling inside the storage compartment of the delivery robot, with its viewing direction perpendicular to the horizontal plane. When the robot receives a dispatch command, the camera captures in-cabin images and transmits them to the robot cabin article detection system by network communication.
When the robot cabin article detection system receives an in-cabin image from the camera, the image is first input into the empty-cabin determination model; if an object is present, the image is further input into the article detection model for object detection. When N consecutive images are identified by the empty-cabin determination model as containing an object and the detection results of the article detection model are consistent, the detection information is transmitted to the article analysis system for processing, the camera is turned off, and the robot cabin article detection system suspends operation, where N is a natural number and N ≥ 2.
After receiving a detection result from the robot cabin article detection system, the article analysis system processes the information. Through the article analysis system, a user can view data such as the robot's delivery statistics for the current day, the delivery statistics for the current month, and the delivery counts of the top-k items for each day and each month, so as to make sound decisions for improving hotel services, where k is a natural number.
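The top-k statistic mentioned here reduces to counting detected article names over a period; below is a minimal sketch using Python's Counter, with invented sample data rather than data from the disclosure:

    from collections import Counter

    # Hypothetical article names detected in one day; not data from the disclosure.
    day_log = ["bottled water", "takeaway meal", "bottled water",
               "towel", "bottled water", "takeaway meal"]

    k = 2
    print(Counter(day_log).most_common(k))
    # [('bottled water', 3), ('takeaway meal', 2)]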
According to the system for detecting articles in a robot cabin of the embodiments of the disclosure, the empty-cabin determination model performs empty-cabin determination on the in-cabin images of the delivery robot's storage compartment, and the article detection model performs article detection once the compartment is determined not to be empty, so that the names of the articles delivered by the robot are obtained; statistical analysis can then be performed on the detection results to learn guests' item demands, supply goods accurately, and improve service quality.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present disclosure.
Fig. 5 is a schematic diagram of an electronic device 500 provided by an embodiment of the disclosure. As shown in fig. 5, the electronic device 500 of this embodiment includes: a processor 501, a memory 502, and a computer program 503 stored in the memory 502 and executable on the processor 501. The processor 501 implements the steps in the method embodiments above when executing the computer program 503; alternatively, the processor 501 implements the functions of the modules/units in the apparatus embodiments above when executing the computer program 503.
Illustratively, the computer program 503 may be partitioned into one or more modules/units, which are stored in the memory 502 and executed by the processor 501 to accomplish the present disclosure. One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 503 in the electronic device 500.
The electronic device 500 may be a desktop computer, a notebook, a palm computer, a cloud server, or other electronic devices. The electronic device 500 may include, but is not limited to, a processor 501 and a memory 502. Those skilled in the art will appreciate that fig. 5 is merely an example of an electronic device 500 and does not constitute a limitation of electronic device 500 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., an electronic device may also include input-output devices, network access devices, buses, etc.
The Processor 501 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 502 may be an internal storage unit of the electronic device 500, such as a hard disk or a memory of the electronic device 500. The memory 502 may also be an external storage device of the electronic device 500, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the electronic device 500. Further, the memory 502 may also include both internal storage units and external storage devices of the electronic device 500. The memory 502 is used for storing the computer program and other programs and data required by the electronic device. The memory 502 may also be used to temporarily store data that has been output or is to be output.
It should be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is only used for illustration, and in practical applications, the above function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the device is divided into different functional units or modules, so as to perform all or part of the above described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the apparatus/electronic device embodiments described above are merely illustrative; for instance, the division into modules or units is merely a logical function division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, the present disclosure may implement all or part of the flow of the methods in the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the method embodiments above. The computer program may comprise computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals, in accordance with legislation and patent practice.
The above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present disclosure, and are intended to be included within the scope of the present disclosure.

Claims (10)

1. A method for detecting articles in a robot cabin, the method comprising:
acquiring an in-cabin image of the storage compartment of a delivery robot, wherein the in-cabin image is captured by a camera arranged inside the storage compartment;
inputting the in-cabin image into a preset empty-cabin determination model to obtain a determination result;
and in response to the determination result indicating that the storage compartment is not empty, inputting the in-cabin image into a preset article detection model to obtain a detection result.
2. The method of claim 1, wherein the training of the empty-cabin determination model comprises:
acquiring a first sample in-cabin image data set, wherein the first sample in-cabin image data set comprises first sample in-cabin images and their corresponding empty-cabin identification labels, the empty-cabin identification labels being training labels marking the storage compartment in each first sample in-cabin image as one of three state categories: empty, not empty, or empty-state unrecognizable;
and training a first deep learning model on the first sample in-cabin images and their corresponding empty-cabin identification labels to obtain the trained empty-cabin determination model.
3. The method of claim 2, wherein the method further comprises:
performing data augmentation on the first sample in-cabin image data set before the first deep learning model is trained.
4. The method of claim 1, wherein the training of the article detection model comprises:
acquiring a second sample in-cabin image data set, wherein the second sample in-cabin image data set comprises sample in-cabin images and their corresponding article labels, the article labels being training labels marking the positions and names of the articles in the second sample in-cabin images;
and training a second deep learning model on the second sample in-cabin images and their corresponding article labels to obtain the trained article detection model.
5. The method of claim 1, wherein after inputting the in-cabin image into the preset article detection model, the method further comprises:
when the determination results of N consecutive in-cabin images indicate that the storage compartment is not empty, the detection results indicate that the names of the articles in the storage compartment were successfully detected, and the detected article names are consistent, sending the determination results and the detection results to an article analysis system for analysis, wherein N is a natural number and N ≥ 2.
6. The method of claim 4, wherein after inputting the in-cabin image into the preset empty-cabin determination model, the method further comprises: performing convolution operations on the in-cabin image using the depthwise separable convolution units of the empty-cabin determination model;
and after inputting the in-cabin image into the preset article detection model, the method further comprises: dividing the in-cabin image into grid cells using the second deep learning model, and applying image classification and localization to each cell to obtain, as the detection result, predicted bounding boxes of the articles and the probabilities of the article names corresponding to the bounding boxes.
7. An apparatus for detecting articles in a robot cabin, comprising:
an image acquisition module, configured to acquire an in-cabin image of the storage compartment of a delivery robot, wherein the in-cabin image is captured by a camera arranged inside the storage compartment;
a determination module, configured to input the in-cabin image into a preset empty-cabin determination model to obtain a determination result;
and a detection module, configured to, in response to the determination result indicating that the storage compartment is not empty, input the in-cabin image into a preset article detection model to obtain a detection result.
8. A system for detecting articles in a robot cabin, comprising:
a camera, located inside the storage compartment of a delivery robot and configured to capture images inside the storage compartment;
and an article detection device, configured to acquire the in-cabin image of the storage compartment captured by the camera, input the in-cabin image into a preset empty-cabin determination model to obtain a determination result, and, in response to the determination result indicating that the storage compartment is not empty, input the in-cabin image into a preset article detection model to obtain a detection result.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN202210828487.7A 2022-07-13 2022-07-13 Method, device and system for detecting articles in robot cabin Withdrawn CN115091461A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210828487.7A CN115091461A (en) 2022-07-13 2022-07-13 Method, device and system for detecting articles in robot cabin

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210828487.7A CN115091461A (en) 2022-07-13 2022-07-13 Method, device and system for detecting articles in robot cabin

Publications (1)

Publication Number Publication Date
CN115091461A true CN115091461A (en) 2022-09-23

Family

ID=83297584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210828487.7A Withdrawn CN115091461A (en) 2022-07-13 2022-07-13 Method, device and system for detecting articles in robot cabin

Country Status (1)

Country Link
CN (1) CN115091461A (en)

Similar Documents

Publication Publication Date Title
EP3637310A1 (en) Method and apparatus for generating vehicle damage information
US20160350708A1 (en) System and method for inventory management
CN112100425B (en) Label labeling method and device based on artificial intelligence, electronic equipment and medium
CN112613569B (en) Image recognition method, training method and device for image classification model
WO2020107951A1 (en) Image-based product checkout method and apparatus, medium, and electronic device
EP4116906A1 (en) Method for warehouse storage-location monitoring, computer device, and non-volatile storage medium
CN108491825A (en) information generating method and device
CN110516628A (en) Shelf vacant locations merchandise news acquisition methods, system, equipment and storage medium
CN113592390A (en) Warehousing digital twin method and system based on multi-sensor fusion
CN110650170A (en) Method and device for pushing information
CN115372877A (en) Unmanned aerial vehicle-based substation lightning arrester leakage current meter inspection method
JPWO2020183837A1 (en) Counting system, counting device, machine learning device, counting method, component placement method, and program
CN112861895A (en) Abnormal article detection method and device
WO2021233058A1 (en) Method for monitoring articles on shop shelf, computer and system
CN110765825A (en) Method and system for acquiring article placement state
US20200065631A1 (en) Produce Assessment System
CN115091461A (en) Method, device and system for detecting articles in robot cabin
CN110751055A (en) Intelligent manufacturing system
CN111753614A (en) Commodity shelf monitoring method and device
CN111915235B (en) Method, device, server, client and medium for identifying abnormal information
CN113988904A (en) Shop data acquisition method, device, equipment and storage medium
CN110443191A (en) The method and apparatus of article for identification
CN110956761B (en) Object processing method and system, computer system and computer readable medium
CN111325049A (en) Commodity identification method and device, electronic equipment and readable medium
CN114952862A (en) Robot cabin door closing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220923