CN114693588A - Method and device for detecting state of container

Method and device for detecting state of container

Info

Publication number
CN114693588A
CN114693588A
Authority
CN
China
Prior art keywords
container
confidence
confidence coefficient
state
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011589677.5A
Other languages
Chinese (zh)
Inventor
戴海能
周维
王进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rainbow Software Co ltd
Original Assignee
Rainbow Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rainbow Software Co ltd filed Critical Rainbow Software Co ltd
Priority to CN202011589677.5A priority Critical patent/CN114693588A/en
Priority to DE112021006736.2T priority patent/DE112021006736T5/en
Priority to PCT/CN2021/141762 priority patent/WO2022143562A1/en
Publication of CN114693588A publication Critical patent/CN114693588A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for detecting the state of a container. The method comprises the following steps: acquiring a container image; processing the container image with a first network model to obtain a container state confidence; and determining the container state based on the container state confidence. The invention solves the technical problem that container state detection methods in the related art cannot detect whether the container is loaded with cargo.

Description

Method and device for detecting state of container
Technical Field
The invention relates to the field of container state detection, in particular to a container state detection method and device.
Background
At present, muck trucks may be overloaded and may scatter muck during transportation. If these conditions are not discovered in time, they not only affect the cleanliness of the road but also create many potential safety hazards.
To solve this problem, the related art installs a first radar and a second radar to determine a first time difference and a second time difference respectively, and a microcontroller determines the sealing state of a first side top cover and a second side top cover from the two time differences, so that it can be identified whether the top cover of the muck truck is effectively closed, reducing the environmental problems caused by an improperly closed top cover. However, this method can only identify whether the top cover of the muck truck is effectively closed; it cannot identify whether the container of the muck truck is loaded with cargo. Moreover, the method uses lidar, so its cost is high.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
Embodiments of the invention provide a method and a device for detecting the state of a container, to at least solve the technical problem that container state detection methods in the related art cannot detect whether the container is loaded with cargo.
According to an aspect of an embodiment of the present invention, there is provided a method for detecting the state of a container, including: acquiring a container image; processing the container image with a first network model to obtain a container state confidence; and determining the container state based on the container state confidence, wherein the container state comprises one of the following: the top cover of the container is closed; the container is not loaded with cargo; the amount of cargo loaded in the container meets a preset amount; the camera is abnormal; and the amount of cargo loaded in the container does not meet the preset amount.
Optionally, processing the container image with the first network model to obtain the container state confidence includes: cropping the container image to obtain a target area image; preprocessing the target area image to obtain a processed image; and inputting the processed image into the first network model to obtain the container state confidence.
Optionally, the container state confidence comprises: a first confidence that the top cover of the container is closed, a second confidence that the container is not loaded with cargo, a third confidence that the amount of cargo loaded in the container meets the preset amount, a fourth confidence that the camera is abnormal, and a fifth confidence that the amount of cargo loaded in the container does not meet the preset amount.
Optionally, processing the container image with the first network model to obtain the container state confidence further includes: extracting multi-frame container state confidences corresponding to multiple frames of container images; and inputting the multi-frame container state confidences into a second network model to obtain the container state confidence.
Optionally, determining the container state based on the container state confidence comprises: taking the container state corresponding to the highest of the first, second, third, fourth and fifth confidences as the container state; or taking the container state whose confidence among the first, second, third, fourth and fifth confidences is greater than a preset threshold as the container state; or taking the container state whose confidence is both the highest of the first, second, third, fourth and fifth confidences and greater than the preset threshold as the container state.
Optionally, determining the container state based on the container state confidence comprises: determining the container state based on the container state confidence and a preset rule, where the preset rule represents the priority of the container state confidences.
Optionally, determining the container state based on the container state confidence comprises: judging whether the fourth confidence is greater than a fifth preset value; if the fourth confidence is greater than the fifth preset value, determining that the container state is that the camera is abnormal; if the fourth confidence is less than or equal to the fifth preset value, judging whether the first confidence is greater than a second preset value; if the first confidence is greater than the second preset value, determining that the container state is that the top cover of the container is closed; if the first confidence is less than or equal to the second preset value, judging whether the second confidence is greater than a third preset value; if the second confidence is greater than the third preset value, determining that the container state is that the container is not loaded with cargo; if the second confidence is less than or equal to the third preset value, judging whether the third confidence is greater than a fourth preset value; if the third confidence is greater than the fourth preset value, determining that the container state is that the amount of cargo loaded in the container meets the preset amount; if the third confidence is less than or equal to the fourth preset value, judging whether the fifth confidence is greater than a sixth preset value; and if the fifth confidence is greater than the sixth preset value, determining that the container state is that the amount of cargo loaded in the container does not meet the preset amount.
Optionally, the pre-processing comprises at least one of: scaling and normalization.
Optionally, the method further comprises: acquiring an original container image; processing the original container image to obtain multiple sets of training samples; and training an initial model with the multiple sets of training samples to obtain the first network model.
Optionally, processing the original container image to obtain multiple sets of training samples includes: perturbing the original container image to obtain multiple sets of original container images; cropping the multiple sets of original container images to obtain multiple sets of target area images; and preprocessing the multiple sets of target area images to obtain the multiple sets of training samples.
Optionally, the perturbation processing comprises at least one of: rotation, translation, scaling, noise addition, blurring, illumination variation, channel variation.
Optionally, acquiring the container image comprises: detecting whether the current light intensity is greater than a first preset value; if the current light intensity is greater than the first preset value, acquiring the container image with a first camera; and if the current light intensity is less than or equal to the first preset value, acquiring the container image with a second camera.
Optionally, before training the initial model with the multiple sets of training samples to obtain the first network model, the method further includes: adding a normalization layer as the initial layer of the initial model, where the normalization layer normalizes the multiple sets of training samples.
According to another aspect of the embodiments of the present invention, there is also provided a device for detecting the container state, including: an acquisition module for acquiring a container image; a processing module for processing the container image with the first network model to obtain the container state confidence; and a determination module for determining the container state based on the container state confidence, wherein the container state comprises one of the following: the top cover of the container is closed; the container is not loaded with cargo; the amount of cargo loaded in the container meets a preset amount; the camera is abnormal; and the amount of cargo loaded in the container does not meet the preset amount.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium. The computer-readable storage medium includes a stored program, and when the program runs, the device where the computer-readable storage medium is located is controlled to execute the above method for detecting the container state.
According to another aspect of the embodiments of the present invention, there is also provided a processor configured to run a program, where the program, when running, executes the above method for detecting the container state.
In the embodiments of the invention, after the container image is acquired, the container image is processed with the first network model to obtain the container state confidence, and the container state is then determined based on the container state confidence. Thus, when the container state is detected, it is possible to detect not only whether the top cover of the container is effectively closed but also whether the container is loaded with cargo, and the detection cost is reduced because only a camera is used to acquire the container image. For example, after the container image is acquired, the first network model can analyze the container image to obtain the confidence that the top cover of the container is closed, the confidence that the container is not loaded with cargo, the confidence that the amount of cargo loaded in the container meets the preset amount, the confidence that the camera is abnormal, and the confidence that the amount of cargo loaded in the container does not meet the preset amount; the container state can then be determined from these confidences. In addition to detecting whether the top cover is effectively closed and whether cargo is loaded, a camera abnormality can be detected when the acquired container image does not show the container state, so that the user can deal with camera problems in time. Meanwhile, using only a camera for detection is cheaper than the radar-based detection of the cited reference, which solves the technical problem that container state detection methods in the related art cannot detect whether the container is loaded with cargo.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flow chart of a method of detecting a condition of a cargo container according to an embodiment of the invention;
fig. 2 is a schematic view of a cargo box with its top cover closed according to an embodiment of the invention;
fig. 3 is a schematic view of a container with no cargo loaded therein according to an embodiment of the invention;
fig. 4 is a schematic view of a state of a cargo box loaded with cargo in an amount satisfying a preset amount in the cargo box according to an embodiment of the present invention;
FIG. 5 is a schematic view of an abnormal cargo box state of a camera according to an embodiment of the invention;
fig. 6 is a schematic illustration of an alternative method of detecting the condition of a cargo container according to an embodiment of the invention;
FIG. 7 is a schematic view of an alternative method of detecting the condition of a cargo box according to an embodiment of the invention;
fig. 8 is a schematic view of a cargo box condition detection device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for detecting a container state. It should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from the one described herein.
Fig. 1 is a flowchart of a method for detecting a container state according to an embodiment of the invention. As shown in fig. 1, the method comprises the following steps:
Step S102, acquiring a container image;
Step S104, processing the container image with the first network model to obtain a container state confidence;
Step S106, determining the container state based on the container state confidence, wherein the container state comprises one of the following: the top cover of the container is closed; the container is not loaded with cargo; the amount of cargo loaded in the container meets a preset amount; the camera is abnormal; and the amount of cargo loaded in the container does not meet the preset amount.
According to the embodiments of the invention, after the container image is acquired, the container image is processed with the first network model to obtain the container state confidence, and the container state is then determined based on the container state confidence. Thus, when the container state is detected, it is possible to detect whether the top cover of the container is effectively closed, whether the container is loaded with cargo, and how much cargo is loaded in the container; moreover, when the acquired container image does not show the container state, a camera abnormality can be detected, so the container state can be detected effectively. In the invention, detection is realized with only a camera, which reduces cost compared with the radar-based detection scheme of the prior art and solves the technical problem that container state detection methods in the related art cannot detect whether the container is loaded with cargo.
The following is a detailed description of the above embodiments.
Step S102, container images are obtained.
The container image in the above step may be a container image of a muck truck, a container image of a truck, or a container image of a container placed in a warehouse, which is not limited herein. It should be noted that the container image may be acquired by an ordinary camera or may be an infrared container image acquired by an infrared camera. In poorly lit environments such as night, the images acquired by an ordinary camera are dim and blurry, which affects the accuracy of the subsequent container state detection; therefore, an infrared camera can be used to capture images in poorly lit environments, and acquiring high-quality images at this early stage ensures that the subsequent container state detection is highly accurate.
In the above step, the obtained container image may be the current container image or a historically stored container image.
The container image in the above step can be acquired by a camera mounted on the container, or by a camera mounted on the vehicle body. It should be noted that the camera may acquire the container image from a fixed position, and the container image may show the state of the top cover of the container and the state of the cargo loaded in the container.
In an alternative embodiment, the current container image of the muck truck may be acquired by a camera mounted on the muck truck.
Step S104, processing the container image with the first network model to obtain the container state confidence.
The first network model in the above step may be a CNN (convolutional neural network); such a model can decompose and analyze image data in a way that mimics the visual cortex.
It should be noted that the first network model of the present application can directly output the confidences of all container states; that is, the present application can obtain the confidences of a plurality of container states from a container image with only one first network model, which increases the speed of determining the container state.
The container state in the above step may be that the top cover of the container is closed, that the container is not loaded with cargo, that the amount of cargo loaded in the container meets the preset amount, that the camera is abnormal, or that the amount of cargo loaded in the container does not meet the preset amount. The container not being loaded with cargo may mean that the container is empty; the amount of cargo meeting the preset amount may mean that the container is full, overloaded, and so on, which can be determined according to the specific value of the preset amount. The confidence output of the first network model in the present application covers not only the open/closed state of the top cover of the container but also the cargo amount state in the container and the camera state.
Confidence, also called reliability or confidence level, is the probability that the estimated value and the population parameter agree within a certain allowable error range; this probability is called the confidence level. The container state confidence in the above step refers to the probability of each container state obtained by analyzing the container image with the first network model.
Optionally, processing the container image with the first network model to obtain the container state confidence includes: cropping the container image to obtain a target area image; preprocessing the target area image to obtain a processed image; and inputting the processed image into the first network model to obtain the container state confidence.
The target region in the above step may be an ROI (Region of Interest) in the container image; considering that the scene captured by the camera barely changes, the container state can be identified within a preset region of interest. For example, the ROI may be the area of the container image where the top cover of the container is located, and it can be determined according to the actual detection requirements.
Specifically, the image can be cropped with the ROI to obtain the target area image. The ROI designates the target area within the read image, so that subsequent classification and identification are performed only on the target area image where the container is located; this reduces detection processing time, improves detection accuracy, and greatly simplifies image processing. The rows and columns of interest can be specified with Range (a range function describing a contiguous sequence from a start index to an end index), or a rectangle can be framed with Rect (a rectangle function) by specifying the coordinates of its top-left corner together with its width and height.
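To make the ROI crop concrete, the following is a minimal sketch in Python with OpenCV, where an image is a NumPy array and the Rect-style (x, y, w, h) crop is a row/column slice; the ROI coordinates, the file name and the function name `crop_roi` are hypothetical, since the patent leaves the ROI to be chosen from the actual detection requirements.

```python
import cv2

# Hypothetical ROI covering the area where the container top cover appears;
# the actual coordinates depend on the camera mounting position.
ROI_X, ROI_Y, ROI_W, ROI_H = 120, 80, 640, 360

def crop_roi(image):
    """Crop the preset region of interest: rows ROI_Y..ROI_Y+ROI_H and
    columns ROI_X..ROI_X+ROI_W, i.e. the Range/Rect idea from the text."""
    return image[ROI_Y:ROI_Y + ROI_H, ROI_X:ROI_X + ROI_W]

image = cv2.imread("container.jpg")   # hypothetical input file
target_area = crop_roi(image)
```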
Optionally, the pre-processing comprises at least one of: scaling and normalization.
The scaling in the above step refers to adjusting the size of a digital image. Image scaling requires a trade-off between processing efficiency and the smoothness and sharpness of the result: as an image is enlarged, the pixels that make it up become more visible; conversely, reducing an image enhances its smoothness and sharpness. The fixed image size is selected based on the actual requirements of the first network model regarding recognition speed and recognition accuracy, and the same standard is applied to the different collected images during recognition, ensuring that the information output by the network is more accurate. Illustratively, the target area image may be scaled to a fixed size of 128×128.
The normalization in the above step means limiting the data to a certain range. Normalization does not change the image information, but it can eliminate the invalid initial influence that singular images exert on the network, and it can prevent gradient explosion and the like after the image enters the first network model. The normalization method may be max-min normalization, Z-score normalization, a function transformation, and so on; max-min normalization is a linear transformation of the original data, and Z-score normalization standardizes the data based on the mean and standard deviation of the original data.
Specifically, the container image may be cropped with the ROI to obtain the target area, and the target area is scaled to the fixed size 128×128; the scaled target area is then normalized by subtracting the mean value 128 from its pixel values and dividing by the variance 256, and the processed image is finally input into the convolutional neural network to obtain the container state confidence.
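Continuing the sketch above, the preprocessing step might look as follows in Python with OpenCV and NumPy; `first_network_model` is a placeholder for the trained CNN, not an API defined by the patent.

```python
import cv2
import numpy as np

def preprocess(target_area):
    """Scale the cropped ROI to 128x128, subtract the mean 128 from every
    pixel, and divide by 256, as described above."""
    resized = cv2.resize(target_area, (128, 128))
    return (resized.astype(np.float32) - 128.0) / 256.0

processed = preprocess(target_area)              # ROI crop from the sketch above
# confidences = first_network_model(processed)   # hypothetical CNN call
```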
Optionally, the container state confidence comprises: a first confidence that the top cover of the container is closed, a second confidence that the container is not loaded with cargo, a third confidence that the amount of cargo loaded in the container meets the preset amount, a fourth confidence that the camera is abnormal, and a fifth confidence that the amount of cargo loaded in the container does not meet the preset amount.
In an alternative embodiment, the current container image may be processed by the convolutional neural network, and the output of the first network model may be: the confidence that the top cover of the container is closed is 80%, the confidence that the container is not loaded with cargo is 5%, the confidence that the amount of cargo loaded in the container meets the preset amount is 5%, the confidence that the camera is abnormal is 5%, and the confidence that the amount of cargo loaded in the container does not meet the preset amount is 5%.
Step S106, determining the container state based on the container state confidence.
The container state comprises one of the following: the top cover of the container is closed; the container is not loaded with cargo; the amount of cargo loaded in the container meets the preset amount; the camera is abnormal; and the amount of cargo loaded in the container does not meet the preset amount.
Figure 2 is a schematic view of the container with the top cover closed; fig. 3 is a schematic view of the container with no cargo loaded therein; fig. 4 is a schematic view showing a state of the cargo box in which the cargo quantity of the cargo loaded in the cargo box satisfies a predetermined quantity; fig. 5 is a schematic view showing a cargo box state with an abnormal camera.
The top cover of the container being closed may mean that the container is closed normally after being loaded with cargo up to the preset amount, or while no cargo is loaded. The top cover of the container being open may mean that the top cover is open normally while the container is empty or the amount of cargo loaded does not meet the preset amount, or that the top cover cannot be closed normally because too much cargo is loaded in the container. The camera being abnormal may mean that the camera is blocked or has been deflected, so that the state in the container cannot be detected effectively.
Optionally, determining the container state based on the container state confidence comprises: taking the container state corresponding to the highest of the first, second, third, fourth and fifth confidences as the container state; or taking the container state whose confidence among the first, second, third, fourth and fifth confidences is greater than a preset threshold as the container state; or taking the container state whose confidence is both the highest of the first, second, third, fourth and fifth confidences and greater than the preset threshold as the container state.
In an alternative embodiment, the highest of the container state confidences output by the first network model may be selected, and the container state is determined to be the one corresponding to that highest confidence, ensuring the accuracy of the container state.
In another alternative embodiment, a preset threshold may be set, and when a confidence is greater than the preset threshold, the container state is determined to be the one corresponding to that confidence, ensuring the accuracy of the obtained container state. When the first confidence is greater than the second preset value, the container state is determined to be that the top cover of the container is closed; when the second confidence is greater than the third preset value, the container state is determined to be that the container is not loaded with cargo; when the third confidence is greater than the fourth preset value, the container state is determined to be that the amount of cargo loaded in the container meets the preset amount; when the fourth confidence is greater than the fifth preset value, the container state is determined to be that the camera is abnormal; and when the fifth confidence is greater than the sixth preset value, the container state is determined to be that the amount of cargo loaded in the container does not meet the preset amount.
In yet another alternative embodiment, the container state may be determined according to both the highest confidence and the preset threshold: when a confidence satisfies both conditions, the container state is determined to be the one corresponding to that confidence, further improving the accuracy of the container state.
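The three strategies can be made concrete with a short Python sketch; the state names and the threshold value are illustrative, not taken from the patent.

```python
import numpy as np

STATES = ["top cover closed", "no cargo loaded", "cargo meets preset amount",
          "camera abnormal", "cargo below preset amount"]

def decide(confidences, threshold=0.6):
    """Apply the three alternative strategies described above."""
    conf = np.asarray(confidences)
    best = int(conf.argmax())
    by_max = STATES[best]                                       # highest confidence
    by_threshold = [s for s, c in zip(STATES, conf) if c > threshold]
    by_both = STATES[best] if conf[best] > threshold else None  # highest AND above threshold
    return by_max, by_threshold, by_both

# With the 80%/5%/5%/5%/5% example above, all three strategies agree:
print(decide([0.80, 0.05, 0.05, 0.05, 0.05]))
```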
Optionally, determining the container state based on the container state confidence comprises: determining the container state based on the container state confidence and a preset rule, where the preset rule represents the priority of the container state confidences.
Specifically, the judgment can be made according to the judgment priority of the first, second, third, fourth and fifth confidences, and the container state is then output according to the judgment result. The judgment priority can be set by the user or generated randomly. The priority can also be set according to the degree of danger corresponding to each container state; for example, a blocked or faulty camera makes it impossible to judge the states of the container top cover and the cargo amount, while too much cargo loaded in the container prevents the top cover from closing normally, harming the road environment and creating potential safety hazards.
Optionally, determining the container state based on the container state confidence comprises: judging whether the fourth confidence is greater than the fifth preset value; if the fourth confidence is greater than the fifth preset value, determining that the container state is that the camera is abnormal; if the fourth confidence is less than or equal to the fifth preset value, judging whether the first confidence is greater than the second preset value; if the first confidence is greater than the second preset value, determining that the container state is that the top cover of the container is closed; if the first confidence is less than or equal to the second preset value, judging whether the second confidence is greater than the third preset value; if the second confidence is greater than the third preset value, determining that the container state is that the container is not loaded with cargo; if the second confidence is less than or equal to the third preset value, judging whether the third confidence is greater than the fourth preset value; if the third confidence is greater than the fourth preset value, determining that the container state is that the amount of cargo loaded in the container meets the preset amount; if the third confidence is less than or equal to the fourth preset value, judging whether the fifth confidence is greater than the sixth preset value; and if the fifth confidence is greater than the sixth preset value, determining that the container state is that the amount of cargo loaded in the container does not meet the preset amount.
In the above steps, the confidence judgment priority from high to low is: the fourth confidence, the first confidence, the second confidence, the third confidence, and the fifth confidence.
In an alternative embodiment, whether the fourth confidence is greater than the fifth preset value may be judged first; if it is, the camera of the muck truck is abnormal, and the data of the other confidences are then unreliable, so the container state can be determined directly to be that the camera is abnormal, which improves the accuracy of determining the container state.
For example, once it is judged that the fourth confidence is less than or equal to the fifth preset value and that the first confidence is greater than the second preset value, the top cover closure represented by the first confidence is output as the container state; the remaining confidences need not be checked.
The preset amount in the above steps can be set by the user according to the capacity of the container. The second, third, fourth, fifth and sixth preset values may be the same or different, which is not limited herein. A sketch of this cascade follows.
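A minimal Python sketch of the priority cascade, assuming illustrative preset values (the patent leaves the second through sixth preset values to the user):

```python
def decide_by_priority(c1, c2, c3, c4, c5,
                       p2=0.5, p3=0.5, p4=0.5, p5=0.5, p6=0.5):
    """Check the confidences in priority order: camera abnormal first,
    then top cover closed, no cargo, cargo meets preset, cargo below preset."""
    if c4 > p5:
        return "camera abnormal"
    if c1 > p2:
        return "top cover closed"
    if c2 > p3:
        return "no cargo loaded"
    if c3 > p4:
        return "cargo amount meets the preset amount"
    if c5 > p6:
        return "cargo amount does not meet the preset amount"
    return None   # no confidence exceeds its preset value

print(decide_by_priority(0.80, 0.05, 0.05, 0.05, 0.05))   # "top cover closed"
```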
According to the embodiments of the invention, after the container image is acquired, the container image is processed with the first network model to obtain the container state confidence, and the container state is then determined based on the container state confidence. Thus, when the container state is detected, it is possible to detect whether the top cover of the container is effectively closed, whether the container is loaded with cargo, and how much cargo is loaded, and the detection cost is reduced because only a camera is used to acquire the container image. For example, after the container image is acquired, the first network model can analyze it to obtain the confidence that the top cover of the container is closed, the confidence that the container is not loaded with cargo, the confidence that the amount of cargo loaded meets the preset amount, the confidence that the camera is abnormal, and the confidence that the amount of cargo loaded does not meet the preset amount; the container state can then be determined from these confidences. In addition, a camera abnormality can be detected when the acquired container image does not show the container state, so that the container state can be detected effectively. This solves the technical problem that container state detection methods in the related art cannot detect whether the container is loaded with cargo.

Optionally, processing the container image with the first network model to obtain the container state confidence further includes: extracting multi-frame container state confidences corresponding to multiple frames of container images; and inputting the multi-frame container state confidences into the second network model to obtain the container state confidence.
Further, to avoid the unstable confidence results, false detections and missed detections that may arise from single-frame detection, the container state is judged based on the confidence results obtained from multi-frame detection. The number of frames used balances processing efficiency against result quality, so the accuracy of container state detection can be improved while keeping the detection result stable. Specifically, the container state is detected continuously for 30 s; the frames captured in this period are extracted and fed into the first network model separately to obtain the corresponding multi-frame container state confidences, the multi-frame confidences are input into the second network model, and the final container state confidence is obtained, where the second network model may be a support vector machine.
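As an illustration of this fusion step, here is a minimal sketch assuming Python with scikit-learn as the support vector machine; the frame count, the flattened-confidence feature layout, and the synthetic training data are all assumptions for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

N_FRAMES, N_STATES = 30, 5   # e.g. one frame per second over the 30 s window

# Hypothetical training set: each sample flattens the per-frame confidences
# produced by the first network model; the label is the true container state.
rng = np.random.default_rng(0)
X_train = rng.random((200, N_FRAMES * N_STATES))
y_train = rng.integers(0, N_STATES, size=200)

svm = SVC(probability=True)              # the second network model
svm.fit(X_train, y_train)

frame_confidences = rng.random((N_FRAMES, N_STATES))   # output of the CNN
final = svm.predict_proba(frame_confidences.reshape(1, -1))
print(final)                             # final container state confidences
```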
Optionally, the method further comprises: acquiring an original container image; processing the original container image to obtain multiple sets of training samples; and training an initial model with the multiple sets of training samples to obtain the first network model.
The original container image in the above steps may be obtained through a camera, may also be obtained through a network, and may also be obtained from a file stored locally, which is not limited herein.
The training samples in the above step are used to fit the parameters of the initial model, and the model system can be considered established after training. The initial model may be an untrained model, or a model that has already been trained at least once.
It should be noted that the first network model already accounts for both ordinary images and infrared images during training; therefore, performing recognition on the different types of images with the first network model improves the recognition accuracy.
Illustratively, the first network model is trained on ordinary images and infrared images respectively. When the detected container image is an ordinary image, the first network model can identify it based on its training on ordinary images; when the detected container image is an infrared image, the first network model can identify it based on its training records on infrared images. Training on the two different types of container images improves the accuracy of container image recognition and extends the compatibility of the first network model.
The first network model in the above steps may be a ResNet (Residual Neural Network).
Optionally, processing the original container image to obtain multiple sets of training samples includes: perturbing the original container image to obtain multiple sets of original container images, where the perturbation expands and augments the original container image; cropping the multiple sets of original container images to obtain multiple sets of target area images; and preprocessing the multiple sets of target area images to obtain the multiple sets of training samples.
The environment of an actual scene is complex and changeable, while the amount of collected training sample data is limited. To remain adaptable to changing environments during actual detection, the original image samples are expanded and augmented; enriching the training samples makes them more robust against interference from complex environments.
In an alternative embodiment, the original container image can be rotated, translated, scaled, noised, blurred, illumination-changed and channel-changed to obtain multiple sets of original container images, which are then cropped with the ROI to obtain multiple sets of target area images; after the multiple sets of target area images are scaled to a fixed size, the scaled images are normalized to obtain multiple sets of training samples.
In another alternative embodiment, after preprocessing the multiple sets of target area images, perturbation can be applied again to obtain richer training samples, giving them stronger resistance to interference from complex environments.
Optionally, the perturbation processing comprises at least one of: rotation, translation, scaling, noise addition, blurring, illumination variation, channel variation.
In an alternative embodiment, a rotation of 0 to 15 degrees to the left or right may be randomly applied to the original container image.
In another alternative embodiment, the original container image may be translated by 0 to 50 pixels up, down, left, or right.
In another alternative embodiment, the original container image may be randomly scaled by a factor of 0.8 to 1.2.
In another alternative embodiment, Gaussian or salt-and-pepper noise with a mean of 0 to 4.0 may be randomly added to the original container image.
In another alternative embodiment, a Gaussian blur with a kernel (template) size of 0 to 9 may be randomly applied to the original container image.
In another alternative embodiment, each pixel of the original container image may be randomly multiplied by an illumination transform factor of 0.8 to 1.2.
In another alternative embodiment, the channels of the original container image may be converted, for example converting the RGB channels to BRG channels.
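The listed perturbations can be sketched in Python with OpenCV and NumPy as follows; the parameter ranges follow the text (rotation assumed to be in degrees), while the function structure and random choices are illustrative.

```python
import cv2
import numpy as np

def perturb(img, rng=np.random.default_rng()):
    """Apply the perturbations described above to a 3-channel image."""
    h, w = img.shape[:2]
    # Rotation: 0 to 15 degrees, left or right.
    m = cv2.getRotationMatrix2D((w / 2, h / 2), rng.uniform(-15, 15), 1.0)
    img = cv2.warpAffine(img, m, (w, h))
    # Translation: 0 to 50 pixels in x and y.
    tx, ty = rng.integers(-50, 51, size=2)
    img = cv2.warpAffine(img, np.float32([[1, 0, tx], [0, 1, ty]]), (w, h))
    # Scaling: factor 0.8 to 1.2.
    s = rng.uniform(0.8, 1.2)
    img = cv2.resize(img, None, fx=s, fy=s)
    # Gaussian noise with a random standard deviation up to 4.0.
    noise = rng.normal(0, rng.uniform(0, 4.0), img.shape)
    img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    # Gaussian blur with an odd kernel size between 1 and 9.
    k = int(rng.integers(0, 5)) * 2 + 1
    img = cv2.GaussianBlur(img, (k, k), 0)
    # Illumination change: multiply each pixel by 0.8 to 1.2.
    img = np.clip(img * rng.uniform(0.8, 1.2), 0, 255).astype(np.uint8)
    # Channel change: e.g. reorder RGB channels to BRG.
    return img[:, :, [2, 0, 1]]
```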
Optionally, acquiring the container image comprises: detecting whether the current light intensity is greater than the first preset value; if the current light intensity is greater than the first preset value, acquiring the container image with the first camera; and if the current light intensity is less than or equal to the first preset value, acquiring the container image with the second camera.
The first preset value in the above steps can be set by the user based on the requirement.
In the above steps, the first camera may be an RGB camera and the second camera an IR camera. The RGB camera can acquire images under illuminated conditions; the IR camera, an infrared camera, can acquire images in dark conditions.
In an alternative embodiment, a sensor may detect whether the current light intensity is greater than the first preset value. If it is, the container is currently illuminated, and the RGB camera can be used to acquire the container image; if it is less than or equal to the first preset value, the container is currently in a dark state, and the IR camera can be used to acquire the container image.
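A minimal sketch of this selection logic in Python; the threshold value and the sensor/camera objects with their `read()` and `capture()` methods are placeholders, not interfaces defined by the patent.

```python
LIGHT_THRESHOLD = 50.0   # hypothetical first preset value (e.g. in lux)

def acquire_container_image(light_sensor, rgb_camera, ir_camera):
    """Pick the camera by the current light intensity, as described above."""
    if light_sensor.read() > LIGHT_THRESHOLD:
        return rgb_camera.capture()   # illuminated: use the RGB camera
    return ir_camera.capture()        # dark: use the IR (infrared) camera
```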
Optionally, before training the initial model with the multiple sets of training samples to obtain the first network model, the method further includes: adding a normalization layer as the initial layer of the initial model, where the normalization layer normalizes the multiple sets of training samples.
Because infrared images and ordinary images are used together when training the first network model, a batch normalization layer is added as the initial layer of the first network model to avoid the imaging differences between the two. The layer normalizes the data and, as it is trained, gradually eliminates the influence of the two image qualities, mapping the two kinds of images onto the same sample space; this improves the recognition accuracy and recognition speed of the first network model for the container state.
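A minimal PyTorch sketch of this design choice; `resnet18` from torchvision stands in for the backbone (torchvision has no built-in ResNet-10; a hand-built ResNet-10 is sketched later, after the training walkthrough), and the wrapper class is an assumption, not the patent's implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FirstNetworkModel(nn.Module):
    """Prepends a batch normalization layer to the backbone so that
    ordinary (RGB) and infrared training images are gradually mapped
    onto the same sample space, as described above."""
    def __init__(self, num_states=5):
        super().__init__()
        self.input_norm = nn.BatchNorm2d(3)   # normalization layer at the initial layer
        self.backbone = resnet18(num_classes=num_states)

    def forward(self, x):
        return self.backbone(self.input_norm(x))

model = FirstNetworkModel()
logits = model(torch.randn(8, 3, 128, 128))   # a batch of preprocessed images
confidences = torch.softmax(logits, dim=1)    # five container state confidences
```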
The trained first network model may output the container state confidence comprising: the first confidence that the top cover of the container is closed, the second confidence that the container is not loaded with cargo, the third confidence that the amount of cargo loaded in the container meets the preset amount, the fourth confidence that the camera is abnormal, and the fifth confidence that the amount of cargo loaded in the container does not meet the preset amount.
A preferred embodiment of the present invention will be described in detail with reference to fig. 6. As shown in fig. 6, the method may include the steps of:
Step S601, acquiring a muck truck container image;
Optionally, the camera can be selected according to the illumination intensity to acquire the muck truck container image. If the current illumination intensity is greater than the first preset value, the container is currently illuminated, and the RGB camera can be used to acquire the container image; if the current light intensity is less than or equal to the first preset value, the container is currently in a dark state, and the IR camera can be used to acquire the container image.
Step S602, cropping the container image to obtain a target area image;
Optionally, the container image may be cropped with the ROI.
Step S603, preprocessing the target area image;
Optionally, the preprocessing may be scaling and normalization: the target area image may be scaled to a fixed size of 128×128, and the mean value 128 may be subtracted from the pixels of the target area image, the result being divided by the variance 256.
Step S604, feeding the preprocessed target area image into the CNN network to obtain the container state confidence.
Optionally, the container state confidence includes one of the following: the confidence that the top cover of the container is closed, the confidence that the container is not loaded with cargo, the confidence that the amount of cargo loaded in the container meets the preset value, the confidence that the camera is abnormal, and the confidence that the amount of cargo loaded in the container does not meet the preset value.
It should be noted that the top cover of the container being closed means that the lid of the muck truck is completely closed; the container not being loaded with cargo, the amount of cargo in the container meeting the preset value, and the amount of cargo in the container not meeting the preset value all indicate that the top cover of the muck truck is open; and the camera being abnormal indicates that the camera is blocked or deflected, preventing effective detection.
Step S605, determining the container state of the muck truck based on the container state confidence and manually designed logic rules.
Optionally, the container state of the muck truck can be determined based on the confidence of the container state and the muck truck container cover plate signal.
For example, within 30 s of the signal sent by the container cover plate of the muck truck, the multi-frame container state confidences corresponding to the frames collected in those 30 s are input into the support vector machine, which outputs the final container state confidence, and it is then judged from that confidence whether the container state of the muck truck is that the container is not loaded with cargo or that the amount of cargo in the container meets the preset value.
Optionally, the muck truck container state may be determined based on the container state confidence and the transport state of the muck truck.
A preferred embodiment of the first network model training of the present invention is described in detail below with reference to fig. 7. As shown in fig. 7, the method may include the steps of:
Step S701, acquiring an original container image of the muck truck;
In the above step, the original container image of the muck truck may be obtained through a camera, through a network, or from a locally stored file, which is not limited herein.
Optionally, if the original container image of the muck truck is acquired through a camera, the camera can be selected according to the illumination intensity. If the current illumination intensity is greater than the first preset value, the container is currently illuminated, and the RGB camera can be used to acquire the original container image; if it is less than or equal to the first preset value, the container is in a dark state, and the IR camera can be used to acquire the original container image.
Step S702, performing perturbation processing on the original container image to obtain multiple sets of original container images;
Optionally, the original container image may be rotated, translated, scaled, noised, blurred, subjected to illumination changes, and subjected to channel changes to obtain multiple sets of original container images.
Optionally, a rotation of 0 to 15 degrees to the left or right may be randomly applied to the original container image.
Optionally, the original container image may be translated by 0 to 50 pixels up, down, left, or right.
Optionally, the original container image may be randomly scaled by a factor of 0.8 to 1.2.
Optionally, Gaussian noise or salt-and-pepper noise with a mean value of 0 to 4.0 may be randomly added to the original container image.
Optionally, a Gaussian blur with a kernel size of 0 to 9 may be randomly applied to the original container image.
Optionally, each pixel of the original container image may be randomly multiplied by an illumination factor of 0.8 to 1.2.
Optionally, the channels of the original container image may be reordered, for example, converting RGB channels into BRG channels; a combined sketch of these perturbations is given below.
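The following sketch combines the perturbations listed above; the parameter ranges follow the text, while the sampling details (uniform draws, odd blur kernels, a 3-channel input, and the function name perturb) are illustrative assumptions.

```python
import cv2
import numpy as np

def perturb(img, rng=None):
    # img: a 3-channel uint8 container image (an assumption).
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), rng.uniform(-15, 15), 1.0)  # rotate 0-15 degrees
    m[:, 2] += rng.uniform(-50, 50, size=2)                                 # translate 0-50 pixels
    img = cv2.warpAffine(img, m, (w, h))
    s = rng.uniform(0.8, 1.2)                                               # scale by 0.8-1.2
    img = cv2.resize(img, None, fx=s, fy=s)
    noise = rng.normal(0.0, rng.uniform(0.0, 4.0), img.shape)               # Gaussian noise
    img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    k = int(rng.choice([1, 3, 5, 7, 9]))                                    # blur kernel up to 9
    img = cv2.GaussianBlur(img, (k, k), 0)
    gain = rng.uniform(0.8, 1.2)                                            # illumination change
    img = np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    return img[:, :, rng.permutation(3)]                                    # channel reorder
```

In practice, calling perturb several times on one original image yields the multiple sets of perturbed images used in step S703.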
Step S703, clipping multiple groups of original container images to obtain multiple groups of target area images;
Optionally, the multiple sets of original container images may be cropped according to the ROI to obtain multiple sets of target area images.
Step S704, preprocessing a plurality of groups of target area images to obtain a plurality of groups of training samples;
wherein the preprocessing comprises scaling processing and normalization processing.
Optionally, the multiple sets of target area images may be scaled to a fixed size of 128 × 128; the mean value 128 is then subtracted from the pixels of the scaled images, and the result is divided by the variance 256, to obtain multiple sets of training samples.
Step S705, a first network model is constructed, and a plurality of groups of training samples are sent into the first network model for training.
Optionally, the constructed first network model may be ResNet-10.
Optionally, the confidence output by the trained first network model has five categories: the confidence that the top cover of the container is closed, the confidence that no cargo is loaded in the container, the confidence that the amount of cargo loaded in the container meets the preset amount, the confidence that the camera is abnormal, and the confidence that the amount of cargo loaded in the container does not meet the preset amount.
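A rough sketch of such a five-way classifier is given below in PyTorch. The model follows the usual ResNet-10 layout (a 7×7 stem plus one residual block per stage); the patent names only "ResNet-10", so the exact architecture here is an illustrative reconstruction.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, cin, cout, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(cin, cout, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(cout)
        self.conv2 = nn.Conv2d(cout, cout, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(cout)
        self.down = (nn.Sequential(nn.Conv2d(cin, cout, 1, stride, bias=False),
                                   nn.BatchNorm2d(cout))
                     if stride != 1 or cin != cout else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.down(x))

class ResNet10(nn.Module):
    def __init__(self, num_classes=5):  # five container-state confidences
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 64, 7, 2, 3, bias=False),
                                  nn.BatchNorm2d(64), nn.ReLU(inplace=True),
                                  nn.MaxPool2d(3, 2, 1))
        self.blocks = nn.Sequential(BasicBlock(64, 64), BasicBlock(64, 128, 2),
                                    BasicBlock(128, 256, 2), BasicBlock(256, 512, 2))
        self.head = nn.Linear(512, num_classes)

    def forward(self, x):                      # x: (N, 3, 128, 128)
        x = self.blocks(self.stem(x))
        x = x.mean(dim=(2, 3))                 # global average pooling
        return torch.softmax(self.head(x), 1)  # five-way container state confidence
```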
It should be noted that the top cover being closed may correspond to the container being loaded with cargo and the top cover closing normally; no cargo being loaded in the container may correspond to the top cover being open and the container being empty; the amount of cargo meeting the preset amount and the amount of cargo not meeting the preset amount distinguish, with the top cover open, whether the loaded amount would still allow the top cover to close normally (for example, too much loaded cargo may prevent the top cover from closing normally); and the camera abnormality may correspond to the camera being blocked or deflected, so that the state in the container cannot be effectively detected.
Embodiment 2
According to an embodiment of the present invention, a device for detecting a container state is further provided. The device may perform the method for detecting a container state in the foregoing embodiment; the specific implementation and preferred application scenarios are the same as those in the foregoing embodiment and are not repeated here.
Fig. 8 is a schematic view of a container state detection apparatus according to an embodiment of the present invention. As shown in fig. 8, the apparatus includes:
An acquisition module 82 for acquiring the container image.
A processing module 84, configured to process the container image using the first network model to obtain a container state confidence.
A determination module 86 for determining a container state based on the container state confidence, wherein the container state comprises one of: the top cover of the container is closed, the goods are not loaded in the container, the quantity of the goods loaded in the container meets the preset quantity, the camera is abnormal, and the quantity of the goods loaded in the container does not meet the preset quantity.
Optionally, the processing module comprises: the cutting unit is used for cutting the container image to obtain a target area image; the preprocessing unit is used for preprocessing the target area image to obtain a processed image; and the input unit is used for inputting the processed image into the first network model to obtain the confidence coefficient of the container state.
Optionally, the confidence levels of the container state in the processing module include a first confidence level indicating that the top cover of the container is closed, a second confidence level indicating that no cargo is loaded in the container, a third confidence level indicating that the cargo amount loaded with cargo in the container satisfies a preset amount, a fourth confidence level indicating that the camera is abnormal, and a fifth confidence level indicating that the cargo amount loaded with cargo in the container does not satisfy the preset amount.
Optionally, the processing module is further configured to extract the multi-frame container state confidences corresponding to the multi-frame container images, and to input the multi-frame container state confidences into the second network model to obtain the container state confidence.
Optionally, the determining module includes: a first extraction unit configured to extract, as the container state, the container state corresponding to the highest value among the first confidence, the second confidence, the third confidence, the fourth confidence and the fifth confidence; a second extraction unit configured to extract, as the container state, the container state whose confidence among the first to fifth confidences is greater than a preset threshold; and a third extraction unit configured to extract the highest value among the first to fifth confidences and to take, as the container state, the container state corresponding to that value when it is greater than the preset threshold.
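The three extraction strategies can be sketched as follows; the state names and the tie-breaking when several confidences exceed the threshold are assumptions, since the text leaves them open.

```python
import numpy as np

STATES = ["top cover closed", "no cargo loaded", "cargo amount meets preset",
          "camera abnormal", "cargo amount does not meet preset"]

def by_highest(conf):
    # First extraction unit: the state with the highest confidence.
    return STATES[int(np.argmax(conf))]

def by_threshold(conf, thr):
    # Second extraction unit: a state whose confidence exceeds the preset threshold.
    hits = [s for s, c in zip(STATES, conf) if c > thr]
    return hits[0] if hits else None

def by_highest_and_threshold(conf, thr):
    # Third extraction unit: the highest confidence, accepted only if above the threshold.
    i = int(np.argmax(conf))
    return STATES[i] if conf[i] > thr else None
```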
Optionally, the determining module is further configured to determine the container state based on the container state confidence and a preset rule, where the preset rule is used to represent the priority among the container state confidences.
Optionally, the determining module further comprises: a first judging unit, configured to judge whether the fourth confidence is greater than a fifth preset value, and if so, determine that the container state is the camera abnormality; a second judging unit, configured to judge, if the fourth confidence is less than or equal to the fifth preset value, whether the first confidence is greater than a second preset value, and if so, determine that the container state is that the top cover of the container is closed; a third judging unit, configured to judge, if the first confidence is less than or equal to the second preset value, whether the second confidence is greater than a third preset value, and if so, determine that the container state is that no cargo is loaded in the container; a fourth judging unit, configured to judge, if the second confidence is less than or equal to the third preset value, whether the third confidence is greater than a fourth preset value, and if so, determine that the container state is that the amount of cargo loaded in the container meets the preset amount; and a fifth judging unit, configured to judge, if the third confidence is less than or equal to the fourth preset value, whether the fifth confidence is greater than a sixth preset value, and if so, determine that the container state is that the amount of cargo loaded in the container does not meet the preset amount.
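A compact sketch of this priority cascade follows; the parameter names track the second to sixth preset values in the text, and the fallback when no confidence exceeds its threshold is an assumption.

```python
def decide_state(c1, c2, c3, c4, c5, p2, p3, p4, p5, p6):
    # c1..c5: the five confidences; p2..p6: the second to sixth preset values.
    if c4 > p5:
        return "camera abnormal"
    if c1 > p2:
        return "top cover closed"
    if c2 > p3:
        return "no cargo loaded"
    if c3 > p4:
        return "cargo amount meets preset"
    if c5 > p6:
        return "cargo amount does not meet preset"
    return None  # behavior when no threshold is met is unspecified in the text
```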
Optionally, the pre-processing in the pre-processing unit comprises at least one of: scaling and normalization.
Optionally, the acquisition module is further configured to acquire an original container image; the processing module is further configured to process the original container image to obtain multiple sets of training samples; and the apparatus further comprises a training module configured to train an initial model with the multiple sets of training samples to obtain the first network model.
Optionally, the processing module further comprises: the disturbance processing unit is used for carrying out disturbance processing on the original container images to obtain a plurality of groups of original container images; the cutting unit is also used for cutting a plurality of groups of original container images to obtain a plurality of groups of target area images; the preprocessing unit is also used for preprocessing the multiple groups of target area images to obtain multiple groups of training samples.
Optionally, the perturbation processing in the perturbation processing unit includes at least one of: rotation, translation, scaling, noise addition, blurring, illumination variation, channel variation.
Optionally, the acquisition module further includes: a detection unit configured to detect whether the current light intensity is greater than a first preset value; a first acquisition unit configured to acquire the container image with a first camera when the current light intensity is greater than the first preset value; and a second acquisition unit configured to acquire the container image with a second camera when the current light intensity is less than or equal to the first preset value.
Optionally, the processing module is further configured to add a normalization layer as the initial layer of the initial model, where the normalization layer is configured to normalize the multiple sets of training samples.
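A one-line sketch of prepending such a normalization layer, reusing the ResNet10 sketch above; BatchNorm2d over the three input channels is one plausible choice, since the text does not name the layer type.

```python
import torch.nn as nn

# Normalization layer added as the initial layer of the initial model (an assumption).
model_with_norm = nn.Sequential(nn.BatchNorm2d(3), ResNet10(num_classes=5))
```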
Embodiment 3
According to an embodiment of the present invention, there is further provided a computer-readable storage medium, where the computer-readable storage medium includes a stored program, and when the program runs, the device where the computer-readable storage medium is located is controlled to perform the method for detecting the container state in Embodiment 1.
Embodiment 4
According to an embodiment of the present invention, there is further provided a processor, where the processor is configured to run a program, and when the program runs, the method for detecting the container state in Embodiment 1 is performed.
The serial numbers of the above embodiments of the present invention are for description only and do not imply the relative merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of units may be a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.

Claims (16)

1. A method of detecting the condition of a cargo box, comprising:
acquiring a container image;
processing the container image by using a first network model to obtain a container state confidence coefficient;
determining a container state based on the container state confidence, wherein the container state comprises one of: the top cover of the container is closed, the goods are not loaded in the container, the quantity of the goods loaded in the container meets the preset quantity, the camera is abnormal, and the quantity of the goods loaded in the container does not meet the preset quantity.
2. The method of claim 1, wherein processing the container image using a first network model to obtain a container state confidence comprises:
clipping the container image to obtain a target area image;
preprocessing the target area image to obtain a processed image;
and inputting the processed image into the first network model to obtain the confidence of the container state.
3. The method of claim 2, wherein the container state confidence comprises: the first confidence coefficient is used for representing the top cover closing of the container, the second confidence coefficient is used for representing the goods which are not loaded in the container, the third confidence coefficient is used for representing that the goods quantity loaded in the container meets the preset quantity, the fourth confidence coefficient is used for representing the camera abnormity, and the fifth confidence coefficient is used for representing that the goods quantity loaded in the container does not meet the preset quantity.
4. The method of claim 1, wherein the container image is processed using a first network model to obtain a container state confidence, further comprising:
extracting multi-frame container state confidence degrees corresponding to the container images;
and inputting the confidence coefficient of the multi-frame container state into a second network model, and acquiring the confidence coefficient of the container state.
5. The method of claim 3, wherein determining a container state based on the container state confidence comprises:
extracting, as the container state, the container state corresponding to the highest value among the first confidence, the second confidence, the third confidence, the fourth confidence and the fifth confidence; or,
extracting, as the container state, the container state whose confidence among the first confidence, the second confidence, the third confidence, the fourth confidence and the fifth confidence is greater than a preset threshold; or,
extracting the highest value among the first confidence, the second confidence, the third confidence, the fourth confidence and the fifth confidence, wherein the container state corresponding to the confidence that is greater than a preset threshold is the container state.
6. The method of claim 3, wherein determining a container state based on the container state confidence comprises: determining the container state based on the container state confidence and a preset rule, wherein the preset rule is used for representing the priority of the container state confidence.
7. The method of claim 6, wherein determining the container state based on the container state confidence comprises:
judging whether the fourth confidence coefficient is larger than a fifth preset value;
if the fourth confidence coefficient is greater than the fifth preset value, determining that the container state is the camera abnormity;
if the fourth confidence coefficient is less than or equal to the fifth preset value, judging whether the first confidence coefficient is greater than a second preset value;
if the first confidence is greater than the second preset value, determining that the container state is that the container top cover is closed;
if the first confidence coefficient is smaller than or equal to the second preset value, judging whether the second confidence coefficient is larger than a third preset value;
if the second confidence coefficient is greater than the third preset value, determining that the container state is that no goods are loaded in the container;
if the second confidence coefficient is smaller than or equal to the third preset value, judging whether the third confidence coefficient is larger than a fourth preset value;
if the third confidence coefficient is greater than the fourth preset value, determining that the container state is that the cargo quantity of the cargo loaded in the container meets a preset quantity;
if the third confidence coefficient is smaller than or equal to the fourth preset value, judging whether the fifth confidence coefficient is larger than a sixth preset value;
and if the fifth confidence coefficient is greater than the sixth preset value, determining that the container state is that the cargo quantity of the cargo loaded in the container does not meet the preset quantity.
8. The method of claim 2, wherein the pre-processing comprises at least one of: scaling and normalization.
9. The method of claim 1, further comprising:
acquiring an original container image;
processing the original container images to obtain a plurality of groups of training samples;
and training an initial model by using the plurality of groups of training samples to obtain the first network model.
10. The method of claim 9, wherein processing the raw container images to obtain a plurality of sets of training samples comprises:
disturbing the original container images to obtain a plurality of groups of original container images;
cutting the multiple groups of original container images to obtain multiple groups of target area images;
and preprocessing the multiple groups of target area images to obtain multiple groups of training samples.
11. The method of claim 10, wherein the perturbation process comprises at least one of: rotation, translation, scaling, noise addition, blurring, illumination variation, channel variation.
12. The method of claim 1, wherein obtaining an image of the cargo box comprises:
detecting whether the current light intensity is greater than a first preset value;
if the current light intensity is larger than the first preset value, acquiring the container image by using a first camera;
and if the current light intensity is smaller than or equal to the first preset value, acquiring the container image by using a second camera.
13. The method of claim 9, wherein before training an initial model with the plurality of sets of training samples to obtain the first network model, the method further comprises:
and adding a normalization layer in an initial layer of the initial model, wherein the normalization layer is used for performing normalization processing on the multiple groups of training samples.
14. A cargo box condition detection device, comprising:
the acquisition module is used for acquiring a container image;
the processing module is used for processing the container image by utilizing a first network model to obtain a container state confidence coefficient;
a determination module to determine a container state based on the container state confidence, wherein the container state comprises one of: the top cover of the container is closed, the goods are not loaded in the container, the quantity of the goods loaded in the container meets the preset quantity, the camera is abnormal, and the quantity of the goods loaded in the container does not meet the preset quantity.
15. A storage medium, characterized in that the storage medium comprises a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to perform the method for detecting the state of the container according to any one of claims 1 to 13.
16. A processor for running a program, wherein the program, when run, performs the method for detecting the state of a container according to any one of claims 1 to 13.
