CN117893891B - Space utilization rate measuring and calculating method and system based on machine learning

Space utilization rate measuring and calculating method and system based on machine learning

Info

Publication number
CN117893891B
Authority
CN
China
Prior art keywords: cargo, image, images, background, machine learning
Legal status
Active
Application number
CN202410269739.6A
Other languages
Chinese (zh)
Other versions
CN117893891A (en)
Inventor
吴朕
曾海坚
张巍
Current Assignee
Shenzhen Anytrek Technology Co ltd
Original Assignee
Shenzhen Anytrek Technology Co ltd
Application filed by Shenzhen Anytrek Technology Co ltd
Priority to CN202410269739.6A
Publication of CN117893891A
Application granted
Publication of CN117893891B


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a space utilization rate measuring and calculating method based on machine learning, which comprises the following steps: capturing a plurality of empty images of the space to be measured when no cargo is stored; preprocessing all acquired empty images to form a training data set; inputting the training data set into a machine learning algorithm for image-background recognition training to obtain a background recognition model; capturing at least one cargo image of the space to be measured when cargo is stored; preprocessing all acquired cargo images to form a test data set; inputting the test data set into the background recognition model to recognize the image background and separate out the cargo areas in all cargo images; and measuring and calculating the space utilization rate of the space to be measured from the separated cargo areas and the empty images. The method can measure and calculate the space utilization rate automatically, with high precision, high efficiency and low labor cost. The invention also discloses a system for implementing the method.

Description

Space utilization rate measuring and calculating method and system based on machine learning
Technical Field
The invention relates to application of machine learning in the field of logistics transportation management, in particular to a space utilization rate measuring and calculating method and system based on machine learning.
Background
In the field of logistics transportation, warehouses are needed to store goods and trucks are needed to transport them, so the space utilization rate of a warehouse or of a truck's cargo hold is an important index for logistics transportation management.
Existing space utilization rate measurement mainly relies on manual judgment: a rough estimate is made by visually observing how the goods are stacked. This approach is subjective, labor-intensive and inefficient, and cannot accurately measure the space utilization rate of the cargo hold.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a space utilization rate measuring and calculating method and system based on machine learning, which can automatically measure and calculate the space utilization rate with high precision, high efficiency and low labor cost.
The technical problem to be solved by the invention is solved by the following technical solution:
a space utilization rate measuring and calculating method based on machine learning comprises the following steps:
step 100: shooting to obtain a plurality of empty images of the space to be tested when the goods are not stored;
Step 200: preprocessing all acquired idle images to form a training data set;
Step 300: inputting the formed training data set into a machine learning algorithm to perform recognition training of image background, and obtaining a background recognition model;
step 400: shooting and obtaining at least one cargo image of the space to be tested when goods are stored;
Step 500: preprocessing all acquired cargo images to form a test data set;
step 600: inputting the formed test data set into the background recognition model to recognize image background, and separating cargo areas in all cargo images;
step 700: and measuring and calculating the space utilization rate of the space to be measured according to the separated cargo area and the empty load image.
Further, the training data set includes a plurality of linked lists, one linked list corresponding to one empty image; each node in a linked list corresponds to one pixel point in the empty image and includes the parameter values and a label of the corresponding pixel point, the parameter values including a position parameter and a gray scale parameter, and the label being the background.
Further, in step 200, the step of preprocessing all the acquired idle images is as follows:
Step 210: traversing all pixel points in a single empty image, and extracting parameter values of all pixel points in the single empty image;
Step 220: establishing a background array, and adding the extracted parameter values of all pixel points in the single empty image into the background array;
step 230: establishing a linked list, adding the parameter values from the background array into the linked list as nodes, and adding a label to each node, the label being the background;
step 240: steps 210 to 230 are repeated until a corresponding linked list has been established for every empty image.
Further, the test data set includes a plurality of cargo arrays, one cargo array corresponding to each cargo image, the cargo arrays including parameter values corresponding to all pixels in the cargo image, the parameter values including a position parameter and a gray scale parameter.
Further, in step 500, the steps of preprocessing all acquired cargo images are as follows:
step 510: traversing all pixel points in the single cargo image, and extracting parameter values of all pixel points in the single cargo image;
step 520: establishing a cargo array, and adding the extracted parameter values of all pixel points in the single cargo image into the cargo array;
step 530: steps 510-520 are repeated until all cargo images have a corresponding cargo array established.
Further, the machine learning algorithm includes a KNN algorithm.
Further, in step 600, the step of inputting the formed test data set into the background recognition model to recognize the image background and separating the cargo areas in all cargo images is as follows:
Step 610: setting a D value and a K value of the KNN algorithm;
step 620: creating a full 255 mask map for a single cargo image;
step 630: extracting, from all the empty images, all adjacent pixel points whose Euclidean distance to a given pixel point of the single cargo image is within the D value;
step 640: calculating the gray difference value between the pixel point and the adjacent pixel points in all empty images;
step 650: extracting K gray level difference values with minimum difference values;
step 660: judging whether the K gray difference values are lower than a preset gray clustering threshold value or not, and counting the times of the K gray difference values lower than the gray clustering threshold value;
Step 670: judging whether the number of times lower than the gray clustering threshold value in the K gray difference values exceeds a preset number of times threshold value, if yes, judging the pixel point as a background, and if not, judging the pixel point as goods;
Step 680: repeating steps 630 to 670 until all the pixels which are judged to be cargoes in the single Zhang Zaihuo image are obtained, and setting the gray scale parameters of all the pixels which have the same position parameters as the pixels which are judged to be the background in the 255 mask map to 0 from 255 to obtain the cargo area in the single cargo image;
Step 690: steps 620 through 680 are repeated until cargo areas in all cargo images are obtained.
Further, in step 300, the machine learning algorithm is trained to recognize the image background in a multithreaded parallel manner.
Further, in step 700, the space utilization rate of the space to be measured is η = (m/a) × γ² × 100%, where m is the number of pixel points in the cargo area, a is the number of pixel points in the empty image, and γ is an imaging correction parameter.
A space utilization measuring and calculating system based on machine learning comprises a shooting module, a first preprocessing module, a second preprocessing module, a machine learning module and a measuring and calculating module, wherein
The shooting module is used for shooting and acquiring a plurality of empty images and at least one cargo image of the space to be detected when the cargo is not stored and the cargo is stored respectively;
The first preprocessing module is used for preprocessing all acquired idle images to form a training data set;
The second preprocessing module is used for preprocessing all acquired cargo images to form a test data set;
The machine learning module is used for inputting the formed training data set into a machine learning algorithm to perform recognition training of the image background, obtaining a background recognition model, inputting the formed test data set into the background recognition model to recognize the image background, and separating cargo areas in all cargo images;
the measuring and calculating module is used for measuring and calculating the space utilization rate of the space to be measured according to the separated cargo area and the empty image.
The invention has the following beneficial effects: the method uses computer vision and a machine learning algorithm to measure and calculate the space utilization rate automatically; the high degree of automation greatly simplifies the manual statistics work of the past and avoids the poor accuracy of manual statistics. The KNN algorithm is used to segment the images and separate out the cargo areas, so the method adapts to illumination changes and is more robust than traditional image segmentation methods. Measurements can be repeated, so the utilization of the space to be measured can be monitored in real time, which facilitates subsequent space optimization. Meanwhile, the hardware cost is low: only a camera at a fixed position needs to be installed, and the measuring and calculating process is completed automatically by the software algorithm. The method also has a certain universality and can be applied to calculating the utilization rate of various enclosed spaces, which is of significant technical progress.
Drawings
Fig. 1 is a block diagram of steps of a space utilization measuring and calculating method provided by the invention.
Fig. 2 is a block diagram showing step 200 of the space utilization measurement method according to the present invention.
Fig. 3 is a block diagram illustrating steps 500 in the space utilization measurement method according to the present invention.
Fig. 4 is a block diagram illustrating steps 600 in the space utilization measurement method according to the present invention.
Fig. 5 is a schematic block diagram of a space utilization measurement system provided by the present invention.
Detailed Description
The present invention is described in detail below with reference to the drawings and the embodiments, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
In the description of the present invention, it should be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first," "second," or "third" may explicitly or implicitly include one or more of such features. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," "disposed," and the like are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, or can be communicated between two elements or the interaction relationship between the two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
Example 1
As shown in fig. 1, a space utilization measuring and calculating method based on machine learning includes the following steps:
Step 100: and shooting to obtain a plurality of empty images of the space to be tested when the goods are not stored.
In this step 100, the empty images are obtained by installing a camera in the space to be measured and photographing the space to be measured while no cargo is stored in it. The installation position of the camera should take into account the size and shape of the space to be measured and whether it is an open or enclosed space; for example, the camera may be installed at the front, rear, left, right, left front, right front, left rear or right rear of the space to be measured. The camera should be mounted relatively high, so that it can capture as much of the space to be measured as possible from above, and it should at least be able to capture the side of the space to be measured opposite to its installation position.
The empty image is preferably an RGB full-color image, but may also be a single-channel or other multi-channel image.
Preferably, the plurality of empty images are obtained by photographing the space to be measured under different illumination conditions. The camera, at the same installation position, photographs the space to be measured without stored cargo under different illumination conditions to obtain a plurality of empty images under those conditions, so that background recognition errors caused by varying illumination are eliminated in the subsequent image-background recognition training.
For example, the camera may photograph the space to be measured at different times of day; since the brightness, chromaticity, color temperature and contrast of sunlight differ between time periods, empty images taken at different times eliminate background recognition errors caused by differing sunlight during background recognition training. The camera may photograph the space to be measured under different lighting devices; since the brightness, chromaticity, color temperature and contrast of different lighting devices differ, empty images taken under different lighting devices eliminate background recognition errors caused by the lighting devices. The camera may also photograph the space to be measured with its door open, closed and half open; since the brightness and shadows of the space differ between these door states, empty images taken in different door states eliminate background recognition errors caused by the opening and closing of the door.
In order to improve the accuracy of the image-background recognition training as much as possible, the plurality of empty images should contain images of the space to be measured under at least seven different illumination conditions.
Step 200: And preprocessing all acquired empty images to form a training data set.
In this step 200, the training data set includes a plurality of linked lists, one linked list corresponding to one empty image; each node in a linked list corresponds to one pixel point in the empty image and includes the parameter values and a label of the corresponding pixel point, the parameter values including a position parameter and a gray scale parameter, and the label being the background.
The position parameter refers to the lateral and longitudinal positions of each pixel point in the empty image; if a rectangular coordinate system is established with the lateral direction of the empty image as the X axis and the longitudinal direction as the Y axis, the position parameter of each pixel point can be expressed as (X, Y) in that coordinate system. The gray scale parameter includes the gray scale values of all channels of the empty image, such as the gray scale values of the R, G and B channels.
Specifically, as shown in fig. 2, in step 200, the steps of preprocessing all the acquired empty images are as follows (an illustrative code sketch is given after step 240):
Step 210: Traversing all pixel points in a single empty image, and extracting the parameter values of all pixel points in the single empty image.
Step 220: Establishing a background array, and adding the extracted parameter values of all pixel points in the single empty image into the background array.
Step 230: Establishing a linked list, adding the parameter values from the background array into the linked list as nodes, and adding a label to each node, the label being the background.
Step 240: Steps 210 to 230 are repeated until a corresponding linked list has been established for every empty image.
Step 300: and inputting the formed training data set into a machine learning algorithm to perform recognition training of the image background, and obtaining a background recognition model.
In this step 300, the machine learning algorithm is preferably used to perform image background recognition training in a multithreading parallel manner, so as to improve training efficiency.
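Step 300 does not fix an implementation. On one reading, "training" the KNN background model amounts to indexing the labeled background pixels of every empty image so that they can later be looked up by position. The sketch below distributes the per-image indexing over a thread pool, in the spirit of the multithreaded training described here; the names and node layout (dictionaries from the sketch above) are assumptions, and in CPython a ProcessPoolExecutor could be substituted where true CPU parallelism is needed.

from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def index_linked_list(nodes):
    """Index one empty image's background nodes by pixel position for fast lookup."""
    index = defaultdict(list)
    for node in nodes:
        index[node["position"]].append(node["gray"])
    return index

def train_background_model(training_data_set, max_workers=4):
    # Each empty image is indexed in its own worker; the list of per-image
    # indexes plays the role of the background recognition model.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(index_linked_list, training_data_set))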
Step 400: and shooting and acquiring at least one cargo image of the space to be tested when the cargo is stored.
In this step 400, the cargo image is obtained by installing a camera in the space to be measured and photographing the space to be measured while cargo is stored in it. The camera that photographs the cargo images should, as far as possible, be the same camera that photographs the empty images, so as to avoid background recognition errors caused by differences in camera parameters, and it should be installed at the same position as the camera that photographs the empty images, so as to avoid background recognition errors caused by different installation positions. It will be appreciated that, apart from the presence or absence of cargo, the cargo images and the empty images should be essentially identical, although the illumination conditions may differ slightly.
The cargo image is preferably an RGB full-color image, but may also be a single-channel or other multi-channel image.
Step 500: all acquired cargo images are preprocessed to form a test data set.
In this step 500, the test data set includes a plurality of cargo arrays, one cargo array corresponding to each cargo image, the cargo arrays including parameter values corresponding to all pixels in the cargo image, the parameter values including a position parameter and a gray scale parameter.
The position parameters refer to the lateral position and the longitudinal position of each pixel point in the cargo image, and if a rectangular coordinate system is established by taking the lateral direction of the cargo image as an X axis and the longitudinal direction as a Y axis, the position parameters of each pixel point can be expressed as (X, Y) in the rectangular coordinate system. The gray scale parameters include gray scale values of all channels in the cargo image, such as the gray scale value of the R channel, the gray scale value of the G channel, and the gray scale value of the B channel.
Specifically, as shown in fig. 3, in step 500, the steps of preprocessing all acquired cargo images are as follows:
step 510: traversing all pixel points in the single cargo image, and extracting parameter values of all pixel points in the single cargo image;
step 520: establishing a cargo array, and adding the extracted parameter values of all pixel points in the single cargo image into the cargo array;
step 530: steps 510-520 are repeated until all cargo images have a corresponding cargo array established.
Step 600: and inputting the formed test data set into the background recognition model to recognize the image background, and separating the cargo areas in all cargo images.
In this step 600, the machine learning algorithm includes a KNN algorithm by which background recognition errors due to camera shake can be eliminated.
Specifically, as shown in fig. 4, in step 600, the step of inputting the formed test data set into the background recognition model to recognize the image background and separating the cargo areas in all cargo images is as follows:
step 610: and setting a D value and a K value of the KNN algorithm.
Step 620: a full 255 mask map is created for a single Zhang Zaihuo image.
In this step 620, the size of the full 255 mask map is the same as the size of the cargo image and the empty image, and the gray scale parameters of all pixels are 255.
Step 630: and extracting all adjacent pixel points, the Euclidean distance of which is within the D value, of a pixel point of the single cargo image in all the empty images.
In this step 630, if the camera does not shake during shooting, the position parameters of each pixel point of the camera in the cargo image and the empty image are the same, and if the camera shakes during shooting, the position parameters of each pixel point of the camera in the cargo image and the empty image are different.
Assuming that the position parameter of a certain pixel point of the cargo image is (X, Y), the abscissa range of the pixel point in all adjacent pixel points in the empty image is between [ X-Dcos theta, X+ Dcos theta ] and the ordinate range is between [ Y-Dsin theta, Y+Dsin theta ], wherein D is a D value set in the KNN algorithm, theta is between 0 and 360 degrees, namely the adjacent pixel points extracted from all the empty image fall in the same circular area, and the circular area takes the position parameter (X1, Y1) as a circle center and takes the D value set in the KNN algorithm as a radius.
Specifically, the position parameter of a certain pixel point in the cargo array corresponding to the cargo image is matched with the position parameters of all nodes in the linked list node, so that the gray scale parameters of all adjacent pixel points with Euclidean distances within a D value with the pixel point are extracted from the linked list node.
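One way to realize this neighborhood lookup, assuming the per-image position indexes produced by the training sketch above; the function name and the bounding-square scan are illustrative choices, not mandated by the patent.

import math

def neighbours_within_d(position, background_indexes, d_value):
    """Return the gray values of all background pixels within distance D of `position`."""
    x0, y0 = position
    d_int = int(math.ceil(d_value))
    grays = []
    for index in background_indexes:             # one index per empty image
        for dx in range(-d_int, d_int + 1):      # scan the bounding square ...
            for dy in range(-d_int, d_int + 1):
                if dx * dx + dy * dy <= d_value * d_value:   # ... but keep only the circle
                    grays.extend(index.get((x0 + dx, y0 + dy), []))
    return grays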
Step 640: and calculating the gray level difference value of the pixel point and the adjacent pixel points in all the empty images.
In step 640, the gray level differences of all channels between the pixel and all neighboring pixels need to be calculated, and the gray level differences of all channels are averaged to obtain a final gray level difference.
Step 650: k gray scale differences with the smallest difference are extracted.
In step 650, the calculated gray level differences between the pixel point and all neighboring pixel points are ranked, and K gray level differences with the smallest difference are extracted according to a preset K value.
Step 660: judging whether the K gray level difference values are lower than a preset gray level clustering threshold value or not, and counting the times of the K gray level difference values lower than the gray level clustering threshold value.
In step 660, the extracted K gray differences are compared with the gray cluster threshold, and it is further determined whether the K gray differences are lower than the gray cluster threshold, and the number of times of being lower than the gray cluster threshold.
Step 670: judging whether the number of times lower than the gray level clustering threshold value in the K gray level difference values exceeds a preset number of times threshold value, if yes, judging the pixel point as a background, and if not, judging the pixel point as goods.
Step 680: and repeating the steps 630 to 670 until all the pixels which are judged to be cargoes in the single Zhang Zaihuo image are obtained, and setting the gray scale parameters of all the pixels which have the same position parameters as the pixels which are judged to be the background in the 255 mask map to 0 from 255 to obtain the cargo area in the single cargo image.
In this step 680, the image generated via the full 255 mask map includes a region having a gray scale parameter of 0 and a region having a gray scale parameter of 255, wherein the region having a gray scale parameter of 0 corresponds to the background region of the cargo image and the region having a gray scale parameter of 255 corresponds to the cargo region of the cargo image.
Step 690: steps 620 through 680 are repeated until cargo areas in all cargo images are obtained.
Step 700: and measuring and calculating the space utilization rate of the space to be measured according to the separated cargo area and the empty load image.
In this step 700, the space utilization rate of the space to be measured is η = (m/a) × γ² × 100%, where m is the number of pixel points in the cargo area, a is the number of pixel points in the empty image, and γ is an imaging correction parameter.
If there is only one cargo image, the pixel-point ratio between the cargo area in that cargo image and the empty image is calculated directly by the above formula and taken as the space utilization rate of the space to be measured; if there are two or more cargo images, the pixel-point ratio between the cargo area in each cargo image and the empty image is calculated by the above formula, and the average of these ratios is taken as the space utilization rate of the space to be measured.
The imaging correction parameter γ is related to the camera parameters, the size of the space to be measured and the installation position of the camera in the space to be measured; it can be obtained by photographing and calibrating the space to be measured with the camera once the camera has been installed and its parameters set.
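A small sketch of the calculation in step 700, assuming NumPy and masks in which 255 marks the cargo area; the averaging over several cargo images follows the rule described above, and the value of γ is treated as a calibration constant supplied by the user.

import numpy as np

def space_utilization(masks, empty_shape, gamma):
    """eta = (m / a) * gamma^2 * 100%, averaged over all cargo images."""
    a = empty_shape[0] * empty_shape[1]                       # pixel count of the empty image
    ratios = [float(np.count_nonzero(m == 255)) / a for m in masks]   # m / a per cargo image
    return (sum(ratios) / len(ratios)) * gamma ** 2 * 100.0

# Example: one 480x640 mask in which a quarter of the pixels are cargo
demo_mask = np.zeros((480, 640), dtype=np.uint8)
demo_mask[:240, :320] = 255
print(space_utilization([demo_mask], (480, 640), gamma=1.0))  # prints 25.0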
Example two
As shown in fig. 5, a space utilization measuring and calculating system based on machine learning is used for implementing the space utilization measuring and calculating method according to the first embodiment; the space utilization measuring and calculating system comprises a shooting module, a first preprocessing module, a second preprocessing module, a machine learning module and a measuring and calculating module, wherein
The shooting module is used for shooting and acquiring a plurality of empty images and at least one cargo image of the space to be detected when the cargo is not stored and the cargo is stored respectively;
The first preprocessing module is used for preprocessing all acquired idle images to form a training data set;
The second preprocessing module is used for preprocessing all acquired cargo images to form a test data set;
The machine learning module is used for inputting the formed training data set into a machine learning algorithm to perform recognition training of the image background, obtaining a background recognition model, inputting the formed test data set into the background recognition model to recognize the image background, and separating cargo areas in all cargo images;
the measuring and calculating module is used for measuring and calculating the space utilization rate of the space to be measured according to the separated cargo area and the empty image.
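For embodiment two, the module wiring could look like the following thin skeleton; the class and method names are illustrative assumptions, since the patent only specifies the modules' responsibilities.

class SpaceUtilizationSystem:
    def __init__(self, capture, pre_empty, pre_cargo, learner, estimator):
        self.capture = capture        # shooting module
        self.pre_empty = pre_empty    # first preprocessing module
        self.pre_cargo = pre_cargo    # second preprocessing module
        self.learner = learner        # machine learning module
        self.estimator = estimator    # measuring and calculating module

    def run(self):
        empty_imgs, cargo_imgs = self.capture()
        model = self.learner.train(self.pre_empty(empty_imgs))
        cargo_areas = self.learner.separate(model, self.pre_cargo(cargo_imgs))
        return self.estimator(cargo_areas, empty_imgs)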
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the embodiments of the present invention and not to limit them; although the embodiments of the present invention have been described in detail with reference to the preferred embodiments, those skilled in the art will understand that the technical solutions may be modified or equivalently replaced without departing from the scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. The space utilization rate measuring and calculating method based on machine learning is characterized by comprising the following steps of:
step 100: shooting to obtain a plurality of empty images of the space to be tested when the goods are not stored;
Step 200: preprocessing all acquired idle images to form a training data set;
Step 300: inputting the formed training data set into a machine learning algorithm to perform recognition training of image background, and obtaining a background recognition model;
step 400: shooting and obtaining at least one cargo image of the space to be tested when goods are stored;
Step 500: preprocessing all acquired cargo images to form a test data set;
step 600: inputting the formed test data set into the background recognition model to recognize image background, and separating cargo areas in all cargo images;
Step 700: according to the separated cargo area and the empty load image, measuring and calculating the space utilization rate of the space to be measured;
The training data set comprises a plurality of linked lists, one linked list corresponding to one empty image; each node in a linked list corresponds to one pixel point in the empty image and comprises the parameter values and a label of the corresponding pixel point, the parameter values comprising a position parameter and a gray scale parameter, and the label being a background; in step 200, the steps of preprocessing all acquired empty images are as follows:
Step 210: traversing all pixel points in a single empty image, and extracting parameter values of all pixel points in the single empty image;
Step 220: establishing a background array, and adding the extracted parameter values of all pixel points in the single empty image into the background array;
step 230: establishing a linked list, adding the parameter values from the background array into the linked list as nodes, and adding a label to each node, the label being a background;
step 240: repeating steps 210 to 230 until a corresponding linked list has been established for every empty image;
The machine learning algorithm comprises a KNN algorithm; in step 600, the step of inputting the formed test dataset into the background recognition model to recognize the image background and separating out cargo areas in all cargo images is as follows:
Step 610: setting a D value and a K value of the KNN algorithm;
step 620: creating a full 255 mask map for a single cargo image;
step 630: extracting, from all the empty images, all adjacent pixel points whose Euclidean distance to a given pixel point of the single cargo image is within the D value;
step 640: calculating the gray difference value between the pixel point and the adjacent pixel points in all empty images;
step 650: extracting K gray level difference values with minimum difference values;
step 660: judging whether the K gray difference values are lower than a preset gray clustering threshold value or not, and counting the times of the K gray difference values lower than the gray clustering threshold value;
Step 670: judging whether the number of times lower than the gray clustering threshold value in the K gray difference values exceeds a preset number of times threshold value, if yes, judging the pixel point as a background, and if not, judging the pixel point as goods;
Step 680: repeating steps 630 to 670 until all the pixels which are judged to be cargoes in the single Zhang Zaihuo image are obtained, and setting the gray scale parameters of all the pixels which have the same position parameters as the pixels which are judged to be the background in the 255 mask map to 0 from 255 to obtain the cargo area in the single cargo image;
Step 690: steps 620 through 680 are repeated until cargo areas in all cargo images are obtained.
2. The machine learning based space utilization measurement method of claim 1, wherein the test dataset includes a plurality of cargo arrays, one cargo array corresponding to each cargo image, the cargo arrays including parameter values corresponding to all pixels in the cargo image, the parameter values including a position parameter and a gray scale parameter.
3. The machine learning based space utilization measurement and calculation method of claim 2 wherein in step 500, the step of preprocessing all acquired cargo images is as follows:
step 510: traversing all pixel points in the single cargo image, and extracting parameter values of all pixel points in the single cargo image;
step 520: establishing a cargo array, and adding the extracted parameter values of all pixel points in the single cargo image into the cargo array;
step 530: steps 510-520 are repeated until all cargo images have a corresponding cargo array established.
4. The machine learning based space utilization measurement and calculation method of claim 1, wherein in step 300, the machine learning algorithm is trained for image background recognition in a multi-threaded parallel manner.
5. The machine learning based space utilization measurement method according to claim 1, wherein in step 700, the space utilization rate of the space to be measured is η = (m/a) × γ² × 100%, where m is the number of pixel points in the cargo area, a is the number of pixel points in the empty image, and γ is an imaging correction parameter.
6. A space utilization measuring and calculating system based on machine learning, which is characterized by being used for realizing the space utilization measuring and calculating method according to claim 1; the space utilization measuring and calculating system comprises a shooting module, a first preprocessing module, a second preprocessing module, a machine learning module and a measuring and calculating module, wherein
The shooting module is used for shooting and acquiring a plurality of empty images and at least one cargo image of the space to be detected when the cargo is not stored and the cargo is stored respectively;
The first preprocessing module is used for preprocessing all acquired idle images to form a training data set;
The second preprocessing module is used for preprocessing all acquired cargo images to form a test data set;
The machine learning module is used for inputting the formed training data set into a machine learning algorithm to perform recognition training of the image background, obtaining a background recognition model, inputting the formed test data set into the background recognition model to recognize the image background, and separating cargo areas in all cargo images;
the measuring and calculating module is used for measuring and calculating the space utilization rate of the space to be measured according to the separated cargo area and the empty image.
CN202410269739.6A 2024-03-11 2024-03-11 Space utilization rate measuring and calculating method and system based on machine learning Active CN117893891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410269739.6A CN117893891B (en) 2024-03-11 2024-03-11 Space utilization rate measuring and calculating method and system based on machine learning


Publications (2)

Publication Number Publication Date
CN117893891A CN117893891A (en) 2024-04-16
CN117893891B true CN117893891B (en) 2024-05-17

Family

ID=90649543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410269739.6A Active CN117893891B (en) 2024-03-11 2024-03-11 Space utilization rate measuring and calculating method and system based on machine learning

Country Status (1)

Country Link
CN (1) CN117893891B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111415106A (en) * 2020-04-29 2020-07-14 上海东普信息科技有限公司 Truck loading rate identification method, device, equipment and storage medium
CN112037177A (en) * 2020-08-07 2020-12-04 浙江大华技术股份有限公司 Method and device for evaluating carriage loading rate and storage medium
CN112446402A (en) * 2019-09-03 2021-03-05 顺丰科技有限公司 Loading rate identification method and device, computer equipment and storage medium
CN114022537A (en) * 2021-10-29 2022-02-08 浙江东鼎电子股份有限公司 Vehicle loading rate and unbalance loading rate analysis method for dynamic weighing area
CN114463697A (en) * 2022-01-25 2022-05-10 润建股份有限公司 Loading rate calculation method based on image recognition

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL224896A (en) * 2013-02-25 2017-09-28 Agent Video Intelligence Ltd Foreground extraction technique


Also Published As

Publication number Publication date
CN117893891A (en) 2024-04-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant