CN113392673A - Image correction method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113392673A
CN113392673A (application number CN202010168852.7A)
Authority
CN
China
Prior art keywords
endpoint, image, end point, corrected, sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010168852.7A
Other languages
Chinese (zh)
Inventor
连自锋
熊君君
张伟华
罗中华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SF Technology Co Ltd
Original Assignee
SF Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SF Technology Co Ltd filed Critical SF Technology Co Ltd
Priority to CN202010168852.7A
Publication of CN113392673A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/60 - Rotation of a whole image or part thereof

Abstract

The embodiments of the present application provide an image correction method, apparatus, device and computer-readable storage medium, in which an image is corrected by rotation, so that the accuracy of commodity recognition performed on the image can be improved to a certain extent. The image correction method provided by the embodiments of the present application comprises the following steps: acquiring an image to be corrected, wherein the image content of the image to be corrected comprises commodities and a shelf layer bearing the commodities; identifying a left end point and a right end point at the two ends of the shelf layer from the image to be corrected; detecting an included angle between a horizontal line and the line segment formed by the left end point and the right end point; determining the rotation angle of the image to be corrected according to the included angle; and rotating the image to be corrected according to the rotation angle to obtain the corrected image.

Description

Image correction method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of image recognition, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for correcting an image.
Background
In recent years, the construction of the Internet of Things has been booming, and the Internet of Things is rapidly permeating various fields. Meanwhile, with the continuous rise of labor costs, the unmanned supermarket has become a development trend in the construction of the Internet of Things: traditional manual management is replaced by intelligent technology, and the unmanned supermarket is expected to become a new breakthrough in the retail industry.
The operation of an unmanned supermarket cannot do without the monitoring of commodities. On the site of an unmanned supermarket, the many commodities in the store can be monitored through deployed cameras and the captured images uploaded to a background server, providing rich, real-time data support for the Stock Keeping Unit (SKU) information of the commodities. In this process, the server needs to perform commodity recognition on the images acquired by the cameras and identify the commodities contained in each image.
The server may use a commodity recognition model to recognize the commodities, so the recognition accuracy of the commodity recognition model plays an important role in the operation of the unmanned supermarket. However, it has been found in practical application that the recognition results of the commodity recognition models in the existing related art are of poor accuracy and still need to be improved.
Disclosure of Invention
The embodiments of the present application provide an image correction method, apparatus, device and computer-readable storage medium, in which an image is corrected by rotation, so that the recognition accuracy of a subsequent commodity recognition model on the image can be improved to a certain extent.
In a first aspect, an embodiment of the present application provides a method for correcting an image, where the method includes:
acquiring an image to be corrected, wherein the image content of the image to be corrected comprises commodities and a goods shelf layer for bearing the commodities;
identifying a left end point and a right end point of two ends of a shelf layer from an image to be corrected;
detecting an included angle between a line segment and a horizontal line, wherein the line segment is composed of a left end point and a right end point;
determining the rotation angle of the image to be corrected according to the included angle;
and rotating the image to be corrected according to the rotation angle to obtain the corrected image.
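At their core, the five steps above reduce to measuring the tilt of one shelf line and undoing it. A minimal Python sketch of the angle-detection step (the function name and the image-coordinate convention are our assumptions, not the patent's):

```python
import math

def shelf_tilt_degrees(left, right):
    """Included angle (in degrees) between the horizontal line and the
    segment formed by the left and right shelf-layer end points.
    Pixel coordinates are assumed, with y growing downwards."""
    dx = right[0] - left[0]
    dy = right[1] - left[1]
    return math.degrees(math.atan2(dy, dx))
```

Rotating the image to be corrected by the negative of this angle brings the shelf layer back to the horizontal.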
With reference to the first aspect of the embodiment of the present application, in a first possible implementation manner of the first aspect of the embodiment of the present application, when the shelf layer comprises a plurality of sub-shelf layers, the left end point comprises a plurality of sub-left end points, the right end point comprises a plurality of sub-right end points, and detecting the included angle between the line segment and the horizontal line comprises:
identifying respective corresponding endpoint groups of the sub-shelf layers from the point sets of the left endpoint and the right endpoint, wherein each endpoint group comprises a sub-left endpoint and a sub-right endpoint which belong to the same sub-shelf layer;
detecting a plurality of included angles between a line segment formed by the end point group and a horizontal line;
according to the included angle, determining the rotation angle of the image to be corrected comprises the following steps:
and taking the average value of the plurality of included angles as the rotation angle of the corrected image.
With reference to the first possible implementation manner of the first aspect of the embodiment of the present application, in a second possible implementation manner of the first aspect of the embodiment of the present application, identifying, from a point set of a left endpoint and a right endpoint, an endpoint group corresponding to each of the sub-shelf layers includes:
sequentially detecting the distance between any endpoint in the plurality of sub-left endpoints and any endpoint in the plurality of sub-right endpoints;
sequentially determining an endpoint which has the shortest distance to each endpoint in the plurality of sub-left endpoints from the plurality of sub-right endpoints, and sequentially determining an endpoint which has the shortest distance to each endpoint in the plurality of sub-right endpoints from the plurality of sub-left endpoints;
and summarizing the end point group with the shortest distance to be used as the corresponding end point group of each sub-shelf layer.
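The mutual shortest-distance pairing described in this implementation can be sketched as follows (a hypothetical helper; the patent does not prescribe code):

```python
import math

def mutual_nearest_pairs(lefts, rights):
    """Pair each sub-left end point with its shortest-distance sub-right
    end point, keeping a pair only when the choice is mutual."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    pairs = []
    for i, left in enumerate(lefts):
        j = min(range(len(rights)), key=lambda j: dist(left, rights[j]))
        # keep only if `left` is in turn the nearest left end point of rights[j]
        if min(range(len(lefts)), key=lambda k: dist(lefts[k], rights[j])) == i:
            pairs.append((left, rights[j]))
    return pairs
```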
With reference to the second possible implementation manner of the first aspect of the embodiment of the present application, in a third possible implementation manner of the first aspect of the embodiment of the present application, summarizing the end point groups with the shortest distance as the end point groups corresponding to the respective sub-shelf layers comprises:
summarizing to obtain a first endpoint group with the shortest distance;
when repeated target endpoints exist in the first endpoint group, optimizing the first endpoint group, wherein the optimizing is used for eliminating endpoint groups except the shortest distance in a plurality of endpoint groups corresponding to the target endpoints;
and taking the first endpoint groups subjected to optimization processing as the endpoint groups corresponding to the sub-shelf layers respectively.
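One way to realise the optimization that eliminates all but the shortest-distance group when a target end point appears in several groups (the greedy strategy and names are our choice, not the patent's):

```python
import math

def prune_repeated_endpoints(pairs):
    """When the same end point occurs in several candidate end point groups,
    keep only the group with the shortest left-right distance."""
    def length(pair):
        (x1, y1), (x2, y2) = pair
        return math.hypot(x2 - x1, y2 - y1)
    kept = []
    for pair in sorted(pairs, key=length):  # shortest groups claim end points first
        if not any(pair[0] == q[0] or pair[1] == q[1] for q in kept):
            kept.append(pair)
    return kept
```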
With reference to the second possible implementation manner of the first aspect of the embodiment of the present application, in a fourth possible implementation manner of the first aspect of the embodiment of the present application, summarizing the end point groups with the shortest distance as the end point groups corresponding to the respective sub-shelf layers comprises:
summarizing to obtain a second endpoint group with the shortest distance;
when the second endpoint group has crossed target endpoint groups formed by the formed line segments, optimizing the second endpoint group, wherein the optimizing is used for eliminating the endpoint groups except the shortest distance in the target endpoint groups;
and taking the optimized second endpoint groups as the endpoint groups corresponding to the sub-shelf layers respectively.
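The test for crossed line segments formed by two end point groups can be implemented with a standard orientation predicate (a sketch, not the patent's prescribed method):

```python
def segments_cross(a, b):
    """True if segments a=(p1,p2) and b=(p3,p4) properly intersect,
    using the sign of the cross product (orientation test)."""
    def orient(p, q, r):
        v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
        return (v > 0) - (v < 0)
    (p1, p2), (p3, p4) = a, b
    return (orient(p1, p2, p3) != orient(p1, p2, p4)
            and orient(p3, p4, p1) != orient(p3, p4, p2))
```

Among a crossed pair of end point groups, only the group with the shorter left-right distance would be retained.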
With reference to the second possible implementation manner of the first aspect of the embodiment of the present application, in a fifth possible implementation manner of the first aspect of the embodiment of the present application, the method further includes:
acquiring an image marked with a plurality of end point groups corresponding to the shelf layers respectively;
taking the image marked with the end point groups corresponding to the shelf layers as a training set, and training a neural network model by combining loss functions, wherein the loss functions comprise an end point positioning loss function, an end point classification loss function and an end point grouping loss function;
identifying, from the point sets of the left end point and the right end point, the end point groups corresponding to the respective sub-shelf layers comprises:
and identifying the corresponding endpoint groups of the sub-shelf layers from the point sets of the left endpoint and the right endpoint through the trained neural network model.
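The patent names three loss terms (end point positioning, end point classification, and end point grouping) without giving their form. Grouping losses of this kind are often realised in associative-embedding style; the following pull/push sketch is our illustrative guess, not the patent's definition:

```python
def grouping_loss(embeddings, groups, margin=1.0):
    """Pull/push grouping loss: embeddings of end points in the same group
    are pulled toward their group mean, and the means of different groups
    are pushed at least `margin` apart. Scalar embeddings for simplicity."""
    means = [sum(embeddings[i] for i in g) / len(g) for g in groups]
    pull = sum((embeddings[i] - m) ** 2 for g, m in zip(groups, means) for i in g)
    push = sum(max(0.0, margin - abs(m1 - m2)) ** 2
               for a, m1 in enumerate(means) for m2 in means[a + 1:])
    return pull + push
```

In training, this term would be summed with the positioning and classification losses before backpropagation.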
With reference to the first aspect of the embodiment of the present application, in a sixth possible implementation manner of the first aspect of the embodiment of the present application, after the image to be corrected is rotated according to the rotation angle to obtain the corrected image, the method further comprises:
the corrected image is subjected to a product recognition process by the product recognition model, and a product included in the corrected image is recognized.
In a second aspect, an embodiment of the present application provides an apparatus for correcting an image, the apparatus including:
the device comprises an acquisition unit, a correction unit and a correction unit, wherein the acquisition unit is used for acquiring an image to be corrected, and the image content of the image to be corrected comprises commodities and a goods shelf layer for bearing the commodities;
the identification unit is used for identifying the left end point and the right end point of the two ends of the shelf layer from the image to be corrected;
the detection unit is used for detecting an included angle between a line segment and a horizontal line, wherein the line segment is composed of a left end point and a right end point;
the determining unit is used for determining the rotation angle of the image to be corrected according to the included angle;
and the rotating unit is used for rotating the image to be corrected according to the rotating angle to obtain the corrected image.
With reference to the second aspect of the embodiment of the present application, in a first possible implementation manner of the second aspect of the embodiment of the present application, when the shelf layer comprises a plurality of sub-shelf layers, the left end point comprises a plurality of sub-left end points, the right end point comprises a plurality of sub-right end points, and the detecting unit is specifically configured to:
identifying respective corresponding endpoint groups of the sub-shelf layers from the point sets of the left endpoint and the right endpoint, wherein each endpoint group comprises a sub-left endpoint and a sub-right endpoint which belong to the same sub-shelf layer;
detecting a plurality of included angles between a line segment formed by the end point group and a horizontal line;
a determination unit, specifically configured to:
and taking the average value of the plurality of included angles as the rotation angle of the corrected image.
With reference to the first possible implementation manner of the second aspect of the embodiment of the present application, in the second possible implementation manner of the second aspect of the embodiment of the present application, the detecting unit is specifically configured to:
sequentially detecting the distance between any endpoint in the plurality of sub-left endpoints and any endpoint in the plurality of sub-right endpoints;
sequentially determining an endpoint which has the shortest distance to each endpoint in the plurality of sub-left endpoints from the plurality of sub-right endpoints, and sequentially determining an endpoint which has the shortest distance to each endpoint in the plurality of sub-right endpoints from the plurality of sub-left endpoints;
and summarizing the end point group with the shortest distance to be used as the corresponding end point group of each sub-shelf layer.
With reference to the second possible implementation manner of the second aspect of the embodiment of the present application, in a third possible implementation manner of the second aspect of the embodiment of the present application, the detecting unit is specifically configured to:
summarizing to obtain a first endpoint group with the shortest distance;
when repeated target endpoints exist in the first endpoint group, optimizing the first endpoint group, wherein the optimizing is used for eliminating endpoint groups except the shortest distance in a plurality of endpoint groups corresponding to the target endpoints;
and taking the first endpoint groups subjected to optimization processing as the endpoint groups corresponding to the sub-shelf layers respectively.
With reference to the second possible implementation manner of the second aspect of the embodiment of the present application, in a fourth possible implementation manner of the second aspect of the embodiment of the present application, the detecting unit is specifically configured to:
summarizing to obtain a second endpoint group with the shortest distance;
when the second endpoint group has crossed target endpoint groups formed by the formed line segments, optimizing the second endpoint group, wherein the optimizing is used for eliminating the endpoint groups except the shortest distance in the target endpoint groups;
and taking the optimized second endpoint groups as the endpoint groups corresponding to the sub-shelf layers respectively.
With reference to the second possible implementation manner of the second aspect of the embodiment of the present application, in a fifth possible implementation manner of the second aspect of the embodiment of the present application, the apparatus further includes a training unit, configured to:
acquiring an image marked with a plurality of end point groups corresponding to the shelf layers respectively;
taking the image marked with the end point groups corresponding to the shelf layers as a training set, and training a neural network model by combining loss functions, wherein the loss functions comprise an end point positioning loss function, an end point classification loss function and an end point grouping loss function;
an identification unit, specifically configured to:
and identifying the corresponding endpoint groups of the sub-shelf layers from the point sets of the left endpoint and the right endpoint through the trained neural network model.
With reference to the second aspect of the embodiment of the present application, in a sixth possible implementation manner of the second aspect of the embodiment of the present application, the apparatus further includes an application unit, configured to:
the corrected image is subjected to a product recognition process by the product recognition model, and a product included in the corrected image is recognized.
In a third aspect, an embodiment of the present application further provides an image correction apparatus, which includes a processor and a memory, where the memory stores a computer program, and the processor executes the steps in any one of the methods provided in the embodiments of the present application when calling the computer program in the memory.
In a fourth aspect, this application further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor to perform the steps in any one of the methods provided by this application.
As can be seen from the above, the embodiments of the present application have the following beneficial effects:
Before commodity recognition is performed by a commodity recognition model on a commodity image collected on site, preprocessing is carried out: the commodity image is rotated so that the position of the commodities in the image is corrected. This facilitates the commodity recognition performed by the commodity recognition model to a certain extent, thereby improving the recognition accuracy of the commodity recognition model and yielding a commodity recognition result of higher accuracy.
In addition, the shelf layer and its left and right end points are identified jointly, and the inclination and the rotation angle are determined from the left and right end points. By tying the processing of the image rotation angle to the application scenario of commodity recognition, the processing becomes more convenient and effective, so the image correction method provided by the embodiment of the present application is well suited to devices such as a PDA (personal digital assistant) that collect images on site in a supermarket, and the corrected images can be uploaded to a cloud server for the commodity recognition model to perform commodity recognition.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an image correction method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of step S103 in the embodiment corresponding to FIG. 1 of the present application;
FIG. 3 is a schematic flowchart of step S201 in the embodiment corresponding to FIG. 2 of the present application;
FIG. 4 is a schematic flowchart of step S303 in the embodiment corresponding to FIG. 3 of the present application;
FIG. 5 is another schematic flowchart of step S303 in the embodiment corresponding to FIG. 3 of the present application;
FIG. 6 is a schematic structural diagram of an image correction apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an image correction device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description that follows, specific embodiments of the present application will be described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. Accordingly, these steps and operations will at times be referred to as being performed by a computer: the computer performs operations involving its processing unit on electronic signals representing data in a structured form. These operations transform the data or maintain it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data maintains a data structure, that is, a physical location in memory with particular characteristics defined by the data format. However, while the principles of the application are described in the language above, this is not intended to be limiting, and those of ordinary skill in the art will recognize that various of the steps and operations described below may also be implemented in hardware.
The principles of the present application may be employed in numerous other general-purpose or special-purpose computing, communication environments or configurations. Examples of well known computing systems, environments, and configurations that may be suitable for use with the application include, but are not limited to, hand-held telephones, personal computers, servers, multiprocessor systems, microcomputer-based systems, mainframe-based computers, and distributed computing environments that include any of the above systems or devices.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions.
First, before describing the embodiments of the present application, the application context relevant to the embodiments will be described.
In the related art, because an image collected on the supermarket site is affected by practical conditions such as limited shooting space, limited shooting positions, or the shooting technique of the staff, the commodities in the image often exhibit a certain degree of distortion and inclination. In this case, when the image is input into a commodity recognition model for commodity recognition based on Artificial Intelligence (AI) technology, the difficulty of recognition is easily increased and the accuracy of recognition reduced, which disturbs the application of the commodity recognition model.
Based on the above defects of the related art, the embodiments of the present application provide an image correction method that overcomes these defects to at least some extent.
In the image correction method provided in the embodiment of the present application, the execution subject may be an image correction apparatus, or a different type of device integrating the image correction apparatus, such as a server device, a physical host, or User Equipment (UE). The image correction apparatus may be implemented in hardware or software, and the UE may specifically be a terminal device such as a smartphone, tablet computer, notebook computer, palmtop computer, desktop computer, or Personal Digital Assistant (PDA). The image correction apparatus may also be divided among a plurality of devices that jointly execute the image correction method provided by the embodiment of the present application.
Next, a method for correcting an image provided in an embodiment of the present application will be described.
Referring to fig. 1, fig. 1 shows a schematic flow chart of a method for correcting an image in an embodiment of the present application, and as shown in fig. 1, the method for correcting an image in an embodiment of the present application may specifically include the following steps:
step S101, obtaining an image to be corrected, wherein the image content of the image to be corrected comprises commodities and a goods shelf layer bearing the commodities;
step S102, identifying a left end point and a right end point at two ends of a shelf layer from an image to be corrected;
step S103, detecting an included angle between a line segment and a horizontal line, wherein the line segment is composed of a left end point and a right end point;
step S104, determining the rotation angle of the image to be corrected according to the included angle;
and step S105, rotating the image to be corrected according to the rotation angle to obtain the corrected image.
In the technical solution provided in the embodiment shown in fig. 1, before the commodity recognition model performs commodity recognition on the commodity image collected on site, preprocessing is performed: the commodity image is rotated to correct the position of the commodities in the image. This facilitates commodity recognition by the commodity recognition model to a certain extent, thereby improving the recognition accuracy of the commodity recognition model and yielding a commodity recognition result of higher accuracy.
In addition, the shelf layer and its left and right end points are identified jointly, and the inclination and the rotation angle are determined from the left and right end points. By tying the processing of the image rotation angle to the application scenario of commodity recognition, the processing becomes more convenient and effective, so the image correction method provided by the embodiment of the present application is well suited to devices such as a PDA that collect images on site in a supermarket, and the corrected images can be uploaded to a cloud server for the commodity recognition model to perform commodity recognition.
The following proceeds to a detailed description of the various steps of the embodiment shown in fig. 1:
in the embodiment of the application, the image to be corrected refers to an image obtained by shooting commodities on a supermarket, and the image can contain a shelf layer and commodities carried by the shelf layer. The acquisition of the image to be corrected can be immediately a shooting behavior of shooting commodities on the spot of a supermarket; or the behavior of the cloud server receiving the image to be corrected uploaded by the field device can also be understood as the behavior of the cloud server receiving the image to be corrected uploaded by the field device; or may be understood as a behavior of extracting an image to be corrected from a device or a memory storing the image to be corrected.
Identifying the left and right end points at the two ends of a shelf layer can be based on image recognition of the shelf layer itself. It is easy to understand that a shelf mainly consists of the shelf layers that directly bear the goods plus other supporting structures; for example, a common shelf has four uprights, with multiple shelf layers arranged between them to form a multi-layer structure. A shelf layer usually appears as a regular shape in the image, generally a rectangle or a flat, elongated strip, so compared with the commodities it is easier to recognize, and the shelf layer together with the partial images at its two ends can be identified quickly. In the identified image, the end points at the two ends of the upper edge of the shelf layer may be taken as its left and right end points; alternatively, the two ends of the lower edge may be taken as the left and right end points; or a center line dividing the shelf layer into upper and lower halves may be extracted and its two ends taken as the left and right end points. The left and right end points of each shelf layer may thus lie on the upper edge, the lower edge, or the middle of the layer; preferably the same position is used at both ends, so that after the shelf layers are identified, the left and right end points can be identified simply according to the preset end point position. The identified left and right end points may exist in the form of pixels or vectors.
Taking the case where a neural network model identifies the shelf layers and their left and right end points as an example, images annotated with the end point groups corresponding to a plurality of shelf layers can be obtained in advance; these annotated images are then used as a training set to train the neural network model in combination with a loss function, and the trained model identifies the shelf layers and their left and right end points in the image to be corrected.
In practical applications, the shelf layer contained in the image to be corrected may be a single layer or multiple layers. In the case of multiple shelf layers, an end point set containing a plurality of end points is identified: the left end point comprises a plurality of sub-left end points, and the right end point comprises a plurality of sub-right end points.
Meanwhile, in an exemplary embodiment, as a specific implementation of step S103, as shown in fig. 2, which is a flowchart of step S103 in fig. 1 of the present application, the step may include:
step S201, identifying the corresponding endpoint groups of the sub-shelf layers from the point set of the left endpoint and the right endpoint, wherein each endpoint group comprises a sub-left endpoint and a sub-right endpoint which belong to the same sub-shelf layer;
it should be understood that when a plurality of end points are preliminarily identified, the obtained point set is unprocessed and only contains basic information of the end points, such as corresponding pixels or vectors, the left and right end points can be simply divided according to the positions of the end points in the image to be corrected, and for a plurality of shelf layers, the corresponding end point group is further identified corresponding to each shelf layer.
It can be understood that the end point group of each shelf layer can be determined by combining the positions of the left and right end points in the image to be corrected with the image range of the shelf layer in the image to be corrected. Specifically, if a left end point and a right end point both lie within the image range of one shelf layer, the two can be identified as one end point group, and that group is considered to correspond to that shelf layer.
Alternatively, starting directly from the end points themselves: the distance between a left end point and a right end point belonging to the same shelf layer is relatively short, so the end point groups belonging to the same shelf layer can be identified from the connection pattern combined with the point-to-point distances, avoiding the higher-complexity image recognition involved in the first approach.
Step S202, detecting a plurality of included angles between the line segments formed by the end point groups and the horizontal line;
when a plurality of end point groups are identified, a plurality of line segments can be constructed, and the included angle between each line segment and the horizontal line detected.
Correspondingly, the process for determining the rotation angle in step S104 in the corresponding embodiment of fig. 1 may include:
and taking the average value of the plurality of included angles as the rotation angle used to correct the image.
It can be understood that, in the present application, the mean of the plurality of included angles obtained above can be used directly as the rotation angle, which keeps the data processing simple; the rotation direction of the rotation angle is the direction that rotates the line segments toward the horizontal.
It should be understood that in the embodiment of the present application, the rotation is performed in the plane of the image to be corrected. While keeping the data processing load low, this avoids the deformation of commodities in the image that a complex image transformation would introduce, so the image to be corrected is effectively corrected and the recognition accuracy of the subsequent commodity recognition model is improved.
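For illustration only, the angle averaging described above can be sketched in a few lines of Python. This is a minimal sketch, not the patented implementation: the `rotation_angle` helper and the coordinates are assumptions, and the signed angle is measured with `atan2` in image coordinates (y pointing down).

```python
import math

def rotation_angle(endpoint_groups):
    """Mean signed angle (in degrees) between each shelf-layer segment
    and the horizontal; the mean is used directly as the correction
    angle, as described above."""
    angles = [
        math.degrees(math.atan2(ry - ly, rx - lx))
        for (lx, ly), (rx, ry) in endpoint_groups
    ]
    return sum(angles) / len(angles)

# Three shelf layers, all tilted by roughly the same amount
# (illustrative coordinates).
groups = [((0, 10), (100, 20)), ((0, 60), (100, 71)), ((0, 110), (100, 119))]
print(round(rotation_angle(groups), 2))  # → 5.71
# The image itself could then be rotated by this angle toward the
# horizontal about its center, e.g. with an affine rotation such as
# OpenCV's getRotationMatrix2D/warpAffine.
```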
Further, as a specific implementation of step S201 in the embodiment corresponding to fig. 2, referring to fig. 3, which shows a flowchart of step S201 in the embodiment corresponding to fig. 2 of the present application, the step may include:
step S301, sequentially detecting the distance between any endpoint in the plurality of sub-left endpoints and any endpoint in the plurality of sub-right endpoints;
when the end point groups are divided directly according to the end points, the point distances between the different left and right end points are first calculated for the subsequent data processing.
Step S302, sequentially determining an endpoint which has the shortest distance to each endpoint in the plurality of sub-left endpoints from the plurality of sub-right endpoints, and sequentially determining an endpoint which has the shortest distance to each endpoint in the plurality of sub-right endpoints from the plurality of sub-left endpoints;
it will be appreciated that after the point distances between all left and right end points are found, each end point and the end point on the other side achieving the shortest point distance to it can be determined in turn.
Step S303, aggregating the end point pairs with the shortest distances to obtain the end point groups corresponding to the respective sub-shelf layers.
By combining, from both sides, each end point with the opposite end point at the shortest distance, the end point groups belonging to the same shelf layer can be matched.
Taking a set of data as an example: for the left side, pairing each sub-left end point with the opposite end point at the shortest point distance gives {(L1, R2), (L2, R3), (L3, R4), (L4, R5)}; for the right side, pairing each sub-right end point with the opposite end point at the shortest point distance gives {(R2, L1), (R3, L2), (R4, L3), (R5, L4)}. The data on both sides are consistent, so {(L1, R2), (L2, R3), (L3, R4), (L4, R5)} can be taken as the identified end point groups.
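The "consistent on both sides" condition in this example can be sketched as code. The function below is an illustrative assumption, not lifted from the patent: it pairs a left end point with a right end point only when each is the other's nearest neighbour across the two sides.

```python
import math

def mutual_nearest_pairs(left_pts, right_pts):
    """Return index pairs (li, ri) where left_pts[li] and right_pts[ri]
    are each other's nearest neighbour across the two sides."""
    def nearest(p, candidates):
        return min(range(len(candidates)),
                   key=lambda i: math.dist(p, candidates[i]))

    pairs = []
    for li, lp in enumerate(left_pts):
        ri = nearest(lp, right_pts)                  # left -> right
        if nearest(right_pts[ri], left_pts) == li:   # right -> left agrees
            pairs.append((li, ri))
    return pairs

# Three shelf layers; each left point's nearest right point agrees
# with the reverse lookup, so all three groups are kept.
left = [(0, 0), (0, 50), (0, 100)]
right = [(100, 5), (100, 55), (100, 105)]
print(mutual_nearest_pairs(left, right))  # → [(0, 0), (1, 1), (2, 2)]
```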
Of course, in practical applications the identified end point groups may contain errors. Based on the fact that each shelf layer corresponds to exactly one pair of left and right end points, the end point groups can be optimized and the erroneously identified groups removed, ensuring the validity of the end point groups, for example in the two cases listed below:
first case
Fig. 4 is a schematic flowchart of step S303 in fig. 3 according to an embodiment of the present application. Optimization of the end point groups may be introduced into step S303 of the embodiment in fig. 3, and as shown in fig. 4, this may include:
step S401, aggregating to obtain the first end point groups with the shortest distances;
similarly to the above, once each end point on both sides and its shortest-distance counterpart are obtained preliminarily, they can be aggregated and the mutually matching pairs taken as the valid end point groups to be identified; when a repeated end point occurs, among the two or more end point groups containing that end point, some must have been identified in error.
It will be readily appreciated that each shelf layer necessarily corresponds to one pair of left and right end points; even when the shelf is inclined, a shelf layer cannot have two left end points or two right end points (at most, in a degenerate case, the left and right end points coincide into one). Similarly, each end point corresponds to one shelf layer, and the same end point cannot belong to two end point groups and thus correspond to two shelf layers.
Step S402, when a repeated target end point exists in the first end point groups, optimizing the first end point groups, wherein the optimization eliminates, among the plurality of end point groups corresponding to the target end point, every group other than the one with the shortest distance;
for a repeated end point, the point distance between the two end points of each group can be extracted for each of the at least two end point groups containing that end point, and only the group with the shortest distance retained.
For example, if one end point A is connected to two end points B and C, the longer connection is deleted according to the vector distances d(VA, VB) and d(VA, VC). That is, if d(VA, VB) > d(VA, VC), the connection between end points A and B is deleted; otherwise, the connection between end points A and C is deleted.
And step S403, taking the first endpoint group subjected to optimization processing as the endpoint group corresponding to each sub-shelf layer.
After the optimization processing, the end point groups corresponding to the respective sub-shelf layers can be output.
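Under the assumption that the candidate groups are given as index pairs, the first optimization (discarding, for a repeated end point, every group except the shortest) can be sketched as a greedy pass over the pairs sorted by length. The helper names are illustrative, not from the patent, and the greedy tie-breaking is an assumption.

```python
import math

def drop_duplicate_endpoints(pairs, left_pts, right_pts):
    """For any end point appearing in more than one candidate pair,
    keep only the pair with the shortest point distance."""
    def length(pair):
        li, ri = pair
        return math.dist(left_pts[li], right_pts[ri])

    # Consider the shortest pairs first, then greedily keep pairs
    # whose end points have not been claimed by a shorter pair.
    kept, used_l, used_r = [], set(), set()
    for li, ri in sorted(pairs, key=length):
        if li not in used_l and ri not in used_r:
            kept.append((li, ri))
            used_l.add(li)
            used_r.add(ri)
    return kept

left = [(0, 0), (0, 50)]
right = [(100, 2), (100, 52)]
# Left point 0 was matched to both right points; only the shorter
# pairing survives, freeing right point 1 for left point 1.
print(drop_duplicate_endpoints([(0, 0), (0, 1), (1, 1)], left, right))
# → [(0, 0), (1, 1)]
```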
Second case
Similar to the first case, fig. 5 shows another schematic flowchart of step S303 in the embodiment corresponding to fig. 3 of the present application; step S303 may also include:
step S501, aggregating to obtain the second end point groups with the shortest distances;
step S501 and the following step S503 correspond to step S401 and step S403 in fig. 4 respectively, and are not described again here.
Step S502, when the second endpoint group has a target endpoint group which is crossed by the formed line segments, optimizing the second endpoint group, wherein the optimizing is used for eliminating the endpoint group which is beyond the shortest distance in the target endpoint group;
it is to be understood that, besides the case where one end point appears in two or more end point groups, an identification abnormality may also manifest as the line segments formed by two identified end point groups intersecting.
In practical applications the shelf layers are generally arranged horizontally, and if only horizontally arranged shelf layers are considered, the line segments of the identified end point groups should not intersect. Therefore, the end point groups whose line segments form intersections can be collected, and only the group with the shortest segment length, i.e. the shortest point distance, retained.
For example, if the connection of end points A and B crosses the connection of end points C and D, then if the distance d(VA, VB) > d(VC, VD), the connection between end points A and B is deleted; otherwise, the connection between end points C and D is deleted.
And step S503, taking the optimized second endpoint group as the endpoint group corresponding to each sub-shelf layer.
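The second optimization can likewise be sketched in code. The intersection test below is a standard orientation (cross-product) check; resolving crossings pairwise by keeping the shorter segment is an assumption about how more than two mutually crossing segments would be handled.

```python
import math

def _ccw(a, b, c):
    # Signed area of triangle (a, b, c): sign gives the turn direction.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """Proper intersection test for two open segments."""
    return (_ccw(p1, p2, q1) * _ccw(p1, p2, q2) < 0 and
            _ccw(q1, q2, p1) * _ccw(q1, q2, p2) < 0)

def drop_crossing_pairs(segments):
    """When two shelf-layer segments cross, discard the longer one.
    `segments` is a list of ((lx, ly), (rx, ry)) pairs."""
    keep = [True] * len(segments)
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if keep[i] and keep[j] and segments_cross(*segments[i], *segments[j]):
                if math.dist(*segments[i]) > math.dist(*segments[j]):
                    keep[i] = False
                else:
                    keep[j] = False
    return [s for k, s in zip(keep, segments) if k]

segs = [((0, 0), (100, 10)),   # correct shelf layer
        ((0, 40), (100, 2)),   # spurious, longer segment crossing it
        ((0, 60), (100, 70))]  # another correct layer, no crossing
print(drop_crossing_pairs(segs))
# → [((0, 0), (100, 10)), ((0, 60), (100, 70))]
```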
It should be noted that, in practical applications, the optimization processing corresponding to the two cases in the above examples may be executed individually or both may be triggered, so as to remove the erroneously identified end point groups and ensure the validity of the end point groups.
Through the above processing, after the image to be corrected has been corrected by rotation, the corrected image can be put to use and the subsequent commodity identification processing continued. Correspondingly, after the step of rotating the image to be corrected to obtain the corrected image in the embodiment corresponding to fig. 1, the image correction method provided in the embodiment of the present application may further include:
the corrected image is subjected to a product recognition process by the product recognition model, and a product included in the corrected image is recognized.
The commodity identification model may be an existing commodity identification model, or one obtained by improving and optimizing an existing model; it performs commodity identification processing on an input image and identifies the commodities contained in the image.
The commodity identification model is obtained by training on a large number of images that contain commodities and carry the corresponding commodity labels, specifically as follows: during training, images containing commodities and carrying commodity labels (the labels may be added manually) are input into the initial neural network model in turn for forward propagation; a loss function is then computed from the commodity recognition results output by the model, and backward propagation is performed to adjust and optimize the model parameters. After repeated propagation and optimization, the trained model is the commodity identification model.
It should be noted that all or part of the image correction method provided in the embodiments of the present application may also be implemented as a neural network model; the model establishment process may refer to the above. For example, step S102 in the embodiment corresponding to fig. 1 may be implemented in combination with an end point recognition model.
Correspondingly, the method for correcting the image provided by the embodiment of the application can further comprise the establishing process of the endpoint recognition model:
acquiring an image marked with a plurality of end point groups corresponding to the shelf layers respectively;
using the images annotated with the end point groups corresponding to the plurality of shelf layers as a training set, and training a neural network model in combination with loss functions, wherein the loss functions comprise an end point localization loss function, an end point classification loss function, and an end point grouping loss function;
it can be understood that the end point localization loss function indicates how accurately the model locates the end points at the two ends of a shelf layer, so that the trained model can position these end points precisely; the end point classification loss function indicates how accurately the model distinguishes left from right end points in the initial end point set, so that the trained model can separate left end points from right end points; and the end point grouping loss function indicates how accurately the model identifies the end point groups, so that the trained model minimizes the distance between end points belonging to the same group while maximizing the distance between end points belonging to different groups.
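The grouping term can be illustrated with a toy, associative-embedding-style loss. This is a hedged sketch, not the patent's actual loss function: it assumes each end point is assigned a scalar embedding, pulls embeddings of the same group toward their mean, and pushes the means of different groups apart.

```python
import math
from statistics import fmean

def grouping_loss(embeddings, group_ids):
    """Toy 1-D grouping loss: intra-group variance (pull term) plus a
    Gaussian penalty on pairs of close group means (push term)."""
    groups = {}
    for e, g in zip(embeddings, group_ids):
        groups.setdefault(g, []).append(e)
    means = {g: fmean(v) for g, v in groups.items()}

    # Pull: end points of the same group should share an embedding.
    pull = sum((e - means[g]) ** 2
               for g, v in groups.items() for e in v) / len(embeddings)

    # Push: means of different groups should lie far apart.
    ids = list(means)
    push = sum(math.exp(-(means[a] - means[b]) ** 2)
               for i, a in enumerate(ids) for b in ids[i + 1:])
    return pull + push

# Tightly clustered, well-separated groups incur far less loss than
# mixed-up ones, which is what the training objective rewards.
tight = grouping_loss([1.0, 1.0, 5.0, 5.0], [0, 0, 1, 1])
mixed = grouping_loss([1.0, 5.0, 1.0, 5.0], [0, 0, 1, 1])
print(tight < mixed)  # → True
```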
Correspondingly, step S102 in the embodiment corresponding to fig. 1 may be:
and identifying the corresponding endpoint groups of the sub-shelf layers from the point sets of the left endpoint and the right endpoint through the trained neural network model.
It should be noted that the model applying all or part of the steps of the image correction method provided in the embodiments of the present application may be independent or may be embedded in an existing commodity identification model, so that the recognition accuracy of the commodity identification model is further improved by the functions of the image correction method provided in the embodiments of the present application.
Similarly, the model establishment described above may be embedded in the existing process of establishing a commodity identification model, so that the resulting commodity identification model also possesses the functions obtainable from the image correction method provided in the embodiments of the present application.
In order to better implement the image correction method provided by the embodiment of the present application, the embodiment of the present application further provides an image correction apparatus.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image correction apparatus according to an embodiment of the present application; the image correction apparatus 600 may specifically include the following structures:
the acquiring unit 601 is configured to acquire an image to be corrected, where image content of the image to be corrected includes a commodity and a shelf layer for bearing the commodity;
an identifying unit 602, configured to identify a left end point and a right end point of two ends of the shelf layer from the image to be corrected;
a detecting unit 603, configured to detect an included angle between a line segment and a horizontal line, where the line segment is formed by a left end point and a right end point;
a determining unit 604, configured to determine a rotation angle of the image to be corrected according to the included angle;
a rotation unit 605, configured to rotate the image to be corrected according to the rotation angle, so as to obtain a corrected image.
In an exemplary embodiment, when the shelf layer includes a plurality of sub-shelf layers, the left end point includes a plurality of sub-left end points, and the right end point includes a plurality of sub-right end points, the detecting unit 603 is specifically configured to:
identifying respective corresponding endpoint groups of the sub-shelf layers from the point sets of the left endpoint and the right endpoint, wherein each endpoint group comprises a sub-left endpoint and a sub-right endpoint which belong to the same sub-shelf layer;
detecting a plurality of included angles between a line segment formed by the end point group and a horizontal line;
the determining unit 604 is specifically configured to:
and taking the average value of the plurality of included angles as the rotation angle of the corrected image.
In another exemplary embodiment, the detecting unit 603 is specifically configured to:
sequentially detecting the distance between any endpoint in the plurality of sub-left endpoints and any endpoint in the plurality of sub-right endpoints;
sequentially determining an endpoint which has the shortest distance to each endpoint in the plurality of sub-left endpoints from the plurality of sub-right endpoints, and sequentially determining an endpoint which has the shortest distance to each endpoint in the plurality of sub-right endpoints from the plurality of sub-left endpoints;
and summarizing the end point group with the shortest distance to be used as the corresponding end point group of each sub-shelf layer.
In another exemplary embodiment, the detecting unit 603 is specifically configured to:
summarizing to obtain a first endpoint group with the shortest distance;
when repeated target endpoints exist in the first endpoint group, optimizing the first endpoint group, wherein the optimizing is used for eliminating endpoint groups except the shortest distance in a plurality of endpoint groups corresponding to the target endpoints;
and taking the first endpoint groups subjected to optimization processing as the endpoint groups corresponding to the sub-shelf layers respectively.
In another exemplary embodiment, the detecting unit 603 is specifically configured to:
summarizing to obtain a second endpoint group with the shortest distance;
when the second endpoint group has crossed target endpoint groups formed by the formed line segments, optimizing the second endpoint group, wherein the optimizing is used for eliminating the endpoint groups except the shortest distance in the target endpoint groups;
and taking the optimized second endpoint groups as the endpoint groups corresponding to the sub-shelf layers respectively.
In yet another exemplary embodiment, the apparatus further comprises a training unit 606 for:
acquiring an image marked with a plurality of end point groups corresponding to the shelf layers respectively;
taking the image marked with the end point groups corresponding to the shelf layers as a training set, and training a neural network model by combining loss functions, wherein the loss functions comprise an end point positioning loss function, an end point classification loss function and an end point grouping loss function;
the detecting unit 603 is specifically configured to:
and identifying the corresponding endpoint groups of the sub-shelf layers from the point sets of the left endpoint and the right endpoint through the trained neural network model.
In a further exemplary embodiment, the apparatus further comprises an application unit 607 for:
the corrected image is subjected to a product recognition process by the product recognition model, and a product included in the corrected image is recognized.
An embodiment of the present application further provides an image correction device. Referring to fig. 7, fig. 7 is a schematic structural diagram of the image correction device according to an embodiment of the present application. Specifically, the device includes a processor 701 configured to implement, when executing a computer program stored in a memory 702, the steps of the image correction method in any embodiment corresponding to figs. 1 to 6; alternatively, the processor 701 is configured to implement, when executing the computer program stored in the memory 702, the functions of the units in the embodiment corresponding to fig. 6.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in the memory 702 and executed by the processor 701 to accomplish the present application. One or more modules/units may be a series of computer program instruction segments capable of performing certain functions, the instruction segments being used to describe the execution of a computer program in a computer device.
The image correction device may include, but is not limited to, the processor 701 and the memory 702. Those skilled in the art will appreciate that the illustration is merely an example of the image correction device and does not constitute a limitation of it; the device may include more or fewer components than those shown, combine some components, or use different components. For example, the image correction device may further include an input-output device, a network access device, and a bus, with the processor 701, the memory 702, the input-output device, and the network access device connected via the bus.
The Processor 701 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor or any conventional processor; the processor is the control center of the image correction device, with various interfaces and lines connecting the parts of the entire device.
The memory 702 may be used to store computer programs and/or modules, and the processor 701 implements the various functions of the computer apparatus by running or executing the computer programs and/or modules stored in the memory 702 and invoking the data stored therein. The memory 702 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function, and the like; the data storage area may store data created from use of the image correction device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described image correction apparatus, device and corresponding units thereof may refer to the descriptions of the methods in any embodiments corresponding to fig. 1 to 6, and are not described herein again in detail.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions can be loaded by a processor to execute steps in the image correction method in any embodiment corresponding to fig. 1 to 6 in the present application, and specific operations may refer to descriptions of the image correction method in any embodiment corresponding to fig. 1 to 6, which are not described herein again.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in the image correction method in any embodiment corresponding to fig. 1 to 6 in the present application, the beneficial effects that can be achieved by the image correction method in any embodiment corresponding to fig. 1 to 6 in the present application can be achieved, for details, see the foregoing description, and are not repeated herein.
The image correction method, apparatus, device, and computer-readable storage medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method for correcting an image, the method comprising:
acquiring an image to be corrected, wherein the image content of the image to be corrected comprises commodities and a shelf layer bearing the commodities;
identifying a left end point and a right end point of two ends of the shelf layer from the image to be corrected;
detecting an included angle between a line segment and a horizontal line, wherein the line segment is composed of the left end point and the right end point;
determining the rotation angle of the image to be corrected according to the included angle;
and rotating the image to be corrected according to the rotation angle to obtain a corrected image.
2. The method of claim 1, wherein when the shelf level comprises a plurality of sub-shelf levels, the left end point comprises a plurality of sub-left end points, the right end point comprises a plurality of sub-right end points, and the detecting the angle between the line segment and the horizontal line comprises:
identifying respective corresponding endpoint groups of the sub-shelf layers from the point sets of the left endpoint and the right endpoint, wherein each endpoint group comprises a sub-left endpoint and a sub-right endpoint which belong to the same sub-shelf layer;
detecting a plurality of included angles between a line segment formed by the endpoint group and the horizontal line;
according to the included angle, determining the rotation angle of the image to be corrected comprises:
and taking the average value of the included angles as the rotation angle of the corrected image.
3. The method of claim 2, wherein the identifying, from the set of points of the left endpoint and the right endpoint, the respective corresponding endpoint groups of the sub-shelf layers comprises:
sequentially detecting the distance between any endpoint in the sub-left endpoints and any endpoint in the sub-right endpoints;
sequentially determining an endpoint which has the shortest distance to each endpoint in the plurality of sub-left endpoints from the plurality of sub-right endpoints, and sequentially determining an endpoint which has the shortest distance to each endpoint in the plurality of sub-right endpoints from the plurality of sub-left endpoints;
and summarizing the end point group with the shortest distance to be used as the end point group corresponding to each sub-shelf layer.
4. The method of claim 3, wherein the aggregating the shortest distance end point group as the end point group corresponding to each of the child shelf levels comprises:
summarizing to obtain a first endpoint group with the shortest distance;
when repeated target endpoints exist in the first endpoint group, optimizing the first endpoint group, wherein the optimizing is used for eliminating the endpoint groups other than the one with the shortest distance among the plurality of endpoint groups corresponding to the target endpoints;
and taking the first endpoint groups subjected to optimization processing as the endpoint groups corresponding to the sub-shelf layers respectively.
5. The method of claim 3, wherein the aggregating the shortest distance end point group as the end point group corresponding to each of the child shelf levels comprises:
summarizing the second endpoint group with the shortest distance;
when the second endpoint group has a crossed target endpoint group formed by the formed line segments, optimizing the second endpoint group, wherein the optimizing is used for eliminating the endpoint groups except the shortest distance in the target endpoint group;
and taking the second endpoint groups subjected to optimization processing as the endpoint groups corresponding to the sub-shelf layers respectively.
6. The method of claim 3, further comprising:
acquiring an image marked with a plurality of end point groups corresponding to the shelf layers respectively;
taking the image marked with the end point groups corresponding to the plurality of shelf layers as a training set, and training a neural network model by combining a loss function, wherein the loss function comprises an end point positioning loss function, an end point classification loss function and an end point grouping loss function;
the identifying, from the set of points of the left endpoint and the right endpoint, the respective corresponding endpoint groups of the child shelf layers comprises:
and identifying the corresponding endpoint groups of the sub-shelf layers from the point sets of the left endpoint and the right endpoint through the trained neural network model.
7. The method according to claim 1, wherein after the image to be corrected is rotated according to the rotation angle to obtain a corrected image, the method further comprises:
and performing commodity identification processing on the corrected image through a commodity identification model, and identifying the commodity contained in the corrected image.
8. An apparatus for correcting an image, the apparatus comprising:
the device comprises an acquisition unit, a correction unit and a correction unit, wherein the acquisition unit is used for acquiring an image to be corrected, and the image content of the image to be corrected comprises commodities and a shelf layer bearing the commodities;
the identification unit is used for identifying a left end point and a right end point of two ends of the shelf layer from the image to be corrected;
the detection unit is used for detecting an included angle between a line segment and a horizontal line, wherein the line segment is composed of the left end point and the right end point;
the determining unit is used for determining the rotation angle of the image to be corrected according to the included angle;
and the rotating unit is used for rotating the image to be corrected according to the rotating angle to obtain a corrected image.
9. An apparatus for correcting an image, comprising a processor and a memory, the memory having a computer program stored therein, the processor executing the method according to any one of claims 1 to 7 when calling the computer program in the memory.
10. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the method of any one of claims 1 to 7.
CN202010168852.7A 2020-03-12 2020-03-12 Image correction method, device, equipment and computer readable storage medium Pending CN113392673A (en)

Publications (1)

Publication Number Publication Date
CN113392673A true CN113392673A (en) 2021-09-14

Family

ID=77615571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010168852.7A Pending CN113392673A (en) 2020-03-12 2020-03-12 Image correction method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113392673A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030147102A1 (en) * 2002-01-25 2003-08-07 Umax Data Systems Inc. Two-directions scanning method
CN102982332A (en) * 2012-09-29 2013-03-20 顾坚敏 Retail terminal goods shelf image intelligent analyzing system based on cloud processing method
CN107464136A (en) * 2017-07-25 2017-12-12 苏宁云商集团股份有限公司 A kind of merchandise display method and system
CN108256520A (en) * 2017-12-27 2018-07-06 中国科学院深圳先进技术研究院 A kind of method, terminal device and computer readable storage medium for identifying the coin time
CN110321769A * 2019-03-25 2019-10-11 浙江工业大学 Multi-size on-shelf commodity detection method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494892A (en) * 2022-04-15 2022-05-13 广州市玄武无线科技股份有限公司 Goods shelf commodity display information identification method, device, equipment and storage medium
CN114494892B (en) * 2022-04-15 2022-07-15 广州市玄武无线科技股份有限公司 Goods shelf commodity display information identification method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US10916021B2 (en) Visual target tracking method and apparatus based on deeply and densely connected neural network
US11120254B2 (en) Methods and apparatuses for determining hand three-dimensional data
US10762373B2 (en) Image recognition method and device
CN110378966B Method, device and equipment for calibrating external parameters of a vehicle-road cooperation camera, and storage medium
CN111815754A (en) Three-dimensional information determination method, three-dimensional information determination device and terminal equipment
CN110866497B (en) Robot positioning and mapping method and device based on dotted line feature fusion
CN112991178B (en) Image splicing method, device, equipment and medium
CN108228057B (en) Touch inflection point correction method and device and touch screen
CN111767965B (en) Image matching method and device, electronic equipment and storage medium
US20200005078A1 (en) Content aware forensic detection of image manipulations
CN108109176A (en) Articles detecting localization method, device and robot
CN110490190A Structured image character recognition method and system
US10600202B2 (en) Information processing device and method, and program
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
CN113392673A (en) Image correction method, device, equipment and computer readable storage medium
CN111141208A (en) Parallel line detection method and device
CN113139905A (en) Image splicing method, device, equipment and medium
CN111368860B (en) Repositioning method and terminal equipment
CN113496139B (en) Method and apparatus for detecting objects from images and training object detection models
CN113361511A (en) Method, device and equipment for establishing correction model and computer readable storage medium
CN114882115A (en) Vehicle pose prediction method and device, electronic equipment and storage medium
CN111783180B (en) Drawing splitting method and related device
US20210073580A1 (en) Method and apparatus for obtaining product training images, and non-transitory computer-readable storage medium
CN111429399A (en) Straight line detection method and device
US11763543B2 (en) Method and device for identifying state, electronic device and computer -readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination