CN115578475A - Image storage method, device, readable medium and equipment - Google Patents


Info

Publication number
CN115578475A
CN115578475A (application CN202211272653.6A)
Authority
CN
China
Prior art keywords
label
layer
image
value
pixel
Prior art date
Legal status
Pending
Application number
CN202211272653.6A
Other languages
Chinese (zh)
Inventor
耿立帅
周超
沈小勇
吕江波
Current Assignee
Shenzhen Smartmore Technology Co Ltd
Original Assignee
Shenzhen Smartmore Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Smartmore Technology Co Ltd
Priority claimed from application CN202211272653.6A
Published as CN115578475A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection

Abstract

The invention provides an image storage method, apparatus, readable medium and device. The method comprises: obtaining the label layer of a target image; determining, according to the feature information corresponding to each label type, the position area information and label type of each label in the label layer; obtaining the first image of each label from the label layer, where the first image of a label is the image within that label's position area in the label layer; for each label in the label layer, compression-encoding the label's first image to obtain the label's compressed code; and correspondingly storing the compressed code of each label, the label type of each label, the position area information of each label, and the information of the original background layer. Because only the compressed codes, label types and position area information of the labels are stored, rather than the entire label layer, the storage space occupied is reduced.

Description

Image storage method, device, readable medium and equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a readable medium, and a device for storing an image.
Background
In the field of defect detection, users often need to take pictures of parts, machines and the like that require defect detection as original background layers, and then annotate the defects on label layers corresponding to those original background layers to obtain labeled pictures. A labeled picture comprises an original background layer and a label layer of the same size. In the prior art, after a labeled picture is obtained, it is usually stored on a server. Specifically, to ensure that the labeled picture is not distorted, the server side losslessly compresses the original background layer and the label layer separately, and then stores the two losslessly compressed layers correspondingly on a file server.
However, even after lossless compression, the original background layer and the label layer still occupy considerable storage space. As more and more labeled pictures need to be stored, the file server may run out of space, making it difficult to satisfy the storage requirements of a large number of labeled pictures.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, a readable medium, and a device for storing an image.
In a first aspect, an embodiment of the present application discloses an image storage method, which is applied to a server, and the image storage method includes:
acquiring a labeling layer of a target image; wherein the target image comprises: an original background layer and the label layer; the marking layer is used for displaying marks made by users on the defects in the original background layer;
determining position area information of each label and a label type of each label in the label layer according to the characteristic information corresponding to each label type;
acquiring a first image of each label from the label layer; the first image of the label is an image in the position area of the label in the label layer;
for each label in the label layer, performing compression coding on the labeled first image to obtain a compressed code of the label;
correspondingly storing the compressed code of each label in the label layer, the label type of each label, the position area information of each label and the information of the original background layer; and the information of the original background image layer is used for acquiring the original background image layer.
Optionally, in the above method for storing an image, the compressed code of the label is a plurality of N-bit data corresponding to the label; wherein N is a positive integer; bit positions in the plurality of N-bit data corresponding to the labels correspond to pixel points in the labeled first image one by one; and the value of each bit in the N bits of data is set according to the pixel value of the pixel point corresponding to the bit.
Optionally, in the method for storing an image, the compressing and encoding the labeled first image for each label in the label layer to obtain a compressed code of the label includes:
carrying out binarization processing on the labeled first image aiming at each label in the label layer to obtain a labeled second image;
determining the total number M of the N-bit data corresponding to the label according to the total pixel point number of the labeled second image; wherein M is a positive integer;
constructing M initial N-bit data corresponding to the labels; the value of each bit of the initial N-bit data is a first specific value;
for each pixel point in the labeled second image, if the pixel value of the pixel point is equal to a second specific value, determining N-bit data of the pixel point in M initial N-bit data and a bit corresponding to the pixel point in the N-bit data according to the serial number of the pixel point, and assigning the bit corresponding to the pixel point in the N-bit data to be the second specific value;
and taking the M assigned N-bit data corresponding to the label as the compression coding of the label.
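The bit-packing scheme described in the steps above can be sketched in Python. This is a minimal sketch only: the choice of `n_bits`, the use of 0 and 1 as the first and second "specific values", and all names are illustrative assumptions, since the claims leave N and the specific values open.

```python
def pack_mask(mask_flat, n_bits=32):
    """Pack a flat binarized mask (one 0/1 value per pixel) into M n-bit words.

    M = ceil(num_pixels / n_bits); bit j of word i corresponds to pixel
    i * n_bits + j, so bits and pixels correspond one to one. Every word
    starts at 0 (standing in for the 'first specific value'); bits whose
    pixel equals 1 (the 'second specific value') are set.
    """
    num_pixels = len(mask_flat)
    m = (num_pixels + n_bits - 1) // n_bits   # total number M of n-bit words
    words = [0] * m
    for idx, px in enumerate(mask_flat):
        if px == 1:                           # pixel equals the second specific value
            word_i, bit_j = divmod(idx, n_bits)
            words[word_i] |= 1 << bit_j       # assign that bit
    return words

def unpack_mask(words, num_pixels, n_bits=32):
    """Inverse operation: recover the flat 0/1 mask from the packed words."""
    return [(words[i // n_bits] >> (i % n_bits)) & 1 for i in range(num_pixels)]
```

For a 64-pixel mask packed into 32-bit words, two words suffice instead of 64 stored pixel values, which is the source of the space saving the text describes.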
Optionally, in the above method for storing an image, the compressing and encoding the first image of the annotation for each annotation in the annotation layer to obtain a compressed code of the annotation includes:
carrying out binarization processing on the labeled first image aiming at each label in the label layer to obtain a labeled second image;
respectively representing a plurality of pixel points with continuous positions and equal pixel values in the labeled second image as specific data; wherein the value of the specific data is determined according to the number of the represented pixel points and the pixel values of the represented pixel points;
and taking all the specific data used to represent the label as the compressed code of the label.
Optionally, in the method for storing an image, the representing, as one specific datum, a plurality of pixel points that are located consecutively and have equal pixel values in the labeled second image includes:
traversing all pixel points of the labeled second image;
if the pixel value of the traversed pixel point is equal to that of the previous pixel point, incrementing the counter by one; wherein the initial value of the counter is zero;
if the pixel value of the traversed pixel point is not equal to that of the previous pixel point and the value of the previous pixel point is the first specific value, multiplying the counter value by a third specific value to obtain one specific datum, storing the obtained specific datum into the queue corresponding to the label, then clearing the counter and incrementing it to one;
if the pixel value of the traversed pixel point is not equal to that of the previous pixel point and the value of the previous pixel point is the second specific value, multiplying the counter value by a fourth specific value to obtain one specific datum, storing the obtained specific datum into the queue corresponding to the label, then clearing the counter and incrementing it to one;
wherein the compression encoding of the annotation using all the specific data representing the annotation comprises:
and taking all the specific data stored in the queue corresponding to the label as the compression code of the label.
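A minimal sketch of this run-length scheme in Python, assuming +1 and -1 as the third and fourth "specific values" so that the sign of each datum records which pixel value the run had (the claims leave these values open; all names here are illustrative):

```python
def rle_encode(mask_flat, bg_mark=1, fg_mark=-1):
    """Run-length encode a flat binarized mask into a queue of specific data.

    Each run of consecutive equal pixels becomes one datum: the run length
    multiplied by bg_mark for background runs (pixel value 0) or by fg_mark
    for foreground runs. bg_mark/fg_mark stand in for the patent's
    third/fourth specific values (an assumed choice).
    """
    queue, count, prev = [], 0, None
    for px in mask_flat:                 # traverse all pixel points
        if prev is None or px == prev:
            count += 1                   # same value as previous pixel: count it
        else:                            # value changed: flush the finished run
            queue.append(count * (bg_mark if prev == 0 else fg_mark))
            count = 1                    # clear, then count the current pixel
        prev = px
    if count:                            # flush the final run
        queue.append(count * (bg_mark if prev == 0 else fg_mark))
    return queue

def rle_decode(queue):
    """Expand the queue of specific data back into the flat 0/1 mask."""
    out = []
    for datum in queue:
        out.extend([0] * datum if datum > 0 else [1] * (-datum))
    return out
```

Because annotations are small contiguous blotches on a mostly empty layer, the queue is typically far shorter than the pixel count.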
Optionally, in the above method for storing an image, after correspondingly storing the compressed code of each label in the label layer, the label type of each label, the position area information of each label, and the information of the original background layer (the information of the original background layer being used to obtain the original background layer), the method further includes:
receiving an acquisition request of a target image sent by a client;
sending the path of the original background layer of the target image, the compressed code of each label in the label layer, the label type of each label, and the position area information of each label to the client, so that the client acquires the original background layer from a file server through the path of the original background layer, and generates a label layer through the compressed code of each label, the label type of each label, and the position area information of each label in the label layer; the original background image layer is stored in the file server, and the compressed code of each label, the label type of each label and the position area information of each label in the label image layer are stored in a database of the server.
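The client-side regeneration step above can be sketched as follows: each stored record (label type, position area, compressed code) is decoded and painted back onto a blank layer of the original background layer's size. The record layout, the color map, and the bit-packing convention are assumptions made for illustration, not the patent's actual storage format.

```python
def rebuild_layer(height, width, records, colors, n_bits=32):
    """Regenerate a label layer from stored per-label records.

    `records` is a list of (label_type, (top, left, h, w), words) tuples:
    the stored label type, position-area info, and the label's compressed
    code as packed n-bit words. `colors` maps a label type to the pixel
    value to paint, i.e. the type's feature information. All names are
    hypothetical.
    """
    layer = [[0] * width for _ in range(height)]   # blank layer
    for label_type, (top, left, h, w), words in records:
        for i in range(h * w):                     # pixels of the first image
            if (words[i // n_bits] >> (i % n_bits)) & 1:   # foreground bit set
                r, c = divmod(i, w)
                layer[top + r][left + c] = colors[label_type]
    return layer
```

The regenerated layer is then superimposed on the original background layer fetched from the file server to display the target image.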
Optionally, in the method for storing an image, the determining, according to feature information corresponding to each annotation type, position area information of each annotation in the annotation layer and an annotation type of each annotation includes:
for each labeling type, screening all pixel points belonging to the labeling type from the labeling layer according to the characteristic information of the labeling type;
determining each connected domain belonging to the label type in the label layer and the position information of the connected domain according to all the screened pixel points belonging to the label type; and the position information of the connected domain is the marked position area information.
In a second aspect, an embodiment of the present application discloses an image storage device, which is applied to a server, and the image storage device includes:
the first acquisition unit is used for acquiring a label layer of a target image; wherein the target image comprises: an original background layer and the label layer; the marking layer is used for displaying marks made by users on the defects in the original background layer;
the determining unit is used for determining the position area information of each label and the label type of each label in the label layer according to the characteristic information corresponding to each label type;
the second acquisition unit is used for acquiring the first image of each label from the label layer; the first image of the label is an image in the position area of the label in the label layer;
a compressing unit, configured to perform compression coding on the labeled first image for each label in the label layer to obtain a compression code of the label;
the storage unit is used for correspondingly storing the compressed codes of each label, the label type of each label, the position area information of each label and the information of the original background layer in the label layer; and the information of the original background image layer is used for acquiring the original background image layer.
In a third aspect, an embodiment of the present application discloses a computer readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present application discloses an image storage device, including:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as in any one of the first aspects above.
Based on the image storage method provided by the embodiment of the present invention, after the annotation layer of the target image is obtained, the position area information and annotation type of each annotation in the annotation layer are determined according to the feature information corresponding to each annotation type, and the first image of each annotation is then obtained from the annotation layer. The first image of an annotation is the image within that annotation's position area in the annotation layer. Each annotation's first image is then compression-encoded to obtain the annotation's compressed code. Finally, the target image can be stored simply by correspondingly storing the compressed code, annotation type and position area information of each annotation in the annotation layer together with the information of the original background layer. Before compression, each annotation's first image covers only a small part of the whole annotation layer, so the storage space occupied is greatly reduced; and because the annotation layer can be regenerated from the stored compressed codes, annotation types and position area information, storing the annotation layer in this way introduces no distortion. The server can therefore meet the storage requirements of a large number of target images while ensuring that the target images are not distorted.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic flowchart of an image storage method according to an embodiment of the present application;
fig. 2 is a schematic composition diagram of a target image according to an embodiment of the present disclosure;
fig. 3 is a schematic view of a flow of acquiring a label layer of a target image according to an embodiment of the present application;
fig. 4 is a schematic flow chart illustrating a process of determining position area information of each label and a label type of each label in a label layer according to feature information corresponding to each label type according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating a method for obtaining a compressed code for each tag according to an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart of another method for obtaining compression coding of each label according to an embodiment of the present application;
fig. 7 is a schematic flowchart illustrating a process of representing a plurality of pixels as specific data according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a process for a client to obtain a target image according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an image storage device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
Referring to fig. 1, an embodiment of the present application provides an image storage method, which is applied to a server (also may be understood as a backend), and the image storage method specifically includes the following steps:
s101, obtaining an annotation layer of a target image, wherein the target image comprises: the device comprises an original background layer and a labeling layer, wherein the labeling layer is used for displaying a label made by a user on a defect in the original background layer.
The target image in the embodiment of the present application may be understood as the labeled image mentioned in the foregoing background. The target image is obtained by combining an original background layer and a label layer, and the label layer is used for displaying labels made by users to defects in the original background layer, so that various labels on the original background layer can be displayed on the target image. And the marked position is the position of the defect on the original background image layer. And the original background layer and the label layer have the same size. The original background image layer can be understood as an original picture of the defect to be detected, and can be, for example, a part diagram, a building structure diagram, a mechanical diagram of a machine, and the like, including but not limited to pictures related to the industrial field. And superposing the labeling layer on the original background layer to obtain a target image.
For example, as shown in fig. 2, the original background layer 201 is a picture of a spring washer, on which a crack defect 2011 can be seen; on the annotation layer 202, a thick black pen stroke is painted at the position of the crack defect 2011 to form an annotation. The annotation layer 202 is superimposed on the original background layer 201 to form the target image 203, on which the position of the defect on the spring washer can be seen through the thick black annotation.
Optionally, referring to fig. 3, one embodiment of performing step S101 includes the following steps:
s301, the client sends an acquisition request of the original background layer to the server.
When a user needs to perform defect annotation on an original background layer, the client sends a request for the original background layer to the server. Specifically, the request may include identifying information of the original background layer, such as its name or serial number, which is not limited in this embodiment of the application. The request is used to ask for the original background layer.
And S302, the server returns the path of the original background image layer to the client.
Specifically, in order not to affect the operating efficiency of the server, all the original background layers may be stored in the file server, and the path of the original background layer may be understood as a storage path in which the original background layer is stored in the file server.
And S303, the client acquires the original background image layer from the file server through the path of the original background image layer.
Illustratively, the client sends the path of the original background layer to the file server to request to obtain the original background layer. The file server locally obtains the original background image layer through the path of the original background image layer, and then returns the original background image layer to the client.
Optionally, the file server may store the original background layer in the form of its compressed code; in that case what is returned to the client is the compressed code of the original background layer, and the client obtains the original background layer by decompressing it. The compressed code of the original background layer is obtained by compression-encoding the original background layer. Many specific compression-encoding methods are possible, for example lossless compression, which is not limited in the embodiments of the present application.
It should be noted that steps S301 to S303 are only one of the manners for the client to obtain the original background image layer, and there are many manners for obtaining the original background image layer, for example, when the original background image layer is stored locally in the client, the original background image layer may be directly obtained from the client locally, or after the client sends an obtaining request of the original background image layer to the server, the server directly returns the original background image layer to the client. The method and the device for acquiring the original background image layer by the client are not limited.
S304, the client creates a labeling layer corresponding to the original background layer, and labels defects in the original background layer on the labeling layer to obtain a target image, wherein the target image comprises the original background layer and the labeling layer.
The label layer corresponding to the original background layer is a layer of the same size as the original background layer that is overlaid on it. The initial state of the label layer may be a blank layer. The user annotates the defects of the original background layer on the label layer through the client, so that a label is displayed on the label layer at the position of each defect in the original background layer. The labels displayed on the label layer, superimposed on the original background layer, form an annotated image, i.e., the target image, for example the target image 203 shown in fig. 2.
Alternatively, for different types of defects, distinction can be made by using labels with different characteristics. For example, when the defect is a crack, a black drawing tool is used for marking. When the defect is stain, a drawing tool with blue color is used for marking, and different marking types can be distinguished according to the color characteristics. Wherein the annotation type can be understood as a defect type. Alternatively, in addition to distinguishing different label types according to color characteristics, the label types may also be distinguished according to other characteristics such as shape characteristics. The drawing tool may be a brush, a curved line, etc., which is not limited in the embodiments of the present application.
For example, the process of executing step S304 may be: a user creates a labeling layer corresponding to an original background layer through some software related to defect labeling installed on a client, and then labels defects in the original background layer on the labeling layer by using a drawing tool provided by the software. After the user finishes labeling, the original background image layer and the corresponding labeling image layer are overlapped together to form a target image.
S305, the client returns the annotation layer of the target image to the server.
After obtaining the target image, the user needs to save it. When the client detects the user's operation of saving the target image, the client returns the annotation layer of the target image to the server. As described in step S302, the original background layer of the target image is already stored on the server side, so the client may return only the annotation layer for saving, without returning the original background layer. In other embodiments, the client may also return the whole target image to the server, and the server extracts the annotation layer to be stored from the target image.
It should be noted that, when the user wants to readjust (also may be understood as updating) the annotation layer of the edited target image, the client may send an acquisition request of the target image to the server, and then after the client acquires the annotation layer of the target image, the client edits the annotation layer again to obtain the updated target image. The updated target image comprises an original background image layer and an updated labeling image layer. When the subsequent client stores the updated target image, only the updated label layer needs to be returned to the server, and then the server stores the updated label layer through the process shown in fig. 1.
It should be further noted that there are many implementation manners for the server to obtain the annotation layer of the target image, including but not limited to what is provided in the examples of the present application.
S102, according to the characteristic information corresponding to each label type, determining the position area information of each label and the label type of each label in the label layer.
After the server side obtains the annotation layer, it needs to determine where on the annotation layer the user made annotations, and which annotation type each annotation belongs to. The annotation type in the embodiment of the present application may be understood as a defect type. As described above, the original background layer may contain multiple defect types, and to distinguish annotations of different types, annotations of different annotation types are given different feature information when the user annotates on the annotation layer. That is, there is a one-to-one correspondence between annotation type and feature information, for example between annotation type and color feature. For details, reference may be made to the foregoing description of annotation types, which is not repeated here.
Different label types have different characteristic information, for example, if the label types are distinguished by colors, different label types have different color characteristics. The correspondence between the annotation type and the feature information may be obtained from the client in advance before executing step S102. Specifically, when the user labels the label layer on the client, the feature information corresponding to different label types is predefined. For example, when the mark type is defined as crack, the mark of the blue color feature is used. When the annotation type is stain, an annotation of a black color feature is used. After defining the feature information corresponding to each label type, the user sends the feature information corresponding to each label type to the server, and then the server can determine the position area information of each label and the label type of each label in the label layer according to the feature information corresponding to each label type when executing step S102.
Optionally, in other embodiments, when the feature information corresponding to each annotation type defined by the user in the client is changed, the client sends the latest feature information corresponding to each annotation type to the server, so that the server can accurately determine the location area information of each annotation in the annotation layer and the annotation type of each annotation when executing step S102.
Specifically, the process of executing step S102 is: for the feature information corresponding to each annotation type, screen out from the annotation layer the pixel points that match that feature information, then determine the position area information of the annotations belonging to that annotation type from the matching pixel points, and finally obtain the position area information and annotation type of every annotation in the annotation layer.
There are many ways to express a label's position area information: for example, the coordinates of the top-left and bottom-right vertices of the label's position area may be used, or the coordinates of all pixel points in the position area, which is not limited in this embodiment of the application.
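The first representation mentioned above, top-left and bottom-right vertex coordinates, can be computed directly from a label's pixel coordinates, for instance:

```python
def bounding_box(pixel_coords):
    """Return ((top, left), (bottom, right)) for a label's pixel coordinates,
    i.e. the top-left and bottom-right vertices of its position area.
    `pixel_coords` is a list of (row, col) pairs belonging to one label."""
    rows = [r for r, _ in pixel_coords]
    cols = [c for _, c in pixel_coords]
    return (min(rows), min(cols)), (max(rows), max(cols))
```

Storing two vertices instead of every pixel coordinate keeps the position area information compact.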
Optionally, referring to fig. 4, in an embodiment of the present application, an implementation of step S102 is performed, including:
s401, aiming at each marking type, screening all pixel points belonging to the marking type from the marking layer according to the characteristic information of the marking type.
And aiming at each labeling type, screening all pixel points with the characteristic information of the labeling type from the labeling layer according to the characteristic information of the labeling type, and determining the pixel points as the pixel points belonging to the type.
For example, if the feature information of an annotation type is the pixel value corresponding to blue, the pixels in the annotation layer whose values equal that blue pixel value are screened out and determined to be pixels of that annotation type.
Optionally, in a specific embodiment of the present application, before step S401 the method further includes: graying the annotation layer. When the feature information of an annotation type is a color feature, graying the annotation layer reduces each pixel value to a single channel, which reduces the workload of comparing pixel values against the feature information of each annotation type. It should be noted that when the annotation layer is grayed, the feature information of each annotation type must be grayed in the same way, to ensure that the matching pixels are screened out accurately. For example, if the feature information of an annotation type is red and red converts to a gray value of 128 (an illustrative value), then screening for red pixels amounts to finding the pixels with value 128 in the grayed annotation layer.
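A minimal sketch of this graying-and-screening step, assuming the annotation layer is held as nested lists of RGB tuples and using the common ITU-R BT.601 luminance weights (the function names are hypothetical, and the gray value used for matching is whatever the annotation type's color converts to):

```python
def to_gray(rgb_image):
    # Convert an RGB image (rows of (r, g, b) tuples) to single-channel
    # gray values using the ITU-R BT.601 luminance weights.
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def screen_pixels(gray_image, target_value):
    # Return the (row, col) coordinates of all pixels whose grayed value
    # matches the (also grayed) feature value of the annotation type.
    return [(y, x)
            for y, row in enumerate(gray_image)
            for x, v in enumerate(row)
            if v == target_value]

# Hypothetical 2x3 layer: two pure-blue annotation pixels on black background.
layer = [[(0, 0, 255), (0, 0, 0), (0, 0, 255)],
         [(0, 0, 0),   (0, 0, 0), (0, 0, 0)]]
gray = to_gray(layer)
blue_gray = int(0.114 * 255)          # gray value of pure blue = 29
print(screen_pixels(gray, blue_gray)) # -> [(0, 0), (0, 2)]
```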
S402, according to all the screened pixels belonging to the annotation type, determine each connected domain of that annotation type in the annotation layer and the position information of each connected domain, where the position information of a connected domain is the position area information of an annotation.
In a defect detection scene, most defects on the original background layer appear as concentrated blocks of pixels, so after the user marks a defect on the annotation layer by smearing or a similar operation, the resulting annotation also appears as a concentrated pixel block. That is, in the annotation layer, all pixels belonging to the same annotation are connected. Therefore, for each annotation type, the connected domains among all the screened pixels of that type are found, and each connected domain is one annotation of that type on the annotation layer. The position of the connected domain is the position of the annotation, and the position information of the connected domain is the annotation's position area information.
The position information of a connected domain describes where the connected domain lies in the annotation layer. Optionally, it may be the coordinates of the top-left and bottom-right pixels of the minimum bounding rectangle of the connected domain; it may also be other information, and the embodiment of the present application does not limit its specific representation.
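A minimal breadth-first-search sketch of step S402, grouping the screened pixel coordinates into 4-connected domains and returning each domain's minimum bounding rectangle (in practice a library routine such as OpenCV's `connectedComponentsWithStats` could do this; the sketch below makes no such assumption):

```python
from collections import deque

def connected_bboxes(points):
    # Group 4-connected pixel coordinates into connected domains and
    # return the minimum bounding rectangle of each domain, as
    # ((top, left), (bottom, right)).
    remaining = set(points)
    boxes = []
    while remaining:
        seed = remaining.pop()
        queue = deque([seed])
        ys, xs = [seed[0]], [seed[1]]
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (ny, nx) in remaining:
                    remaining.remove((ny, nx))
                    queue.append((ny, nx))
                    ys.append(ny)
                    xs.append(nx)
        boxes.append(((min(ys), min(xs)), (max(ys), max(xs))))
    return boxes

# Two separate 4-connected blobs -> two annotations of this type.
pts = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6)]
print(sorted(connected_bboxes(pts)))  # -> [((0, 0), (1, 1)), ((5, 5), (5, 6))]
```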
S103, acquire the first image of each annotation from the annotation layer, where the first image of an annotation is the image within the annotation's position area in the annotation layer.
Since the position area information of each annotation in the annotation layer was determined in step S102, the image at each annotation's position area (i.e., the annotation's first image) can be obtained from the annotation layer according to that position area information.
For example, for each annotation in the annotation layer, the image at the annotation's position area is cropped from the annotation layer according to the annotation's position area information, yielding the annotation's first image. If the position area information from step S102 is the position information of the minimum bounding rectangle of a connected domain, then step S103 acquires the image within each connected domain's minimum bounding rectangle on the annotation layer (i.e., each annotation's first image).
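Under the assumption that the annotation layer is held as a nested list of pixel values and the position area is a bounding rectangle with inclusive corners, the crop is a simple slice:

```python
def crop(layer, top_left, bottom_right):
    # Cut out the sub-image covered by the annotation's minimum
    # bounding rectangle (both corners inclusive).
    (top, left), (bottom, right) = top_left, bottom_right
    return [row[left:right + 1] for row in layer[top:bottom + 1]]

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(crop(grid, (1, 1), (2, 2)))  # -> [[1, 1], [1, 0]]
```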
The first image of each annotation can be obtained in many ways, and besides being obtained by clipping from the annotation layer, the first image of each annotation can also be obtained by copying the image at the position area of the annotation in the annotation layer, which is not limited in the embodiment of the present application.
As can be seen from step S103, the first image of each annotation occupies only a small portion of the annotation layer, and the area of the annotation layer outside the annotations carries no information. Therefore, when storing the annotation layer, only the first image of each annotation needs to be stored. Compared with the memory occupied by the whole annotation layer, the memory occupied by all the annotations' first images is small; and because the area outside the annotations contains nothing, storing only the first images does not distort the annotation layer, thereby greatly saving storage space.
S104, for each annotation in the annotation layer, compression-encode the annotation's first image to obtain the annotation's compressed code.
Step S103 obtained the first image of each annotation in the annotation layer. To save storage space when these first images are subsequently stored, the embodiment of the present application compression-encodes each annotation's first image to obtain each annotation's compressed code. The compressed code can later be decompressed to restore the annotation's first image.
There are many compression encoding schemes; for example, no compression, bitmap encoding, or run-length (continuity) encoding may be adopted. The embodiment of the present application does not limit the compression encoding scheme.
Compared with the prior-art approach of lossless-compressing the whole annotation layer, compression-encoding only the first image of each annotation occupies less storage space, and no content of the annotation layer is lost.
Optionally, the first image of each label in step S104 may be a grayed first image, or may be an image that is not grayed, which is not limited in this embodiment of the application.
Optionally, in an embodiment of the present application, an annotation's compressed code is a plurality of N-bit data corresponding to the annotation. The bits in the corresponding N-bit data correspond one-to-one with the pixels in the annotation's first image, where N is a positive integer, and the value of each bit is set according to the pixel value of its corresponding pixel.
Specifically, in an intN scenario all data exists as N-bit words; likewise, the pixel values of the annotation's first image are each stored as N-bit data. With bitmap compression encoding, the pixel value of each pixel in the annotation's first image is compressed into a single bit, so every N consecutive pixels are represented by one N-bit word, and the annotation's first image is represented by the annotation's plurality of N-bit data. Compared with the original first image, which spends one N-bit word per pixel, the compressed code spends only one bit per pixel, greatly saving memory.
Optionally, referring to fig. 5, in an embodiment of the present application, if an annotation's compressed code is the annotation's plurality of N-bit data, step S104 may be executed as follows:
S501, for each annotation in the annotation layer, binarize the annotation's first image to obtain the annotation's second image.
When the feature information of each annotation type is color information, the annotation type of each annotation has already been established in step S102, i.e., its feature information is known. Therefore, in subsequent processing the color information can be discarded and the annotation's first image binarized. Binarization converts every pixel value to 0 or 1; binarizing the annotation's first image yields the annotation's second image, in which each pixel value is 0 or 1.
Although the color of a pixel can no longer be read from its binarized value, the annotation type of each annotation was determined in step S102, and annotation types correspond to color feature information, so the color information of each annotation's first image can still be restored later. For example, suppose an annotation type in the annotation layer is "stain", its feature information is blue, and the pixel value of blue is a. Binarizing the annotation's first image yields the annotation's second image, in which the pixels whose value was a become pixels with value 1. When the annotation's first image later needs to be restored, the restoration uses the feature information of the annotation's type.
Optionally, before the labeled first image is subjected to binarization processing, graying processing may be performed on the labeled first image, so that a pixel value of a pixel point of the labeled first image is changed into a pixel value of a single channel, which is convenient for subsequent binarization operation.
It should be noted that the annotation's second image has the same number of pixels as the first image; the only difference is that the pixel values are binarized.
S502, determine the total number M of N-bit data corresponding to the annotation according to the total number of pixels of the annotation's second image, where M is a positive integer.
The labeled N-bit data may be N-bit data representing the labeled second image. The total number M of the N-bit data corresponding to the label can be understood as the total number M required when the N-bit data is used to represent the labeled second image. Wherein M is a positive integer.
Specifically, the total number M may be calculated in either of two ways. (1) Divide the total number of pixels of the annotation's second image by N to obtain an integer quotient as a first calculated value; if the division leaves a remainder, add one to the first calculated value to obtain M, and if there is no remainder, M is the first calculated value itself. (2) Divide the total number of pixels of the annotation's second image by N, take the quotient as the first calculated value, and always add one to it to obtain M. Either way yields the total number M of N-bit data corresponding to the annotation.
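The two calculations above amount to a ceiling division and a simpler over-allocating variant; a sketch (function names are hypothetical):

```python
def words_needed_exact(pixel_count, n):
    # Way (1): ceiling division -- add one word only when a remainder exists.
    m = pixel_count // n
    return m + 1 if pixel_count % n else m

def words_needed_padded(pixel_count, n):
    # Way (2): always quotient + 1. Over-allocates one word when pixel_count
    # is an exact multiple of n, but avoids the remainder check.
    return pixel_count // n + 1

print(words_needed_exact(70, 32))   # -> 3
print(words_needed_exact(64, 32))   # -> 2
print(words_needed_padded(64, 32))  # -> 3
```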
Since the total number of pixel points of the annotated second image is consistent with the total number of pixel points of the annotated first image, the total number of pixel points of the annotated first image may be calculated after step S103 shown in fig. 1, and the total number of pixel points of the annotated first image (which is also the total number of pixel points of the annotated second image) is obtained. After step S501, the total number of pixel points of the annotated second image may be calculated to obtain the total number of pixel points of the annotated second image. The embodiment of the present application does not limit the manner of calculating the total number of pixel points of the annotated second image.
Illustratively, the total number of pixels of the annotation's second image may be calculated as follows: when the annotation's position area information is the coordinates of the top-left and bottom-right corners of the minimum bounding rectangle of the connected domain, the total number of pixels of the second image is the area of the rectangle defined by those two corners (its width multiplied by its height).
S503, construct the M initial N-bit data corresponding to the annotation, where every bit of each initial N-bit datum is a first specific value.
The first specific value is one of the pixel values after the binarization processing. For example, in the labeled second image after the binarization processing, the pixel values of the pixel points are all 0 or 1, and then the first specific value may be 0 or 1. That is, the values of each bit of the initial N-bit data may both default to 0 or 1.
For example, step S503 may be executed by constructing M buckets in advance, each holding one N-bit datum. All N initial bits in each bucket are the first specific value, and the buckets are numbered 0, 1, 2, ..., M-1.
S504, for each pixel in the annotation's second image, judge whether the pixel's value equals a second specific value.
The second specific value is one of the binarized pixel values and is not equal to the first specific value. For example, if the binarized pixel values are all 0 or 1, then when the first specific value is 0 the second specific value is 1, and when the first specific value is 1 the second specific value is 0.
For each pixel in the annotation's second image, if the pixel's value is judged equal to the second specific value, it differs from the bit values in the initial N-bit data, so steps S505 to S507 must be executed, i.e., the pixel's corresponding bit in its N-bit datum must be assigned. If the pixel's value is judged not equal to the second specific value, it equals the first specific value and already matches the initial bit value, so no operation is needed and no assignment is required.
It should be noted that, when executing step S504, the pixels of the annotation's second image may be traversed sequentially by serial number, checking whether each traversed pixel's value equals the second specific value; the traversal may proceed column by column or row by row. Illustratively, after reading the annotation's second image with OpenCV, all its pixels may be stored in memory row by row and each stored pixel traversed. Optionally, the comparison against the second specific value may instead be made for all pixels of the second image separately. The embodiment of the present application does not limit how the comparison is performed.
S505, according to the pixel's serial number, determine the N-bit datum to which the pixel belongs among the M initial N-bit data, and the pixel's corresponding bit within that datum.
Specifically, step S505 determines, for a pixel whose value equals the second specific value, which bit of which N-bit datum that value should be written into. The pixel's serial number is its position order within the second image. For example, the pixel in the first row, first column of the annotation's second image has serial number 0; the pixel in the second row, first column has serial number 1; the pixel in the third row, first column has serial number 2; and so on for every pixel.
Optionally, in a specific embodiment of the present application, one implementation of step S505 is: divide the pixel's serial number by N and take the quotient as the serial number, among the M initial N-bit data, of the N-bit datum to which the pixel belongs; take the pixel's serial number modulo N as the serial number of the pixel's corresponding bit within that datum. For example, if N is 32 and the pixel's serial number is 32, the quotient is 1, so the pixel belongs to the N-bit datum with serial number 1; 32 modulo 32 is 0, so the pixel corresponds to the bit with serial number 0 in that datum. All serial numbers in the embodiments of the present application are counted from 0.
For example, if step S503 was executed by constructing M buckets in advance, step S505 proceeds as follows: according to the pixel's serial number, determine the bucket to which the pixel belongs among the M buckets and the pixel's corresponding position within that bucket. Specifically, the quotient of the pixel's serial number divided by N is the serial number of the pixel's bucket, and the pixel's serial number modulo N is its position within the bucket. For example, when a pixel's serial number is 32, the quotient is 1, so the pixel belongs to bucket 1; 32 modulo 32 is 0, so the pixel occupies bit 0 of bucket 1.
It should be noted that there are many ways to determine the N-bit data of the pixel point in the M initial N-bit data and determine the corresponding bit of the pixel point in the N-bit data, including but not limited to what is proposed in the embodiments of the present application.
S506, assign the second specific value to the pixel's corresponding bit in the N-bit datum.
Specifically, step S506 can be understood as: for each pixel of the annotation's second image whose value is the second specific value, assign the second specific value to the pixel's corresponding bit in the N-bit datum to which the pixel belongs.
S507, take the annotation's M assigned N-bit data as the annotation's compressed code.
After steps S504 to S506 have been performed for every pixel of the annotation's second image, the annotation's M assigned N-bit data are obtained and taken as the annotation's compressed code. Each bit of the compressed code corresponds one-to-one with a pixel of the annotation's second image, i.e., one bit expresses one pixel's value, thereby compressing the second image.
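A minimal sketch of steps S501–S507 for N = 32, with the first specific value 0 and the second specific value 1 (the function name is hypothetical; the second image is assumed to be already flattened in serial-number order):

```python
N = 32  # word width; the "intN" of the text

def pack_bitmap(binary_pixels):
    # binary_pixels: the annotation's second image flattened into a list
    # of 0/1 values, in traversal (serial-number) order.
    total = len(binary_pixels)
    m = total // N + (1 if total % N else 0)   # S502: total word count M
    words = [0] * m                            # S503: initial N-bit data, all bits 0
    for seq, value in enumerate(binary_pixels):
        if value == 1:                         # S504: equals the second specific value
            word_idx = seq // N                # S505: datum the pixel belongs to
            bit_idx = seq % N                  # S505: bit within that datum
            words[word_idx] |= 1 << bit_idx    # S506: assign the bit
    return words                               # S507: the compressed code

pixels = [0] * 32 + [1, 0, 1]   # 35 pixels -> M = 2 words
print(pack_bitmap(pixels))      # -> [0, 5]
```

One N-bit word per 32 pixels replaces one word per pixel, matching the 32x saving described above.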
Optionally, referring to fig. 6, in a specific embodiment of the present application, another implementation of step S104 is:
S601, for each annotation in the annotation layer, binarize the annotation's first image to obtain the annotation's second image.
The execution process and principle of step S601 may refer to step S501 shown in fig. 5, and are not described herein again.
S602, represent each run of positionally consecutive pixels with equal values in the annotation's second image as one specific datum, whose value is determined by the number of pixels represented and the pixel value of those pixels.
That is, each run of consecutive, equal-valued pixels in the annotation's second image is expressed as one specific datum. If the current running environment is intN, the specific datum may be one N-bit datum. Its value is determined by the number of pixels it represents and their pixel value; in other words, from the value of the specific datum one can recover both the run length and the pixel value.
Before compression, the pixel value of each pixel in the annotation's second image is represented by one datum; in an intN environment, by one N-bit datum each. After step S602, a run of consecutive equal-valued pixels that originally required multiple N-bit data is represented by a single specific datum, achieving compression and reducing storage.
Optionally, step S602 may be executed by traversing the pixels of the annotation's second image row by row or column by column, and representing each traversed run of consecutive equal-valued pixels as a specific datum in sequence.
Illustratively, a specific datum may be determined as follows: if the run's pixel value is the first specific value, the specific datum is the run length multiplied by a third specific value; if the run's pixel value is the second specific value, the specific datum is the run length multiplied by a fourth specific value. The third and fourth specific values can be set arbitrarily, for example -1 and 1 respectively.
For example, when traversing row by row, suppose the first row of the annotation's second image contains a run of 100 consecutive pixels with value 1; it is represented as 100 × 1 = 100. A following run of 10 consecutive pixels with value 0 is represented as 10 × (-1) = -10. It should be noted that there are many ways to determine the specific datum from the run length and pixel value, including but not limited to those proposed in the embodiments of the present application.
Optionally, referring to fig. 7, in an embodiment of the present application, an implementation manner of performing step S602 includes:
S701, traverse all pixels of the annotation's second image.
The way of traversing all pixels of the annotation's second image can refer to the related content on traversing pixels above, and is not repeated here.
S702, judge whether the traversed pixel's value equals that of the previous pixel.
For each pixel of the annotation's second image, judge whether the traversed pixel's value equals that of the previous pixel. If yes, the pixel continues a run of equal values, and step S703 is executed. If not, the run of equal values has ended, and step S704 is executed.
S703, add one to the value of the counter, where the counter's initial value is zero.
The counter is used for counting the number of the pixel points with continuous positions traversed at present and equal pixel values.
S704, judge whether the value of the previous pixel is the first specific value.
If the previous pixel's value is the first specific value, then the counted run before the currently traversed pixel consists of consecutive equal-valued pixels whose value is the first specific value, so steps S705 to S706 are executed, i.e., the specific datum is determined as in step S705.
If the previous pixel's value is not the first specific value (i.e., it is the second specific value), then the counted run before the currently traversed pixel consists of consecutive equal-valued pixels whose value is the second specific value, so steps S707 to S708 are executed, i.e., the specific datum is determined as in step S707.
S705, multiply the value of the counter by the third specific value to obtain a specific datum.
The third specific value can be set arbitrarily, for example -1. Multiplying the counter by the third specific value yields the specific datum representing the run of consecutive equal-valued pixels before the currently traversed pixel.
S706, store the obtained specific datum into the queue corresponding to the annotation, clear the counter, and add one to it.
The annotation's queue stores each calculated specific datum in calculation order. Since the specific datum calculated in step S705 represents the run of consecutive equal-valued pixels before the currently traversed pixel, and the current pixel must now be counted afresh, the counter is cleared and then incremented to one, indicating that the current run of equal-valued pixels has length one.
S707, multiply the value of the counter by the fourth specific value to obtain a specific datum.
Step S707 is similar in process and principle to step S705, differing only in the value by which the counter is multiplied; the fourth specific value may be set arbitrarily but must not equal the third specific value.
S708, store the obtained specific datum into the queue corresponding to the annotation, clear the counter, and add one to it.
The step S708 can be executed by referring to the step S706, which is not described herein again.
Through steps S702 to S708, all the specific data corresponding to the annotation (i.e., the specific data representing the annotation) are stored in the annotation's queue, and together they form the annotation's compressed code.
S603, take all the specific data representing the annotation as the annotation's compressed code.
That is, the annotation's compressed code is all of the specific data representing it. Optionally, the specific data may be ordered according to the order of the pixel runs they represent in the annotation's second image.
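A minimal sketch of steps S701–S708, using a list in place of the queue and taking -1 and 1 as the third and fourth specific values; the flush of the final run after the traversal ends is made explicit here, a detail the steps above leave implicit:

```python
def rle_encode(binary_pixels):
    # Encode a flattened binarized second image as signed run lengths:
    # a run of k ones -> +k, a run of k zeros -> -k.
    runs = []
    count = 0
    prev = None
    for value in binary_pixels:             # S701: traverse all pixels
        if value == prev:                   # S702: equal to the previous pixel?
            count += 1                      # S703: extend the current run
        else:
            if prev is not None:            # S704/S705/S707: emit the finished run
                runs.append(count if prev == 1 else -count)
            prev, count = value, 1          # S706/S708: restart the counter at one
    if prev is not None:
        runs.append(count if prev == 1 else -count)  # flush the final run
    return runs

print(rle_encode([1] * 100 + [0] * 10))  # -> [100, -10]
```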
S105, store, in correspondence, the compressed code of each annotation in the annotation layer, the annotation type of each annotation, the position area information of each annotation, and the information of the original background layer, where the information of the original background layer is used to acquire the original background layer.
Step S105 stores, in correspondence, the compressed code, annotation type, and position area information of each annotation in the annotation layer together with the information of the original background layer, so that when a user needs the target image, it can be obtained from these stored items.
The information of the original background layer can be used to acquire the original background layer. For example, it may be the path of the original background layer, from which the layer can be fetched from the file server; or it may be the compressed code of the original background layer, obtained by compressing the layer (for example, losslessly), which can later be decoded to restore the layer. The compressed code, annotation type, and position area information of each annotation in the annotation layer can be used to regenerate the annotation layer. Because an annotation's type corresponds to its feature information, each annotation's first image can be restored from its annotation type and compressed code; the restoration is the reverse of the process that produced the compressed code in steps S101 to S104, and is not repeated here. After an annotation's first image is restored, it is placed onto the annotation's position area in the annotation layer according to the annotation's position area information, yielding an annotation layer that displays the user's annotations of the defects in the original background layer (i.e., the annotation layer obtained in step S101).
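As a sketch of the reverse process for the signed run-length scheme above (assuming -1 and 1 as the third and fourth specific values, and that the width of the annotation's minimum bounding rectangle is known from its position area information):

```python
def rle_decode(runs, width):
    # Expand signed run lengths back into the binarized second image:
    # +k -> k ones, -k -> k zeros, then reshape into rows of `width`.
    flat = []
    for r in runs:
        flat.extend([1] * r if r > 0 else [0] * (-r))
    return [flat[i:i + width] for i in range(0, len(flat), width)]

print(rle_decode([3, -2, 1], 3))  # -> [[1, 1, 1], [0, 0, 1]]
```

The restored second image would then be recolored using the feature information of the annotation's type and pasted back at the annotation's position area in the annotation layer.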
Compared with the prior-art approach of storing a losslessly compressed annotation layer, correspondingly storing the compressed code, annotation type, and position area information of each annotation occupies less memory, does not affect the annotation layer regenerated from it, and can thus meet the storage needs of large numbers of annotated pictures.
Optionally, in a specific embodiment of the present application, the compressed code, annotation type, and position area information of each annotation in the annotation layer may be stored correspondingly on the file server.
Optionally, in another embodiment of the present application, the compressed code of each label in the label layer, the label type of each label, and the position area information of each label may be stored correspondingly in a database. For example, they can be persisted (i.e., saved) to a MySQL database. For each label in the label layer, the data structure storing the label's information in the database may be { LeftTop: {int, int}, RightDown: {int, int}, LabelType: str, Data: int32 array }. Specifically, "LeftTop: {int, int}" indicates the coordinates of the top-left corner point of the minimum bounding rectangle of the label's connected component (i.e., part of the label's position area information); "RightDown: {int, int}" indicates the coordinates of the bottom-right corner point of that rectangle (the other part of the position area information); "LabelType: str" indicates the annotation type of the label; and "Data: int32 array" is the compressed code of the label. When this information is stored correspondingly in the database and a user requests the target image, the server can return the compressed code, label type, and position area information of each label in the label layer directly from the database to the client. This differs from the prior art, in which the server returns the path of the label layer and the client must then fetch the losslessly compressed label layer from the file server through that path.
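As a sketch, the per-label record described above might be assembled as follows; the field layout is taken from the description, but the concrete values and helper name are illustrative assumptions, not the patent's actual schema:

```python
def make_annotation_record(left_top, right_down, label_type, data):
    # Hypothetical sketch of the per-label record persisted to the database:
    # the two bounding-rectangle corners, the label type, and the compressed
    # code stored as an int32 array.
    return {
        "LeftTop": left_top,      # {int, int}: top-left corner of the minimum bounding rectangle
        "RightDown": right_down,  # {int, int}: bottom-right corner of the same rectangle
        "LabelType": label_type,  # str: annotation type of the label
        "Data": list(data),       # int32 array: compressed code of the label
    }
```

Such a record can be serialized and written as one row per label, so a request for the target image becomes a simple database read rather than a file-server round trip.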
Compared with the prior art, this way of returning the label layer to the client reduces the number of call links, making it more convenient for the client to obtain the label layer.
Optionally, referring to fig. 8, in an embodiment of the present application, if the information of the original background layer is the path of the original background layer, the method may further include, after step S105:
s801, the client sends an acquisition request of the target image to the server.
The acquisition request is used to request the target image and includes at least identification information of the target image, for example, its name, serial number, and the like.
S802, the server side sends the path of the original background layer of the target image, the compression code of each label in the label layer, the label type of each label and the position area information of each label to the client side.
The original background layer is stored on a file server. It may be stored after compression; the compression method is not limited in this embodiment of the application and may, for example, be lossless. The compressed code of each label in the label layer, the label type of each label, and the position area information of each label are stored in a database of the server.
For the original background layer stored on the file server, the server returns only the path of the original background layer of the target image rather than the layer itself; the client subsequently fetches the original background layer from the file server through that path. Because the compressed code of each label, the label type of each label, and the position area information of each label in the label layer are stored in the server's database, the server can read them directly from the database and return them directly to the client. The client can then restore the label layer of the target image directly from this information.
S803, the client generates the label layer according to the compression code of each label, the label type of each label and the position area information of each label in the label layer.
Because the correspondence between label types and feature information is predefined by the user, for each label, the labeled first image can be restored by combining the feature information corresponding to the label's type with the label's compressed code; the position of that first image on the label layer is then determined from the label's position area information, and the label layer is thus generated.
S804, the client side obtains the original background image layer from the file server according to the path of the original background image layer.
Specifically, the client generates an acquisition request for the original background layer that carries the path of the original background layer and sends it to the file server. The file server returns the compressed encoding of the original background layer (i.e., the encoding obtained by compressing it), and the client then decodes this encoding to obtain the original background layer.
It should be noted that the order of executing step S803 and step S804 by the client does not affect the implementation of the embodiment of the present application.
In the embodiment of the present application, the server stores the label layer as the compressed code of each label, the label type of each label, and the position area information of each label in the label layer. The memory space occupied by this scheme is far smaller than that of directly storing the compressed whole label layer, so this information can be kept in a database instead of having to be stored on a file server. After the label layer is stored in the database, the call links involved in the client obtaining the label layer are reduced: the client no longer needs to receive a path and then request the layer from the file server, but can directly receive the compressed code, label type, and position area information of each label returned by the server and restore the label layer from them.
And S805, the client side obtains a target image according to the original background image layer and the label image layer.
After the client has generated the original background layer and the label layer, the label layer is superimposed on the original background layer to obtain the target image.
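The superposition in step S805 can be sketched as follows, under the assumption that a zero-valued pixel in the label layer means "no annotation here" (the patent does not fix this convention):

```python
def compose_target_image(background, annotation_layer):
    # Minimal sketch: overlay the label layer on the original background
    # layer. Annotated (non-zero) pixels replace the background pixel;
    # all other pixels keep the background value.
    return [
        [ann if ann != 0 else bg for bg, ann in zip(bg_row, ann_row)]
        for bg_row, ann_row in zip(background, annotation_layer)
    ]
```

In practice the layers would be multi-channel images and the overlay might use alpha blending, but the per-pixel "annotation wins where present" rule is the same.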
In the image storage method provided by the embodiment of the present application, after the annotation layer of the target image is obtained, the position area information of each annotation and the annotation type of each annotation in the annotation layer are determined according to the feature information corresponding to each annotation type, and the first image of each annotation is then obtained from the annotation layer. The first image of an annotation is the image within the annotation's position area in the annotation layer. Each annotated first image is then compression-encoded to obtain the annotation's compressed code. Finally, storing the target image only requires correspondingly storing the compressed code of each annotation in the annotation layer, the annotation type of each annotation, the position area information of each annotation, and the information of the original background layer; the target image can subsequently be regenerated from this stored information. Since the compressed code of each annotation covers, before compression, only a small image region of the whole annotation layer, the space occupied by storage is greatly reduced, while the annotation layer can still be restored without distortion from the stored compressed codes, annotation types, and position area information. The server can therefore meet the storage requirements of large numbers of target images while ensuring that the target images are not distorted.
Referring to fig. 9, based on the method for storing an image provided in the embodiment of the present application, the embodiment of the present application correspondingly discloses a storage device for an image, which is applied to a server, and specifically includes: a first acquisition unit 901, a determination unit 902, a second acquisition unit 903, a compression unit 904, and a storage unit 905.
A first obtaining unit 901, configured to obtain a label layer of a target image. Wherein the target image includes: an original background layer and a label layer. And the marking layer is used for displaying the marks made by the user on the defects in the original background layer.
The determining unit 902 is configured to determine, according to the feature information corresponding to each annotation type, position area information of each annotation in the annotation layer and an annotation type of each annotation.
A second obtaining unit 903, configured to obtain the first image of each annotation from the annotation layer. And the first image of the label is the image in the position area of the label in the label layer.
A compressing unit 904, configured to perform compression coding on the labeled first image for each label in the label layer, so as to obtain a compressed code of the label.
Optionally, in an embodiment of the present application, the compressed code of a label is a plurality of N-bit data corresponding to the label, where N is a positive integer. The bits in the plurality of N-bit data correspond one-to-one to the pixel points in the labeled first image, and the value of each bit is set according to the pixel value of its corresponding pixel point.
Optionally, in a specific embodiment of the present application, the compressing unit 904 includes: the device comprises a first processing subunit, a first determining subunit, a building subunit, an assigning subunit and a second determining subunit.
And the first processing subunit is used for carrying out binarization processing on the labeled first image aiming at each label in the label layer to obtain a labeled second image.
And the first determining subunit is used for determining the total number M of the N-bit data corresponding to the label according to the total pixel point number of the labeled second image. Wherein M is a positive integer.
And the constructing subunit is used for constructing M initial N-bit data corresponding to the label. The value of each bit of the initial N-bit data is a first specific value.
And the assignment subunit is used for, for each pixel point in the labeled second image whose pixel value equals a second specific value, determining from the pixel point's serial number which of the M initial N-bit data the pixel point belongs to and which bit within that N-bit data corresponds to it, and assigning that bit the second specific value.
And the second determining subunit is used for taking the M assigned N-bit data corresponding to the label as the compression coding of the label.
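The subunits above amount to packing one binarized pixel per bit. A minimal sketch, with an assumed word size of N=32 and the first and second specific values assumed to be 0 and 1:

```python
def pack_bits(binary_pixels, n=32):
    # Sketch of the bitmap encoding: each pixel of the binarized second
    # image maps to one bit of an N-bit word. All M words start with
    # every bit at the first specific value (0 here); the bit for each
    # foreground pixel (second specific value, 1) is then set from the
    # pixel's serial number.
    m = (len(binary_pixels) + n - 1) // n   # total number M of N-bit words
    words = [0] * m
    for i, pixel in enumerate(binary_pixels):
        if pixel == 1:                      # second specific value
            words[i // n] |= 1 << (i % n)   # word index i//n, bit index i%n
    return words
```

The listed compressed code is then the M assigned words; restoring the second image is the reverse mapping from bit back to pixel.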
Optionally, in a specific embodiment of the present application, the compressing unit 904 includes: the device comprises a second processing subunit, a third determining subunit and a fourth determining subunit.
And the second processing subunit is used for carrying out binarization processing on the labeled first image aiming at each label in the label layer to obtain a labeled second image.
And the third determining subunit is used for respectively representing a plurality of pixel points with continuous positions and equal pixel values in the labeled second image as specific data. Wherein the value of the specific data is determined according to the number of the represented pixel points and the pixel values of the represented pixel points.
Optionally, in a specific embodiment of the present application, the third determining subunit includes: a traversal subunit, a first counting subunit, a first calculating subunit, and a second calculating subunit.
And the traversing subunit is used for traversing all the pixel points of the labeled second image.
And the first counting subunit is used for adding one to the value of the counter if the pixel value of the traversed pixel point is equal to that of the previous pixel point. The initial value of the counter is zero.
And the first calculating subunit is used for, if the pixel value of the traversed pixel point is not equal to that of the previous pixel point and the value of the previous pixel point is the first specific value, multiplying the value of the counter by a third specific value to obtain specific data, storing the obtained specific data into the queue corresponding to the label, and adding one after the value of the counter is cleared.
And the second calculating subunit is used for, if the pixel value of the traversed pixel point is not equal to that of the previous pixel point and the value of the previous pixel point is the second specific value, multiplying the value of the counter by a fourth specific value to obtain specific data, storing the obtained specific data into the queue corresponding to the label, and adding one after the value of the counter is cleared.
And the fourth determining subunit is used for taking all the specific data representing the label as the compressed code of the label. The fourth determining subunit includes a fifth determining subunit, used for taking all the specific data stored in the queue corresponding to the label as the compressed code of the label.
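The run-length scheme carried out by these subunits can be sketched as follows. The third and fourth specific values are assumed here to be -1 and +1 (so the sign of each entry records which pixel value the run had), and an explicit flush of the final run is added, which the description implies but does not spell out:

```python
def run_length_encode(binary_pixels, zero_mark=-1, one_mark=1):
    # Sketch of the traversal: consecutive equal pixels are counted; when
    # a run ends, the counter is multiplied by the per-value marker (the
    # assumed "third"/"fourth" specific values) and appended to the queue,
    # then the counter is cleared and incremented for the current pixel.
    queue = []
    counter = 0
    for i, pixel in enumerate(binary_pixels):
        if i > 0 and pixel != binary_pixels[i - 1]:
            mark = zero_mark if binary_pixels[i - 1] == 0 else one_mark
            queue.append(counter * mark)   # close the finished run
            counter = 0                    # clear, then count the current pixel
        counter += 1
    if binary_pixels:                      # flush the final run
        queue.append(counter * (zero_mark if binary_pixels[-1] == 0 else one_mark))
    return queue
```

All the specific data accumulated in the queue is the label's compressed code; decoding simply expands each signed entry back into a run of pixels.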
A storage unit 905, configured to correspondingly store the compression code of each label, the label type of each label, the location area information of each label, and the information of the original background layer in the label layer, where the information of the original background layer is used to obtain the original background layer.
Optionally, in a specific embodiment of the present application, the storage device for the image further includes: a receiving unit and a transmitting unit.
And the receiving unit is used for receiving an acquisition request of the target image sent by the client.
The sending unit is used for sending the path of the original background layer of the target image, the compressed code of each label in the label layer, the label type of each label and the position area information of each label to the client, so that the client can obtain the original background layer from the file server through the path of the original background layer and generate the label layer through the compressed code of each label, the label type of each label and the position area information of each label in the label layer. The original background image layer is stored in a file server, and the compression code of each label, the label type of each label and the position area information of each label in the label image layer are stored in a database of a server.
The principle and the execution process of each unit and subunit in the image storage device provided in the embodiment of the present application may refer to the foregoing image storage method provided in the embodiment of the present application, and are not described herein again.
In the image storage device provided in this embodiment of the present application, after the first obtaining unit 901 obtains the annotation layer of the target image, the determining unit 902 determines, according to the feature information corresponding to each annotation type, the position area information and the annotation type of each annotation in the annotation layer, and the second obtaining unit 903 then obtains the first image of each annotation from the annotation layer. The first image of an annotation is the image within the annotation's position area in the annotation layer. The compression unit 904 then compression-encodes each annotated first image to obtain the annotation's compressed code. Finally, the storage unit 905 only needs to correspondingly store the compressed code of each annotation in the annotation layer, the annotation type of each annotation, the position area information of each annotation, and the information of the original background layer for the target image to be stored. Since each compressed code stored by the storage unit 905 covers, before compression, only a small image region of the whole annotation layer, the space occupied by storage is greatly reduced, while the annotation layer can still be restored without distortion from the stored compressed codes, annotation types, and position area information.
The embodiment of the present application also discloses a computer readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the image storage method according to any one of the above first aspects.
The embodiment of the application also discloses a storage device of the image, which comprises: one or more processors, a storage device, on which one or more programs are stored. The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of storing an image as described in any of the first aspects above.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image storage method, applied to a server side, the method comprising:
acquiring a labeling layer of a target image; wherein the target image comprises: an original background layer and the label layer; the marking layer is used for displaying marks made by users on the defects in the original background layer;
determining position area information of each label and a label type of each label in the label layer according to the characteristic information corresponding to each label type;
acquiring a first image of each label from the label layer; the first image of the label is an image in the position area of the label in the label layer;
for each label in the label layer, performing compression coding on the labeled first image to obtain a compressed code of the label;
correspondingly storing the compressed code of each label in the label layer, the label type of each label, the position area information of each label and the information of the original background layer; and the information of the original background image layer is used for acquiring the original background image layer.
2. The method of claim 1, wherein the compressed encoding of the label is a plurality of N-bit data corresponding to the label; wherein N is a positive integer; bit positions in the plurality of N-bit data corresponding to the labels correspond to pixel points in the labeled first image one by one; and the value of each bit in the N-bit data is set according to the pixel value of the pixel point corresponding to the bit.
3. The method of claim 2, wherein the compression encoding the first image of the annotation for each of the annotations in the annotation layer to obtain the compression encoding of the annotation comprises:
carrying out binarization processing on the labeled first image aiming at each label in the label layer to obtain a labeled second image;
determining the total number M of the N-bit data corresponding to the label according to the total pixel point number of the labeled second image; wherein M is a positive integer;
constructing M initial N-bit data corresponding to the labels; the value of each bit of the initial N-bit data is a first specific value;
for each pixel point in the labeled second image, if the pixel value of the pixel point is equal to a second specific value, determining N-bit data of the pixel point in M initial N-bit data and a bit corresponding to the pixel point in the N-bit data according to the serial number of the pixel point, and assigning the bit corresponding to the pixel point in the N-bit data to be the second specific value;
and taking the M assigned N-bit data corresponding to the label as the compression coding of the label.
4. The method of claim 1, wherein the performing, for each of the annotations in the annotation layer, compression encoding the first image of the annotation to obtain compression encoding of the annotation comprises:
carrying out binarization processing on the labeled first image aiming at each label in the label layer to obtain a labeled second image;
respectively representing a plurality of pixel points with continuous positions and equal pixel values in the labeled second image as specific data; wherein the value of the specific data is determined according to the number of the represented pixel points and the pixel values of the represented pixel points;
and all the specific data used for representing the label is taken as the compression coding of the label.
5. The method according to claim 4, wherein the representing, as a specific datum, a plurality of pixels that are located consecutively and have equal pixel values in the labeled second image respectively comprises:
traversing all pixel points of the labeled second image;
if the pixel value of the traversed pixel point is equal to that of the previous pixel point, adding one to the value of the counter; wherein an initial value of the counter is zero;
if the pixel value of the traversed pixel point is not equal to that of the previous pixel point and the value of the previous pixel point is a first specific value, multiplying the value of the counter by a third specific value to obtain specific data, storing the obtained specific data into a queue corresponding to the label, and adding one after the value of the counter is cleared;
if the pixel value of the traversed pixel point is not equal to that of the previous pixel point and the value of the previous pixel point is a second specific value, multiplying the value of the counter by a fourth specific value to obtain specific data, storing the obtained specific data into a queue corresponding to the label, and adding one after the value of the counter is cleared;
wherein the compression encoding of the annotation using all the specific data representing the annotation comprises:
and taking all the specific data stored in the queue corresponding to the label as the compression code of the label.
6. The method according to claim 1, wherein the information of the original background layer is a path of the original background layer; after correspondingly storing the compressed code of each label in the label layer, the label type of each label, the location area information of each label, and the information of the original background layer, the method further includes:
receiving an acquisition request of a target image sent by a client;
sending the path of the original background layer of the target image, the compressed code of each label in the label layer, the label type of each label, and the position area information of each label to the client, so that the client acquires the original background layer from a file server through the path of the original background layer, and generates a label layer through the compressed code of each label, the label type of each label, and the position area information of each label in the label layer; the original background image layer is stored in the file server, and the compressed code of each label, the label type of each label and the position area information of each label in the label image layer are stored in a database of the server.
7. The method according to claim 1, wherein the determining, according to the feature information corresponding to each annotation type, the position area information of each annotation and the annotation type of each annotation in the annotation layer comprises:
for each marking type, screening all pixel points belonging to the marking type from the marking layer according to the characteristic information of the marking type;
determining each connected domain belonging to the label type in the label layer and the position information of the connected domain according to all the screened pixel points belonging to the label type; and the position information of the connected domain is the marked position area information.
8. An image storage device, applied to a server, the image storage device comprising:
the first acquisition unit is used for acquiring an annotation layer of a target image; wherein the target image comprises: an original background layer and the label layer; the marking layer is used for displaying marks made by users on the defects in the original background layer;
the determining unit is used for determining the position area information of each label and the label type of each label in the label layer according to the characteristic information corresponding to each label type;
the second acquisition unit is used for acquiring the first image of each label from the label layer; the first image of the label is an image in the position area of the label in the label layer;
a compressing unit, configured to perform compression coding on the labeled first image for each label in the label layer to obtain a compression code of the label;
the storage unit is used for correspondingly storing the compressed codes of each label, the label type of each label, the position area information of each label and the information of the original background layer in the label layer; and the information of the original background image layer is used for acquiring the original background image layer.
9. A computer-readable medium, characterized in that a computer program is stored thereon, wherein the program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 7.
10. An image storage device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
CN202211272653.6A 2022-10-18 2022-10-18 Image storage method, device, readable medium and equipment Pending CN115578475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211272653.6A CN115578475A (en) 2022-10-18 2022-10-18 Image storage method, device, readable medium and equipment


Publications (1)

Publication Number Publication Date
CN115578475A true CN115578475A (en) 2023-01-06

Family

ID=84585469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211272653.6A Pending CN115578475A (en) 2022-10-18 2022-10-18 Image storage method, device, readable medium and equipment

Country Status (1)

Country Link
CN (1) CN115578475A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437602A (en) * 2023-12-21 2024-01-23 广州天奕技术股份有限公司 Dual-layer data calibration method, device, equipment and readable storage medium
CN117437602B (en) * 2023-12-21 2024-03-22 广州天奕技术股份有限公司 Dual-layer data calibration method, device, equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN103400099B (en) Terminal and two-dimensional code identification method
CN1130919C (en) Apparatus for encoding contour of regions contained in video signal
CN103177249B (en) Image processing apparatus and image processing method
JP6045752B2 (en) Two-dimensional code, two-dimensional code analysis system, and two-dimensional code creation system
KR102423710B1 (en) Translucent image watermark detection
JP2004140764A (en) Image processing device and method therefor
EP2608104B1 (en) Image processing device, image processing method, and image processing program
CN109754046B (en) Two-dimensional code, encoding method, decoding method, device and equipment of two-dimensional code
CN115578475A (en) Image storage method, device, readable medium and equipment
US6621932B2 (en) Video image decoding and composing method and video image decoding and composing apparatus
CN100429921C (en) Raster image path architecture
CN111178445A (en) Image processing method and device
CN111145180A (en) Map tile processing method applied to large visual screen and related device
JP3514050B2 (en) Image processing device
CN112819694A (en) Video image splicing method and device
JP7156527B2 (en) Road surface inspection device, road surface inspection method, and program
AU2013211451A1 (en) Method of processing graphics with limited memory
CN111669477B (en) Image processing method, system, device, equipment and computer storage medium
JP3095071B2 (en) Pattern matching encoding apparatus and encoding method therefor
EP3750300A1 (en) Encoding dot patterns into printed images based on source pixel color
CN109146766B (en) Object selection method and device
CN113645484A (en) Data visualization accelerated rendering method based on graphic processor
JPH04287179A (en) Method and device for processing image information and image information transmitter
JP4446797B2 (en) Document restoration apparatus, document restoration method, document restoration program, and recording medium
CN111488752A (en) Two-dimensional code identification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination