CN115063299B - Image preprocessing method and device, electronic equipment and storage medium - Google Patents
Image preprocessing method and device, electronic equipment and storage medium
- Publication number
- CN115063299B CN115063299B CN202210995819.0A CN202210995819A CN115063299B CN 115063299 B CN115063299 B CN 115063299B CN 202210995819 A CN202210995819 A CN 202210995819A CN 115063299 B CN115063299 B CN 115063299B
- Authority
- CN
- China
- Prior art keywords
- image
- processed
- filling
- standard model
- height
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4023—Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/32—Normalisation of the pattern dimensions
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The present disclosure provides an image preprocessing method, apparatus, electronic device, and storage medium, the method comprising: determining a filling mode according to the width ratio and the height ratio between a standard model and an image to be processed; calculating, according to the determined filling mode, the scaling size and the filling offset for scaling the image to be processed; scaling the image to be processed based on the scaling size to obtain a scaled image; and calculating a coordinate mapping relation based on the filling offset, and filling the scaled image into the standard model based on the color values of the R, G and B channels of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image. In this way, a mapping relation between the image to be processed and the standard model is established, so that the image input into the model needs to be traversed only once to realize both scaling and filling, which improves image processing efficiency.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image preprocessing method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of computer vision, more and more computer-vision models are being deployed in production. Deployment places extremely high demands on model inference speed, and the image preprocessing stage accounts for an important share of a model's time consumption. During inference of a target detection algorithm, the aspect ratio of the input image usually needs to remain fixed, and some target detection models require a fixed input size; therefore, an image to be processed must be filled and scaled before being input into a model for detection.
Generally, the main technique for filling and scaling an image is the set of functions under the Open Source Computer Vision Library (OpenCV) framework. The existing functions under the OpenCV framework are pre-packaged: to realize a user-defined filling and scaling operation, the functions provided by the framework must be called step by step, and the operation cannot be completed in a single step. This is inflexible, cumbersome and time-consuming, and greatly reduces the efficiency of image preprocessing.
Disclosure of Invention
The embodiment of the disclosure at least provides an image preprocessing method, an image preprocessing device, electronic equipment and a storage medium. Thereby, a mapping relation between the image to be processed and the standard model is established, so that the image input into the model needs to be traversed only once to realize both scaling and filling, which improves image processing efficiency.
The embodiment of the disclosure provides an image preprocessing method, which comprises the following steps:
acquiring an image to be processed, the size of the image to be processed and the size of a standard model which are input by a user; the standard model is a model for preprocessing the image to be processed;
determining a filling mode for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed; the filling mode comprises an up-down filling mode and a left-right filling mode;
calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; the filling offset is the number of pixel points for filling the image to be processed;
scaling the image to be processed based on the calculated scaling size to obtain a scaled image;
and calculating a coordinate mapping relation between a coordinate corresponding to a pixel point of the scaled image and a coordinate corresponding to a pixel point of the image to be processed according to the filling offset, and filling the scaled image into the standard model based on the color value of each channel of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image.
In an optional implementation manner, the determining, according to the width ratio and the height ratio between the standard model and the image to be processed, a filling manner for image filling of the image to be processed includes:
calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
calculating the height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
judging whether the height ratio is larger than the width ratio or not;
if so, determining the up-down filling mode as a filling mode for image filling aiming at the image to be processed;
and if not, determining the left and right filling modes as the filling modes for filling the image to be processed.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the up-down filling manner, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:

w′ = W_m;  h′ = r_w · H;  Δ = ((H_m − h′) / 2) · W_m

wherein w′ is the scaled width of the image to be processed; W_m is the width of the standard model; h′ is the scaled height of the image to be processed; r_w is the width ratio between the standard model and the image to be processed; H is the height of the image to be processed; Δ is the filling offset of each pixel point of the image to be processed corresponding to the up-down filling mode; and H_m is the height of the standard model.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the left-right filling manner, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:

w′ = r_h · W;  h′ = H_m;  Δ = (W_m − w′) / 2

wherein w′ is the scaled width of the image to be processed; r_h is the height ratio between the standard model and the image to be processed; W is the width of the image to be processed; h′ is the scaled height of the image to be processed; H_m is the height of the standard model; W_m is the width of the standard model; and Δ is the filling offset of each pixel point of the image to be processed corresponding to the left-right filling mode.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the up-down filling manner, the coordinate mapping relation between the scaled image and the image to be processed is calculated according to the following formulas:

x = x_s / r_w;  y = (y_s − δ) / r_w

wherein x is the abscissa of the pixel point of the image to be processed; y is the ordinate of the pixel point of the image to be processed; x_s is the abscissa of the pixel point of the scaled image; y_s is the ordinate of the pixel point of the scaled image; r_w is the ratio of the width of the standard model to the width of the image to be processed; and δ is the number of filled rows on one side, obtained from the filling offset.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the left-right filling manner, the coordinate mapping relation between the scaled image and the image to be processed is calculated according to the following formulas:

x = (x_s − δ) / r_h;  y = y_s / r_h

wherein x is the abscissa of the pixel point of the image to be processed; y is the ordinate of the pixel point of the image to be processed; x_s is the abscissa of the pixel point of the scaled image; y_s is the ordinate of the pixel point of the scaled image; r_h is the ratio of the height of the standard model to the height of the image to be processed; and δ is the number of filled columns on one side, obtained from the filling offset.
The embodiment of the present disclosure further provides an image preprocessing apparatus, which includes:
the acquisition module is used for acquiring an image to be processed, the size of the image to be processed and the size of the standard model which are input by a user; the standard model is a model for preprocessing the image to be processed;
a filling mode determining module, configured to determine a filling mode for image filling on the image to be processed according to a width ratio and a height ratio between the standard model and the image to be processed; the filling mode comprises an up-down filling mode and a left-right filling mode;
the calculation module is used for calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; the filling offset is the number of pixel points for filling the image to be processed;
the scaling module is used for scaling the image to be processed based on the calculated scaling size to obtain a scaled image;
and the filling module is used for calculating a coordinate mapping relation between coordinates corresponding to the pixel points of the scaled image and coordinates corresponding to the pixel points of the image to be processed according to the filling offset, and filling the scaled image into the standard model based on the color value of each channel of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image.
In an optional embodiment, the filling manner determining module is specifically configured to:
calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
calculating the height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
judging whether the height ratio is larger than the width ratio or not;
if so, determining the up-down filling mode as a filling mode for filling the image to be processed;
and if not, determining the left and right filling modes as filling modes for filling the image to be processed.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the up-down filling manner, the calculating module is configured to calculate the scaling size and the filling offset for scaling the image to be processed according to the following formulas:

w′ = W_m;  h′ = r_w · H;  Δ = ((H_m − h′) / 2) · W_m

wherein w′ is the scaled width of the image to be processed; W_m is the width of the standard model; h′ is the scaled height of the image to be processed; r_w is the width ratio between the standard model and the image to be processed; H is the height of the image to be processed; Δ is the filling offset of each pixel point of the image to be processed corresponding to the up-down filling mode; and H_m is the height of the standard model.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the left-right filling manner, the calculating module is configured to calculate the scaling size and the filling offset for scaling the image to be processed according to the following formulas:

w′ = r_h · W;  h′ = H_m;  Δ = (W_m − w′) / 2

wherein w′ is the scaled width of the image to be processed; r_h is the height ratio between the standard model and the image to be processed; W is the width of the image to be processed; h′ is the scaled height of the image to be processed; H_m is the height of the standard model; W_m is the width of the standard model; and Δ is the filling offset of each pixel point of the image to be processed corresponding to the left-right filling mode.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the up-down filling manner, the filling module is configured to calculate the coordinate mapping relation between the scaled image and the image to be processed according to the following formulas:

x = x_s / r_w;  y = (y_s − δ) / r_w

wherein x is the abscissa of the pixel point of the image to be processed; y is the ordinate of the pixel point of the image to be processed; x_s is the abscissa of the pixel point of the scaled image; y_s is the ordinate of the pixel point of the scaled image; r_w is the ratio of the width of the standard model to the width of the image to be processed; and δ is the number of filled rows on one side, obtained from the filling offset.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the left-right filling manner, the filling module is configured to calculate the coordinate mapping relation between the scaled image and the image to be processed according to the following formulas:

x = (x_s − δ) / r_h;  y = y_s / r_h

wherein x is the abscissa of the pixel point of the image to be processed; y is the ordinate of the pixel point of the image to be processed; x_s is the abscissa of the pixel point of the scaled image; y_s is the ordinate of the pixel point of the scaled image; r_h is the ratio of the height of the standard model to the height of the image to be processed; and δ is the number of filled columns on one side, obtained from the filling offset.
An embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate with each other via the bus when the electronic device is running, and the machine-readable instructions are executed by the processor to perform the steps of the above embodiments.
The disclosed embodiments also provide a computer storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps in the above embodiments.
The image preprocessing method, device, equipment and storage medium provided by the embodiments of the present disclosure acquire the image to be processed, the size of the image to be processed and the size of the standard model input by a user; determine a filling mode for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed; calculate the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; scale the image to be processed based on the calculated scaling size to obtain a scaled image; and calculate a coordinate mapping relation between coordinates corresponding to pixel points of the scaled image and coordinates corresponding to pixel points of the image to be processed according to the filling offset, and fill the scaled image into the standard model based on the color values of each channel of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image.
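The pipeline summarized above claims that one traversal of the standard-model grid suffices for both scaling and filling. A minimal pure-Python sketch of such a single-pass scale-and-fill follows; the pad value 114, nearest-neighbour sampling and the rows-of-RGB-tuples image representation are assumptions for illustration, not taken from the source:

```python
def letterbox_fill(src, model_w, model_h, pad_value=(114, 114, 114)):
    """Scale-and-fill `src` into a model_w x model_h canvas in one traversal.

    `src` is a row-major list of rows of (R, G, B) tuples. Each target pixel
    is either pulled from the source via the inverse coordinate mapping or
    set to `pad_value`, so the target grid is visited exactly once.
    """
    img_h, img_w = len(src), len(src[0])
    width_ratio, height_ratio = model_w / img_w, model_h / img_h
    if height_ratio > width_ratio:              # up-down filling mode
        ratio = width_ratio
        new_w, new_h = model_w, int(img_h * ratio)
        pad_x, pad_y = 0, (model_h - new_h) // 2
    else:                                       # left-right filling mode
        ratio = height_ratio
        new_w, new_h = int(img_w * ratio), model_h
        pad_x, pad_y = (model_w - new_w) // 2, 0
    out = []
    for y_t in range(model_h):
        row = []
        for x_t in range(model_w):
            inside = (pad_x <= x_t < pad_x + new_w
                      and pad_y <= y_t < pad_y + new_h)
            if inside:
                # inverse mapping into the image to be processed
                x_s = int((x_t - pad_x) / ratio)
                y_s = int((y_t - pad_y) / ratio)
                row.append(src[y_s][x_s])
            else:
                row.append(pad_value)
        out.append(row)
    return out
```

For a 2-row, 4-column source filled into a 4×4 canvas, the height ratio (2.0) exceeds the width ratio (1.0), so one pad row is added above and below the unscaled source rows.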
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art may derive other related drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating an image preprocessing method provided by an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a specific method for determining a filling manner for image filling of the image to be processed in an image preprocessing method provided by the embodiment of the present disclosure;
FIG. 3 is a schematic filling diagram illustrating an up-down filling manner provided by an embodiment of the disclosure;
FIG. 4 is a schematic diagram illustrating left and right filling provided by the embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating an image preprocessing apparatus provided by an embodiment of the present disclosure;
fig. 6 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Illustration of the drawings:
500-image preprocessing device, 501-acquisition module, 502-filling mode determination module, 503-calculation module, 504-scaling module, 505-filling module, 600-electronic device, 610-processor, 620-memory, 621-memory, 622-external memory, 630-bus.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship and means that three relationships may exist; e.g., A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B and C may mean including any one or more elements selected from the set consisting of A, B and C.
Research has shown that the main technique for filling and scaling an image is the set of functions under the OpenCV framework. The existing functions under the OpenCV framework are pre-packaged: to realize a user-defined filling and scaling operation, the functions provided by the framework must be called step by step, and the operation cannot be completed in a single step. This is inflexible, cumbersome and time-consuming, and greatly reduces the efficiency of image preprocessing.
Based on the above research, the present disclosure provides an image preprocessing method, apparatus, electronic device, and storage medium, wherein the method includes: acquiring an image to be processed, the size of the image to be processed and the size of a standard model input by a user; determining a filling mode for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed; calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; scaling the image to be processed based on the calculated scaling size to obtain a scaled image; and calculating a coordinate mapping relation between coordinates corresponding to pixel points of the scaled image and coordinates corresponding to pixel points of the image to be processed according to the filling offset, and filling the scaled image into the standard model based on the color values of all channels of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image. In this way, a mapping relation between the image to be processed and the standard model is established, so that the image input into the model needs to be traversed only once to realize both scaling and filling, which improves image processing efficiency.
To facilitate understanding of the present embodiment, first, an image preprocessing method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the image preprocessing method provided in the embodiments of the present disclosure is generally a computer device with certain computing power, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a terminal computing device, or a server or other processing device. In some possible implementations, the image pre-processing method may be implemented by a processor invoking computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of an image preprocessing method provided in the embodiment of the present disclosure is shown, where the method includes steps S101 to S105, where:
s101: and acquiring the image to be processed, the size of the image to be processed and the size of the standard model which are input by a user.
Here, before a deep learning model is trained or used for inference, the image needs to be preprocessed to ensure that the image and the model are adapted to each other.
The image to be processed may be a sample image required for training the model, or an image to be processed by the deep learning model.
And the standard model is a model for preprocessing the image to be processed.
Illustratively, a to-be-processed image A input by a user is acquired; the size of the image to be processed is height 1024px and width 768px, and the size of the standard model is height 640px and width 640px.
S102: and determining a filling mode for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed.
After the size of the image to be processed and the size of the standard model are obtained, the filling mode of the image to be processed is determined from these two sizes, and the image to be processed is filled based on the determined filling mode so as to expand its size to the size of the standard model. The size mapping relation between the standard model and the image to be processed can be obtained from the width ratio and the height ratio between the two.
The filling modes comprise an up-down filling mode and a left-right filling mode.
Further, referring to fig. 2, a flowchart of a specific method for determining a filling manner for filling the image to be processed in the image preprocessing method according to the embodiment of the present disclosure is shown, where the method includes steps S1021 to S1025, where:
S1021: calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
S1022: calculating the height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
S1023: judging whether the height ratio is larger than the width ratio;
S1024: if so, determining the up-down filling mode as the filling mode for filling the image to be processed;
S1025: if not, determining the left-right filling mode as the filling mode for filling the image to be processed.
Two filling modes are set in advance. According to the size of the image to be processed and the size of the standard model input by the user, the width ratio between the width of the standard model and the width of the image to be processed is calculated, as is the height ratio between the height of the standard model and the height of the image to be processed. The width ratio and the height ratio are then compared using a comparison function: if the height ratio is greater than the width ratio, the up-down filling mode shown in fig. 3 is determined as the filling mode for the image to be processed; if the height ratio is smaller than the width ratio, the left-right filling mode shown in fig. 4 is determined. Selecting between the two filling modes according to the width ratio and the height ratio improves the filling efficiency of the image to be processed.
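The comparison described above can be sketched as follows; the function name and the mode labels are illustrative, not from the source:

```python
def choose_fill_mode(img_w, img_h, model_w, model_h):
    """Pick the filling mode by comparing the height ratio and the width
    ratio between the standard model and the image to be processed."""
    width_ratio = model_w / img_w
    height_ratio = model_h / img_h
    # height ratio larger -> fill above and below; otherwise fill left/right
    return "up-down" if height_ratio > width_ratio else "left-right"
```

With a 640×640 standard model, a 768px-high, 1024px-wide image yields ratios 640/768 > 640/1024 and selects up-down filling; swapping the image dimensions selects left-right filling.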
Illustratively, an image to be processed input by a user is acquired; its size is height 768px and width 1024px, and the size of the standard model is height 640px and width 640px. The height ratio = 640/768 and the width ratio = 640/1024; the height ratio is larger than the width ratio, so the up-down filling mode is selected.
Illustratively, a to-be-processed image A input by a user is acquired; its size is height 1024px and width 768px, and the size of the standard model is height 640px and width 640px. The height ratio = 640/1024 and the width ratio = 640/768; the width ratio is greater than the height ratio, so the left-right filling mode is selected.
S103: and calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed.
Here, after the filling manner of the image to be processed is determined, the scaling size and the filling offset amount of the image to be processed are calculated to scale and fill the image to be processed according to the calculated scaling size and filling offset amount.
And the filling offset is the number of pixel points for filling the image to be processed.
And each determined filling mode has a corresponding mode for calculating the scaling size and the filling offset.
Further, in an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the up-down filling manner, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:

w′ = W_m;  h′ = r_w · H;  Δ = ((H_m − h′) / 2) · W_m

wherein w′ is the scaled width of the image to be processed; W_m is the width of the standard model; h′ is the scaled height of the image to be processed; r_w is the width ratio between the standard model and the image to be processed; H is the height of the image to be processed; Δ is the filling offset of each pixel point of the image to be processed corresponding to the up-down filling mode; and H_m is the height of the standard model.
Illustratively, for a to-be-processed image input by a user with size: height 768px, width 1024px, and a standard model of size: height 640px, width 640px, the height ratio 640/768 is greater than the width ratio 640/1024, so the top-bottom filling mode is selected. Then $w'$ is 640px, $h'$ is 768 × (640/1024) = 480px, and $P_{ud}$ is 640 × (640 − 480)/2 = 51200.
In an optional implementation manner, if it is determined that the filling manner for image filling of the image to be processed is a left-right filling manner, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:
$$w' = r_h \cdot w,\qquad h' = H_s,\qquad P_{lr} = \frac{W_s - w'}{2}$$

wherein $w'$ is the scaled width of the image to be processed; $r_h$ is the height ratio of the standard model to the image to be processed; $w$ is the width of the image to be processed; $h'$ is the scaled height of the image to be processed; $H_s$ is the height of the standard model; $W_s$ is the width of the standard model; and $P_{lr}$ is the filling offset of each pixel point of the image to be processed corresponding to the left-right filling mode.
Illustratively, for an image to be processed input by a user with size: height 1024px, width 768px, and a standard model of size: height 640px, width 640px, the height ratio 640/1024 is smaller than the width ratio 640/768, so the left-right filling mode is selected. Then $h'$ is 640px, $w'$ is 768 × (640/1024) = 480px, and $P_{lr}$ is (640 − 480)/2 = 80.
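The scaling-size and offset computations for both modes can be sketched together (a hypothetical helper; note the two offset conventions follow the worked examples above — a linear pixel offset for top-bottom filling and a per-row pixel offset for left-right filling):

```python
def scaled_size_and_offset(img_h, img_w, std_h, std_w):
    """Return (scaled_h, scaled_w, fill_offset) for fitting the image
    into a std_h x std_w standard model, per the two filling modes."""
    height_ratio = std_h / img_h
    width_ratio = std_w / img_w
    if height_ratio > width_ratio:                 # top-bottom filling
        scaled_w = std_w
        scaled_h = round(img_h * width_ratio)
        offset = std_w * (std_h - scaled_h) // 2   # linear pixel offset
    else:                                          # left-right filling
        scaled_h = std_h
        scaled_w = round(img_w * height_ratio)
        offset = (std_w - scaled_w) // 2           # per-row pixel offset
    return scaled_h, scaled_w, offset

print(scaled_size_and_offset(768, 1024, 640, 640))   # (480, 640, 51200)
print(scaled_size_and_offset(1024, 768, 640, 640))   # (640, 480, 80)
```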
S104: and zooming the image to be processed based on the calculated zooming size to obtain a zoomed image.
Here, after scaling, the to-be-processed image may be placed at the center position of the standard model, at the top position of the standard model, or at the bottom position of the standard model.
The scaled image is the image obtained after resizing the image to be processed; it may leave an area of the standard model that still needs to be filled.
S105: and calculating a coordinate mapping relation between a coordinate corresponding to a pixel point of the zoomed image and a coordinate corresponding to a pixel point of the image to be processed according to the filling offset, and filling the zoomed image in the standard model based on the color value of each channel of the image to be processed and the coordinate mapping relation to obtain a target image subjected to preprocessing.
Here, different filling modes use different formulas for calculating the coordinate mapping relationship.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is an up-down filling manner, the coordinate mapping relationship between the scaled image and the image to be processed is calculated according to the following formula:
$$x' = r_w \cdot x,\qquad y' = r_w \cdot y + \frac{H_s - r_w \cdot h}{2}$$

wherein $x$ is the abscissa of a pixel point of the image to be processed; $y$ is the ordinate of the pixel point of the image to be processed; $x'$ is the abscissa of the corresponding pixel point of the scaled image; $y'$ is the ordinate of the corresponding pixel point of the scaled image; and $r_w$ is the ratio of the width of the standard model to the width of the image to be processed. Illustratively, for a to-be-processed image of height 768px and width 1024px and a standard model of 640px × 640px, the top-bottom filling mode is selected as above. For the pixel point A at (100, 100), its mapping point C in the scaled image has x-axis coordinate C_x = 100 × (640/1024) ≈ 63 and y-axis coordinate C_y = 100 × (640/1024) + (640 − 768 × (640/1024))/2 = 63 + 80 = 143, so the coordinate of C is (63, 143).
In an optional embodiment, if it is determined that the filling manner for image filling of the image to be processed is a left-right filling manner, the coordinate mapping relationship between the scaled image and the image to be processed is calculated according to the following formula:
$$x' = r_h \cdot x + \frac{W_s - r_h \cdot w}{2},\qquad y' = r_h \cdot y$$

wherein $x$ is the abscissa of a pixel point of the image to be processed; $y$ is the ordinate of the pixel point of the image to be processed; $x'$ is the abscissa of the corresponding pixel point of the scaled image; $y'$ is the ordinate of the corresponding pixel point of the scaled image; and $r_h$ is the ratio of the height of the standard model to the height of the image to be processed.
Illustratively, for a to-be-processed image of height 1024px and width 768px and a standard model of 640px × 640px, the left-right filling mode is selected as above. For the pixel point A at (100, 100), its mapping point B in the scaled image has y-axis coordinate B_y = 100 × (640/1024) ≈ 63 and x-axis coordinate B_x = 100 × (640/1024) + (640 − 768 × (640/1024))/2 = 63 + 80 = 143, so the coordinate of B is (143, 63).
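Both mapping formulas can be sketched as one hypothetical helper; round-half-up is assumed so that 62.5 maps to 63, as in the worked examples:

```python
def map_coordinate(x, y, img_h, img_w, std_h, std_w):
    """Map source pixel (x, y) to its position on the padded standard canvas."""
    def half_up(v):                  # round half up, so 62.5 -> 63
        return int(v + 0.5)
    height_ratio = std_h / img_h
    width_ratio = std_w / img_w
    if height_ratio > width_ratio:   # top-bottom filling
        r = width_ratio
        return half_up(x * r), half_up(y * r + (std_h - img_h * r) / 2)
    r = height_ratio                 # left-right filling
    return half_up(x * r + (std_w - img_w * r) / 2), half_up(y * r)

print(map_coordinate(100, 100, 768, 1024, 640, 640))   # (63, 143)
print(map_coordinate(100, 100, 1024, 768, 640, 640))   # (143, 63)
```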
Further, in an optional implementation manner, the color values of the channels R, G, and B of the image to be processed are calculated according to the following calculation formula:
$$L_R = y \cdot w + x,\qquad L_G = w \cdot h + y \cdot w + x,\qquad L_B = 2 \cdot w \cdot h + y \cdot w + x$$

wherein $L_R$ is the linear coordinate of a pixel point of the image to be processed in the R channel, $L_G$ is its linear coordinate in the G channel, and $L_B$ is its linear coordinate in the B channel, for a planar channel layout; $I$ is the input image to be processed, so the color values of the R, G and B channels are read as $R = I(L_R)$, $G = I(L_G)$ and $B = I(L_B)$.
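Assuming the planar R, G, B layout implied by the worked examples (all R values, then all G values, then all B values), the linear channel coordinates can be sketched as (hypothetical helper):

```python
def planar_linear_coords(x, y, h, w):
    """Linear offsets of pixel (x, y) in a flat planar buffer laid out as
    the R plane, then the G plane, then the B plane, each h*w long."""
    base = y * w + x
    return base, h * w + base, 2 * h * w + base

# Pixel A = (100, 100) in an image of width 1024 and height 768:
print(planar_linear_coords(100, 100, 768, 1024))   # (102500, 888932, 1675364)
```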
In an alternative embodiment, the filling of the scaled image in the standard model is calculated according to the following calculation:
$$O(L'_R) = I(L_R),\qquad O(L'_G) = I(L_G),\qquad O(L'_B) = I(L_B)$$

wherein $O$ is the output scaled image; and $L'_R$, $L'_G$ and $L'_B$ are the linear coordinates, in the R, G and B channels of the scaled image, of the pixel points to which each pixel point of the image to be processed is mapped.
Illustratively, for a to-be-processed image of height 768px and width 1024px and a standard model of 640px × 640px, the top-bottom filling mode is selected, with $w'$ = 640px, $h'$ = 480px and $P_{ud}$ = 51200. The linear coordinates of the pixel point A at (100, 100) in the R, G and B channels of the image to be processed are: R_A = 100 × 1024 + 100; G_A = 768 × 1024 + 100 × 1024 + 100; B_A = 768 × 1024 × 2 + 100 × 1024 + 100. The mapping point C of A in the scaled image is at (63, 143), so the linear coordinates of C are: R_C = 143 × 640 + 63 = $P_{ud}$ + 63 × 640 + 63; G_C = 640 × 640 + 143 × 640 + 63; B_C = 640 × 640 × 2 + 143 × 640 + 63. The pixel value of the pixel point A is then assigned to the pixel point C channel by channel.
Illustratively, for an image to be processed of height 1024px and width 768px and a standard model of 640px × 640px, the left-right filling mode is selected, with $h'$ = 640px, $w'$ = 480px and $P_{lr}$ = 80. The linear coordinates of the pixel point A at (100, 100) in the R, G and B channels of the image to be processed are: R_A = 100 × 768 + 100; G_A = 1024 × 768 + 100 × 768 + 100; B_A = 1024 × 768 × 2 + 100 × 768 + 100. The mapping point B of A in the scaled image is at (143, 63), so the linear coordinates of B are: R_B = 63 × 640 + 143; G_B = 640 × 640 + 63 × 640 + 143; B_B = 640 × 640 × 2 + 63 × 640 + 143. The pixel value of the pixel point A is then assigned to the pixel point B channel by channel.
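Putting the pieces together, here is a minimal single-pass letterbox sketch in pure Python. Nearest-neighbour sampling and the grey pad value are assumptions, not from the patent; images are represented as lists of rows of (r, g, b) tuples. Each canvas pixel is produced exactly once by inverting the coordinate mapping:

```python
def letterbox(img, std_h, std_w, pad=(114, 114, 114)):
    """Scale img to fit a std_h x std_w canvas, centred, padding the rest."""
    img_h, img_w = len(img), len(img[0])
    r = min(std_h / img_h, std_w / img_w)    # limiting scale ratio
    dx = (std_w - img_w * r) / 2             # left-right offset
    dy = (std_h - img_h * r) / 2             # top-bottom offset
    canvas = []
    for y in range(std_h):
        row = []
        for x in range(std_w):
            # invert the mapping: find the source pixel covering (x, y)
            sx, sy = int((x - dx) / r), int((y - dy) / r)
            inside = (x - dx) >= 0 and sx < img_w and (y - dy) >= 0 and sy < img_h
            row.append(img[sy][sx] if inside else pad)
        canvas.append(row)
    return canvas

# A 2x4 image into a 4x4 canvas: r = 1, one padded row above and below.
img = [[(i, i, i) for i in range(4)] for _ in range(2)]
out = letterbox(img, 4, 4)
```

In a production setting this double loop would typically be replaced by a vectorised resize plus border fill, but the loop makes the per-pixel mapping explicit.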
The image preprocessing method disclosed by the embodiment of the present disclosure thus comprises: acquiring the image to be processed, the size of the image to be processed and the size of the standard model input by a user; determining a filling mode for image filling of the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed; calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode; scaling the image to be processed based on the calculated scaling size to obtain a scaled image; and calculating the coordinate mapping relationship between the coordinates of the pixel points of the scaled image and the coordinates of the pixel points of the image to be processed according to the filling offset, and filling the scaled image into the standard model based on the color values of each channel of the image to be processed and the coordinate mapping relationship, so as to obtain the preprocessed target image. In this way, a mapping relationship between the image to be processed and the standard model is established, so that the image to be processed needs to be traversed only once to be scaled and filled for model input, which improves image processing efficiency.
It will be understood by those skilled in the art that the order in which the steps of the method of the present invention are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, an image preprocessing apparatus corresponding to the image preprocessing method is also provided in the embodiments of the present disclosure. Since the principle by which the apparatus solves the problem is similar to that of the image preprocessing method described above, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image preprocessing apparatus according to an embodiment of the disclosure. As shown in fig. 5, an image preprocessing apparatus 500 provided by an embodiment of the present disclosure includes:
an obtaining module 501, configured to obtain an image to be processed, a size of the image to be processed, and a size of the standard model, which are input by a user; the standard model is a model for preprocessing the image to be processed;
a filling manner determining module 502, configured to determine a filling manner for image filling on the image to be processed according to a width ratio and a height ratio between the standard model and the image to be processed; the filling mode comprises an up-down filling mode and a left-right filling mode;
a calculating module 503, configured to calculate a scaling size and a filling offset for scaling the image to be processed according to the determined filling manner corresponding to the image to be processed; the filling offset is the number of pixel points for filling the image to be processed;
a scaling module 504, configured to scale the image to be processed based on the calculated scaling size to obtain a scaled image;
and a filling module 505, configured to calculate a coordinate mapping relationship between a coordinate corresponding to a pixel point of the zoomed image and a coordinate corresponding to a pixel point of the image to be processed according to the filling offset, and fill the zoomed image in the standard model based on a color value of each channel of the image to be processed and the coordinate mapping relationship, so as to obtain a target image after preprocessing.
In an optional implementation manner, the filling manner determining module 502 is specifically configured to:
calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
calculating a height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
judging whether the height ratio is larger than the width ratio or not;
if so, determining the up-down filling mode as a filling mode for image filling aiming at the image to be processed;
and if not, determining the left and right filling modes as the filling modes for filling the image to be processed.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is an up-down filling manner, the calculating module 503 is configured to calculate a scaling size and a filling offset for scaling the image to be processed according to the following formulas:
$$w' = W_s,\qquad h' = r_w \cdot h,\qquad P_{ud} = W_s \cdot \frac{H_s - h'}{2}$$

wherein $w'$ is the scaled width of the image to be processed; $W_s$ is the width of the standard model; $h'$ is the scaled height of the image to be processed; $r_w$ is the width ratio of the standard model to the image to be processed; $h$ is the height of the image to be processed; $P_{ud}$ is the filling offset of each pixel point of the image to be processed corresponding to the top-bottom filling mode; and $H_s$ is the height of the standard model.
In an optional implementation manner, if it is determined that the filling manner for image filling of the image to be processed is a left-right filling manner, the calculating module 503 is configured to calculate a scaling size and a filling offset for scaling the image to be processed according to the following formulas:
$$w' = r_h \cdot w,\qquad h' = H_s,\qquad P_{lr} = \frac{W_s - w'}{2}$$

wherein $w'$ is the scaled width of the image to be processed; $r_h$ is the height ratio of the standard model to the image to be processed; $w$ is the width of the image to be processed; $h'$ is the scaled height of the image to be processed; $H_s$ is the height of the standard model; $W_s$ is the width of the standard model; and $P_{lr}$ is the filling offset of each pixel point of the image to be processed corresponding to the left-right filling mode.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is an up-down filling manner, the filling module is configured to calculate a coordinate mapping relationship between the scaled image and the image to be processed according to the following formula:
$$x' = r_w \cdot x,\qquad y' = r_w \cdot y + \frac{H_s - r_w \cdot h}{2}$$

wherein $x$ is the abscissa of a pixel point of the image to be processed; $y$ is the ordinate of the pixel point of the image to be processed; $x'$ is the abscissa of the corresponding pixel point of the scaled image; $y'$ is the ordinate of the corresponding pixel point of the scaled image; and $r_w$ is the ratio of the width of the standard model to the width of the image to be processed.
In an optional implementation manner, the filling module 505 is configured to calculate a coordinate mapping relationship between the scaled image and the image to be processed according to the following formula if it is determined that the filling manner for image filling of the image to be processed is a left-right filling manner:
$$x' = r_h \cdot x + \frac{W_s - r_h \cdot w}{2},\qquad y' = r_h \cdot y$$

wherein $x$ is the abscissa of a pixel point of the image to be processed; $y$ is the ordinate of the pixel point of the image to be processed; $x'$ is the abscissa of the corresponding pixel point of the scaled image; $y'$ is the ordinate of the corresponding pixel point of the scaled image; and $r_h$ is the ratio of the height of the standard model to the height of the image to be processed.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
The image preprocessing device disclosed by the embodiment of the disclosure comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed, the size of the image to be processed and the size of a standard model which are input by a user; the standard model is a model for preprocessing the image to be processed; a filling mode determining module, configured to determine a filling mode for image filling on the image to be processed according to a width ratio and a height ratio between the standard model and the image to be processed; wherein the filling modes comprise an up-down filling mode and a left-right filling mode; the calculation module is used for calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; the filling offset is the number of pixel points for filling the image to be processed; the zooming module is used for zooming the image to be processed based on the calculated zooming size to obtain a zoomed image; and the filling module is used for calculating a coordinate mapping relation between a coordinate corresponding to the pixel point of the zoomed image and a coordinate corresponding to the pixel point of the image to be processed according to the filling offset, and filling the zoomed image in the standard model based on the color value of each channel of the image to be processed and the coordinate mapping relation to obtain a target image which is subjected to preprocessing. Therefore, the mapping relation between the image to be processed and the standard model is established, the image to be processed can be input into the model and only traversed once, the zooming and filling of the image can be realized, and the image processing efficiency is improved.
Based on the same technical concept, the embodiment of the application also provides the electronic equipment. An embodiment of the present disclosure further provides an electronic device 600, as shown in fig. 6, which is a schematic structural diagram of the electronic device 600 provided in the embodiment of the present disclosure, and includes:
a processor 610, a memory 620, and a bus 630; the memory 620 is used for storing execution instructions and includes an internal memory 621 and an external memory 622. The internal memory 621 temporarily stores operation data in the processor 610 and data exchanged with the external memory 622, such as a hard disk; the processor 610 exchanges data with the external memory 622 through the internal memory 621. When the electronic device 600 operates, the processor 610 and the memory 620 communicate through the bus 630, so that the processor 610 can execute the steps of the image preprocessing method shown in the above method embodiments.
The embodiments of the present disclosure also provide a computer storage medium, where a computer program is stored on the computer storage medium, and when the computer program is executed by a processor, the steps of the image preprocessing method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product bears a program code, and instructions included in the program code may be used to execute the steps of the image preprocessing method described in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the electronic device, the storage medium and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed electronic device, storage medium, apparatus, and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division into units is only one kind of logical function division, and other division manners are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used to illustrate the technical solutions of the present disclosure, but not to limit the technical solutions, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that: those skilled in the art can still make modifications or changes to the embodiments described in the foregoing embodiments, or make equivalent substitutions for some of the technical features, within the technical scope of the disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (8)
1. A method of image pre-processing, the method comprising:
acquiring an image to be processed, the size of the image to be processed and the size of a standard model which are input by a user; the standard model is a model for preprocessing the image to be processed;
determining a filling mode for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed; the filling mode comprises an up-down filling mode and a left-right filling mode;
calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; the filling offset is the number of pixel points for filling the image to be processed;
zooming the image to be processed based on the calculated zooming size to obtain a zoomed image;
calculating a coordinate mapping relation between coordinates corresponding to pixel points of the zoomed image and coordinates corresponding to pixel points of the image to be processed according to the filling offset, and filling the zoomed image in the standard model based on color values of all channels of the image to be processed and the coordinate mapping relation to obtain a target image which is subjected to preprocessing;
the determining a filling mode for image filling of the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed includes:
calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
calculating a height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
judging whether the height ratio is larger than the width ratio or not;
if so, determining the up-down filling mode as a filling mode for image filling aiming at the image to be processed;
and if not, determining the left and right filling modes as filling modes for filling the image to be processed.
2. The method according to claim 1, wherein if it is determined that the filling manner for image filling of the image to be processed is a vertical filling manner, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:
$$w' = W_s,\qquad h' = r_w \cdot h,\qquad P_{ud} = W_s \cdot \frac{H_s - h'}{2}$$

wherein $w'$ is the scaled width of the image to be processed; $W_s$ is the width of the standard model; $h'$ is the scaled height of the image to be processed; $r_w$ is the width ratio of the standard model to the image to be processed; $h$ is the height of the image to be processed; $P_{ud}$ is the filling offset of each pixel point of the image to be processed corresponding to the top-bottom filling mode; and $H_s$ is the height of the standard model.
3. The method according to claim 1, wherein if it is determined that the filling manner for image filling of the image to be processed is a left-right filling manner, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:
$$w' = r_h \cdot w,\qquad h' = H_s,\qquad P_{lr} = \frac{W_s - w'}{2}$$

wherein $w'$ is the scaled width of the image to be processed; $r_h$ is the height ratio of the standard model to the image to be processed; $w$ is the width of the image to be processed; $h'$ is the scaled height of the image to be processed; $H_s$ is the height of the standard model; $W_s$ is the width of the standard model; and $P_{lr}$ is the filling offset of each pixel point of the image to be processed corresponding to the left-right filling mode.
4. The method according to claim 1, wherein if it is determined that the filling manner for image filling of the image to be processed is a vertical filling manner, calculating a coordinate mapping relationship between the scaled image and the image to be processed according to the following formula:
$$x' = r_w \cdot x,\qquad y' = r_w \cdot y + \frac{H_s - r_w \cdot h}{2}$$

wherein $x$ is the abscissa of a pixel point of the image to be processed; $y$ is the ordinate of the pixel point of the image to be processed; $x'$ is the abscissa of the corresponding pixel point of the scaled image; $y'$ is the ordinate of the corresponding pixel point of the scaled image; and $r_w$ is the ratio of the width of the standard model to the width of the image to be processed.
5. The method according to claim 1, wherein if it is determined that the filling manner for image filling of the image to be processed is a left-right filling manner, calculating a coordinate mapping relationship between the scaled image and the image to be processed according to the following formula:
$$x' = r_h \cdot x + \frac{W_s - r_h \cdot w}{2},\qquad y' = r_h \cdot y$$

wherein $x$ is the abscissa of a pixel point of the image to be processed; $y$ is the ordinate of the pixel point of the image to be processed; $x'$ is the abscissa of the corresponding pixel point of the scaled image; $y'$ is the ordinate of the corresponding pixel point of the scaled image; and $r_h$ is the ratio of the height of the standard model to the height of the image to be processed.
6. An image preprocessing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring an image to be processed, the size of the image to be processed, and the size of the standard model, which are input by a user; the standard model is a model for preprocessing the image to be processed;
a filling mode determining module, configured to determine a filling mode for image filling of the image to be processed according to a width ratio and a height ratio between the standard model and the image to be processed; the filling mode comprises an up-down filling mode and a left-right filling mode;
the calculation module is used for calculating, according to the determined filling mode corresponding to the image to be processed, the scaling size and the filling offset for scaling the image to be processed; the filling offset is the number of pixel points for filling the image to be processed;
the scaling module is used for scaling the image to be processed based on the calculated scaling size to obtain a scaled image;
the filling module is used for calculating, according to the filling offset, a coordinate mapping relationship between the coordinates of the pixel points of the scaled image and the coordinates of the pixel points of the image to be processed, and filling the scaled image into the standard model based on the color value of each channel of the image to be processed and the coordinate mapping relationship, to obtain a preprocessed target image;
the filling mode determining module is specifically configured to:
calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
calculating a height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
judging whether the height ratio is greater than the width ratio;
if so, determining the up-down filling mode as the filling mode for image filling of the image to be processed;
and if not, determining the left-right filling mode as the filling mode for image filling of the image to be processed.
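The determining module's steps above (compute both ratios, compare them, pick a mode) can be sketched as follows; a sketch of the claimed procedure under conventional letterbox assumptions, not the patent's verbatim formulas:

```python
def choose_filling_mode(img_w, img_h, model_w, model_h):
    """Pick the filling mode by comparing the height ratio with the
    width ratio of the standard model to the image to be processed."""
    r_w = model_w / img_w  # width ratio of model to image
    r_h = model_h / img_h  # height ratio of model to image
    # a greater height ratio means the image is relatively wide, so the
    # spare space after scaling appears above and below (up-down mode)
    return "up-down" if r_h > r_w else "left-right"

print(choose_filling_mode(1280, 720, 416, 416))  # -> up-down
print(choose_filling_mode(360, 640, 416, 416))   # -> left-right
```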
7. An electronic device, comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the image preprocessing method according to any one of claims 1 to 5.
8. A computer storage medium, characterized in that the computer storage medium stores a computer program which, when executed by a processor, performs the steps of the image preprocessing method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210995819.0A CN115063299B (en) | 2022-08-19 | 2022-08-19 | Image preprocessing method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210995819.0A CN115063299B (en) | 2022-08-19 | 2022-08-19 | Image preprocessing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115063299A CN115063299A (en) | 2022-09-16 |
CN115063299B true CN115063299B (en) | 2022-11-18 |
Family
ID=83207979
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210995819.0A Active CN115063299B (en) | 2022-08-19 | 2022-08-19 | Image preprocessing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115063299B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112613570A (en) * | 2020-12-29 | 2021-04-06 | 深圳云天励飞技术股份有限公司 | Image detection method, image detection device, equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5389879B2 (en) * | 2011-09-20 | 2014-01-15 | 株式会社日立製作所 | Imaging apparatus, surveillance camera, and camera screen masking method |
CN111292245A (en) * | 2018-12-07 | 2020-06-16 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN109934773B (en) * | 2019-03-13 | 2023-08-25 | 北京旷视科技有限公司 | Image processing method, device, electronic equipment and computer readable medium |
CN111402228B (en) * | 2020-03-13 | 2021-05-07 | 腾讯科技(深圳)有限公司 | Image detection method, device and computer readable storage medium |
CN112215751A (en) * | 2020-10-13 | 2021-01-12 | Oppo广东移动通信有限公司 | Image scaling method, image scaling device and terminal equipment |
Also Published As
Publication number | Publication date |
---|---|
CN115063299A (en) | 2022-09-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109961507B (en) | Face image generation method, device, equipment and storage medium | |
CN111192292A (en) | Target tracking method based on attention mechanism and twin network and related equipment | |
CN109754359B (en) | Pooling processing method and system applied to convolutional neural network | |
CN108875931B (en) | Neural network training and image processing method, device and system | |
CN116188805B (en) | Image content analysis method and device for massive images and image information network | |
KR20220051162A (en) | Visual positioning methods, training methods for related models, and related devices and devices | |
CN108124489B (en) | Information processing method, apparatus, cloud processing device and computer program product | |
CN109977952B (en) | Candidate target detection method based on local maximum | |
CN107871321B (en) | Image segmentation method and device | |
CN109859314B (en) | Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and storage medium | |
CN111008631B (en) | Image association method and device, storage medium and electronic device | |
CN113112542A (en) | Visual positioning method and device, electronic equipment and storage medium | |
CN112802081A (en) | Depth detection method and device, electronic equipment and storage medium | |
CN109377552B (en) | Image occlusion calculating method, device, calculating equipment and storage medium | |
CN113516697A (en) | Image registration method and device, electronic equipment and computer-readable storage medium | |
CN116188917B (en) | Defect data generation model training method, defect data generation method and device | |
CN115063299B (en) | Image preprocessing method and device, electronic equipment and storage medium | |
US20230260211A1 (en) | Three-Dimensional Point Cloud Generation Method, Apparatus and Electronic Device | |
CN112598611A (en) | Method and device for synthesizing and identifying embossed bank card number image | |
CN109461198B (en) | Grid model processing method and device | |
CN111814884A (en) | Target detection network model upgrading method based on deformable convolution | |
CN111860054A (en) | Convolutional network training method and device | |
JP2005339535A (en) | Calculation of dissimilarity measure | |
CN108520259A (en) | A kind of extracting method of foreground target, device, equipment and storage medium | |
CN112288748B (en) | Semantic segmentation network training and image semantic segmentation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | ||
Address after: Room 711C, 7th Floor, Building A, Building 1, Yard 19, Ronghua Middle Road, Beijing Economic and Technological Development Zone, Beijing 100176; Patentee after: Beijing Zhongke Flux Technology Co.,Ltd.
Address before: Room 711C, 7th Floor, Building A, Building 1, Yard 19, Ronghua Middle Road, Beijing Economic and Technological Development Zone, Beijing 100176; Patentee before: Beijing Ruixin high throughput technology Co.,Ltd.