CN115063299B - Image preprocessing method and device, electronic equipment and storage medium - Google Patents

Image preprocessing method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN115063299B
CN115063299B (application CN202210995819.0A)
Authority
CN
China
Prior art keywords
image
processed
filling
standard model
height
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210995819.0A
Other languages
Chinese (zh)
Other versions
CN115063299A (en)
Inventor
杨耀宗
张晓辰
罗鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongke Flux Technology Co ltd
Original Assignee
Beijing Ruixin High Throughput Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ruixin High Throughput Technology Co ltd
Priority to CN202210995819.0A
Publication of CN115063299A
Application granted
Publication of CN115063299B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4023Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/32Normalisation of the pattern dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides an image preprocessing method, apparatus, electronic device, and storage medium, the method comprising: determining a filling manner according to the width ratio and the height ratio between the standard model and the image to be processed; calculating a scaling size and a filling offset for scaling the image to be processed according to the determined filling manner; scaling the image to be processed based on the scaling size to obtain a scaled image; and calculating a coordinate mapping relation based on the filling offset, and filling the scaled image into the standard model based on the color values of the R, G and B channels of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image. By establishing the mapping relation between the image to be processed and the standard model, the scaling and filling of the image can be accomplished in a single traversal when the image is input into the model, which improves image processing efficiency.

Description

Image preprocessing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image preprocessing method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of computer vision, more and more computer-vision-based models are being deployed. Model deployment places extremely high demands on inference speed, and the image preprocessing stage accounts for an important part of a model's time consumption. During inference with a target detection algorithm, the aspect ratio of the input image usually needs to remain fixed, and some target detection models require a fixed input size; therefore, when an image to be processed is input into a model for detection, the image needs to be filled and scaled.
Generally, the main technique for filling and scaling an image relies on functions provided under the Open Source Computer Vision Library (OpenCV) framework. These functions are already packaged, so implementing a custom fill-and-scale operation requires invoking the OpenCV functions step by step and cannot be completed in a single step. This is inflexible, cumbersome, and time-consuming, and greatly reduces the efficiency of image preprocessing.
Disclosure of Invention
The embodiments of the disclosure at least provide an image preprocessing method, an image preprocessing apparatus, an electronic device, and a storage medium. By establishing a mapping relation between the image to be processed and the standard model, the scaling and filling of the image can be accomplished in a single traversal when the image is input into the model, which improves image processing efficiency.
The embodiment of the disclosure provides an image preprocessing method, which comprises the following steps:
acquiring an image to be processed, the size of the image to be processed and the size of a standard model which are input by a user; the standard model is a model for preprocessing the image to be processed;
determining a filling mode for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed; the filling mode comprises an up-down filling mode and a left-right filling mode;
calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; the filling offset is the number of pixel points for filling the image to be processed;
scaling the image to be processed based on the calculated scaling size to obtain a scaled image;
and calculating a coordinate mapping relation between the coordinates corresponding to the pixel points of the scaled image and the coordinates corresponding to the pixel points of the image to be processed according to the filling offset, and filling the scaled image into the standard model based on the color value of each channel of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image.
In an optional implementation manner, the determining, according to the width ratio and the height ratio between the standard model and the image to be processed, a filling manner for image filling of the image to be processed includes:
calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
calculating the height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
judging whether the height ratio is larger than the width ratio or not;
if so, determining the up-down filling mode as a filling mode for image filling aiming at the image to be processed;
and if not, determining the left and right filling modes as the filling modes for filling the image to be processed.
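The decision step above can be sketched in a few lines (the function and variable names here are illustrative assumptions, not from the patent):

```python
def choose_fill_mode(img_w, img_h, model_w, model_h):
    """Decide the filling manner by comparing the height ratio and the
    width ratio between the standard model and the image to be processed."""
    width_ratio = model_w / img_w
    height_ratio = model_h / img_h
    # A larger height ratio means width is the limiting dimension, so the
    # scaled image leaves empty rows to fill at the top and bottom.
    return "up-down" if height_ratio > width_ratio else "left-right"
```

For a 1024 px wide, 768 px high image and a 640 x 640 model input this returns "up-down", matching the worked example given later in the description.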
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the up-down filling manner, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:

$w_{new} = W$

$h_{new} = r_w \times h$

$d = (H - h_{new}) / 2$

wherein, $w_{new}$ is the scaled width of the image to be processed; $W$ is the width of the standard model; $h_{new}$ is the scaled height of the image to be processed; $r_w$ is the width ratio between the standard model and the image to be processed; $h$ is the height of the image to be processed; $d$ is the filling offset, in pixel points, of the image to be processed corresponding to the up-down filling manner; and $H$ is the height of the standard model.
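The up-down case can be computed as follows (a sketch; the function name and the rounding to whole pixels are illustrative assumptions, with the filling offset taken as half the leftover height so the fill is split evenly between top and bottom):

```python
def updown_scale_and_offset(img_w, img_h, model_w, model_h):
    """Scaling size and filling offset for the up-down filling manner:
    the image is scaled to the full model width, and the leftover rows
    are split evenly between top and bottom."""
    r_w = model_w / img_w             # width ratio between model and image
    new_w = model_w                   # scaled width = model width
    new_h = round(r_w * img_h)        # scaled height = width ratio * image height
    offset = (model_h - new_h) // 2   # rows of fill above the image
    return new_w, new_h, offset
```

With a 1024 x 768 image and a 640 x 640 model input this gives a scaled size of 640 x 480 and an offset of 80 fill rows.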
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the left-right filling manner, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:

$w_{new} = r_h \times w$

$h_{new} = H$

$d = (W - w_{new}) / 2$

wherein, $w_{new}$ is the scaled width of the image to be processed; $r_h$ is the height ratio between the standard model and the image to be processed; $w$ is the width of the image to be processed; $h_{new}$ is the scaled height of the image to be processed; $H$ is the height of the standard model; $W$ is the width of the standard model; and $d$ is the filling offset, in pixel points, of the image to be processed corresponding to the left-right filling manner.
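The left-right case mirrors the up-down case with the roles of width and height exchanged (again a sketch with illustrative names and rounding):

```python
def leftright_scale_and_offset(img_w, img_h, model_w, model_h):
    """Scaling size and filling offset for the left-right filling manner:
    the image is scaled to the full model height, and the leftover columns
    are split evenly between the left and right sides."""
    r_h = model_h / img_h             # height ratio between model and image
    new_h = model_h                   # scaled height = model height
    new_w = round(r_h * img_w)        # scaled width = height ratio * image width
    offset = (model_w - new_w) // 2   # columns of fill left of the image
    return new_w, new_h, offset
```

With a 768 x 1024 image and a 640 x 640 model input this gives a scaled size of 480 x 640 and an offset of 80 fill columns.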
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the up-down filling manner, the coordinate mapping relationship between the scaled image and the image to be processed is calculated according to the following formulas:

$x = x' / r_w$

$y = (y' - d) / r_w$

wherein, $x$ is the abscissa of a pixel point of the image to be processed; $y$ is the ordinate of a pixel point of the image to be processed; $x'$ is the abscissa of a pixel point of the scaled image; $y'$ is the ordinate of a pixel point of the scaled image; $r_w$ is the ratio of the width of the standard model to the width of the image to be processed; and $d$ is the filling offset of the up-down filling manner.
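The up-down coordinate mapping can be read as an inverse lookup: for each pixel of the model-sized output it gives the source pixel of the image to be processed, which is what allows the fill loop to run once over the output. A sketch with illustrative names:

```python
def updown_source_coords(x_out, y_out, r_w, offset):
    """Inverse coordinate mapping for the up-down filling manner:
    subtract the vertical fill offset, then undo the width-ratio scaling."""
    x_src = x_out / r_w
    y_src = (y_out - offset) / r_w
    return x_src, y_src
```

For the 1024 x 768 example (width ratio 0.625, offset 80), output pixel (0, 80) maps back to source pixel (0, 0).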
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the left-right filling manner, the coordinate mapping relationship between the scaled image and the image to be processed is calculated according to the following formulas:

$x = (x' - d) / r_h$

$y = y' / r_h$

wherein, $x$ is the abscissa of a pixel point of the image to be processed; $y$ is the ordinate of a pixel point of the image to be processed; $x'$ is the abscissa of a pixel point of the scaled image; $y'$ is the ordinate of a pixel point of the scaled image; $r_h$ is the ratio of the height of the standard model to the height of the image to be processed; and $d$ is the filling offset of the left-right filling manner.
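Putting the pieces together, the single-traversal behaviour described in the summary can be sketched as one loop over the model-sized output that both scales (nearest neighbour) and fills. The function name, the pad value 114, and the rounding choices are illustrative assumptions, not from the patent:

```python
import math
import numpy as np

def letterbox_fill(img, model_w, model_h, pad_value=114):
    """Scale and fill an (H, W, 3) image into a (model_h, model_w, 3)
    canvas in a single traversal of the output, using the inverse
    coordinate mapping of whichever filling manner applies."""
    img_h, img_w = img.shape[:2]
    r_w, r_h = model_w / img_w, model_h / img_h
    if r_h > r_w:   # up-down filling: fill rows at top and bottom
        r, off_x, off_y = r_w, 0, (model_h - round(r_w * img_h)) // 2
    else:           # left-right filling: fill columns at left and right
        r, off_x, off_y = r_h, (model_w - round(r_h * img_w)) // 2, 0
    out = np.full((model_h, model_w, 3), pad_value, dtype=img.dtype)
    for y_out in range(model_h):
        for x_out in range(model_w):
            x_src = math.floor((x_out - off_x) / r)
            y_src = math.floor((y_out - off_y) / r)
            if 0 <= x_src < img_w and 0 <= y_src < img_h:
                # Copy the R, G and B color values of the source pixel.
                out[y_out, x_out] = img[y_src, x_src]
    return out
```

Only pixels whose mapped source coordinates fall inside the image are copied; every other output pixel keeps the fill value, so no separate padding pass is needed.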
The embodiment of the present disclosure further provides an image preprocessing apparatus, which includes:
the acquisition module is used for acquiring an image to be processed, the size of the image to be processed and the size of the standard model which are input by a user; the standard model is a model for preprocessing the image to be processed;
a filling mode determining module, configured to determine a filling mode for image filling on the image to be processed according to a width ratio and a height ratio between the standard model and the image to be processed; the filling mode comprises an up-down filling mode and a left-right filling mode;
the calculation module is used for calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; the filling offset is the number of pixel points for filling the image to be processed;
the scaling module is used for scaling the image to be processed based on the calculated scaling size to obtain a scaled image;
and the filling module is used for calculating a coordinate mapping relation between a coordinate corresponding to the pixel point of the zoomed image and a coordinate corresponding to the pixel point of the image to be processed according to the filling offset, and filling the zoomed image in the standard model based on the color value of each channel of the image to be processed and the coordinate mapping relation to obtain a target image which is subjected to preprocessing.
In an optional embodiment, the filling manner determining module is specifically configured to:
calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
calculating the height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
judging whether the height ratio is larger than the width ratio or not;
if so, determining the up-down filling mode as a filling mode for filling the image to be processed;
and if not, determining the left and right filling modes as filling modes for filling the image to be processed.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the up-down filling manner, the calculating module is configured to calculate the scaling size and the filling offset for scaling the image to be processed according to the following formulas:

$w_{new} = W$

$h_{new} = r_w \times h$

$d = (H - h_{new}) / 2$

wherein, $w_{new}$ is the scaled width of the image to be processed; $W$ is the width of the standard model; $h_{new}$ is the scaled height of the image to be processed; $r_w$ is the width ratio between the standard model and the image to be processed; $h$ is the height of the image to be processed; $d$ is the filling offset, in pixel points, of the image to be processed corresponding to the up-down filling manner; and $H$ is the height of the standard model.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the left-right filling manner, the calculating module is configured to calculate the scaling size and the filling offset for scaling the image to be processed according to the following formulas:

$w_{new} = r_h \times w$

$h_{new} = H$

$d = (W - w_{new}) / 2$

wherein, $w_{new}$ is the scaled width of the image to be processed; $r_h$ is the height ratio between the standard model and the image to be processed; $w$ is the width of the image to be processed; $h_{new}$ is the scaled height of the image to be processed; $H$ is the height of the standard model; $W$ is the width of the standard model; and $d$ is the filling offset, in pixel points, of the image to be processed corresponding to the left-right filling manner.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the up-down filling manner, the filling module is configured to calculate the coordinate mapping relationship between the scaled image and the image to be processed according to the following formulas:

$x = x' / r_w$

$y = (y' - d) / r_w$

wherein, $x$ is the abscissa of a pixel point of the image to be processed; $y$ is the ordinate of a pixel point of the image to be processed; $x'$ is the abscissa of a pixel point of the scaled image; $y'$ is the ordinate of a pixel point of the scaled image; $r_w$ is the ratio of the width of the standard model to the width of the image to be processed; and $d$ is the filling offset of the up-down filling manner.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is the left-right filling manner, the filling module is configured to calculate the coordinate mapping relationship between the scaled image and the image to be processed according to the following formulas:

$x = (x' - d) / r_h$

$y = y' / r_h$

wherein, $x$ is the abscissa of a pixel point of the image to be processed; $y$ is the ordinate of a pixel point of the image to be processed; $x'$ is the abscissa of a pixel point of the scaled image; $y'$ is the ordinate of a pixel point of the scaled image; $r_h$ is the ratio of the height of the standard model to the height of the image to be processed; and $d$ is the filling offset of the left-right filling manner.
An embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate with each other via the bus when the electronic device is running, and the machine-readable instructions are executed by the processor to perform the steps of the above embodiments.
The disclosed embodiments also provide a computer storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps in the above embodiments.
The image preprocessing method, apparatus, device, and storage medium provided by the embodiments of the disclosure operate by: acquiring an image to be processed input by a user, the size of the image to be processed, and the size of a standard model; determining a filling manner for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed; calculating a scaling size and a filling offset for scaling the image to be processed according to the determined filling manner; scaling the image to be processed based on the calculated scaling size to obtain a scaled image; and calculating a coordinate mapping relation between the coordinates corresponding to pixel points of the scaled image and the coordinates corresponding to pixel points of the image to be processed according to the filling offset, and filling the scaled image into the standard model based on the color values of each channel of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art may derive other related drawings from them without creative effort.
FIG. 1 is a flow chart illustrating an image preprocessing method provided by an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a specific method for determining a filling manner for image filling of the image to be processed in an image preprocessing method provided by the embodiment of the present disclosure;
FIG. 3 is a schematic filling diagram illustrating an up-down filling manner provided by an embodiment of the disclosure;
FIG. 4 is a schematic diagram illustrating left and right filling provided by the embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating an image preprocessing apparatus provided by an embodiment of the present disclosure;
fig. 6 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Illustration of the drawings:
500-image preprocessing device, 501-acquisition module, 502-filling mode determination module, 503-calculation module, 504-scaling module, 505-filling module, 600-electronic device, 610-processor, 620-memory, 621-internal memory, 622-external memory, 630-bus.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an association relationship, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Research has shown that the main technique for filling and scaling an image relies on functions under the OpenCV framework. These functions are already packaged, so implementing a custom fill-and-scale operation requires invoking the OpenCV functions step by step and cannot be completed in a single step. This is inflexible, cumbersome, and time-consuming, and greatly reduces the efficiency of image preprocessing.
Based on the above research, the present disclosure provides an image preprocessing method, apparatus, electronic device, and storage medium, wherein the method includes: acquiring an image to be processed input by a user, the size of the image to be processed, and the size of a standard model; determining a filling manner for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed; calculating a scaling size and a filling offset for scaling the image to be processed according to the determined filling manner; scaling the image to be processed based on the calculated scaling size to obtain a scaled image; and calculating a coordinate mapping relation between the coordinates corresponding to pixel points of the scaled image and the coordinates corresponding to pixel points of the image to be processed according to the filling offset, and filling the scaled image into the standard model based on the color values of all channels of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image. In this way, a mapping relation between the image to be processed and the standard model is established, so that scaling and filling can be accomplished in a single traversal when the image is input into the model, improving image processing efficiency.
To facilitate understanding of the present embodiment, first, an image preprocessing method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the image preprocessing method provided in the embodiments of the present disclosure is generally a computer device with certain computing power, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a terminal computing device, or a server or other processing device. In some possible implementations, the image pre-processing method may be implemented by a processor invoking computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of an image preprocessing method provided in the embodiment of the present disclosure is shown, where the method includes steps S101 to S105, where:
s101: and acquiring the image to be processed, the size of the image to be processed and the size of the standard model which are input by a user.
Here, before the deep learning algorithm model is trained or processed, the image needs to be preprocessed, so that the image and the model can be ensured to be adaptive to each other.
The image to be processed can be a sample image required by the training model, and can also be an image to be processed of the deep learning model.
And the standard model is a model for preprocessing the image to be processed.
Illustratively, a to-be-processed image A input by a user is acquired, the size of the image to be processed being: height 1024 px, width 768 px; and the size of the standard model being: height 640 px, width 640 px.
S102: and determining a filling mode for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed.
After the size of the image to be processed and the size of the standard model are obtained, a filling mode of the image to be processed is determined according to the size of the image to be processed and the size of the standard model, the image to be processed is filled based on the determined filling mode so as to expand the size of the image to be processed to the size of the standard model, and the size mapping relation between the standard model and the image to be processed can be obtained based on the width ratio and the height ratio between the standard model and the image to be processed.
The filling modes comprise an up-down filling mode and a left-right filling mode.
Further, referring to fig. 2, a flowchart of a specific method for determining a filling manner for filling the image to be processed in the image preprocessing method according to the embodiment of the present disclosure is shown, where the method includes steps S1021 to S1025, where:
s1021: calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
s1022: calculating the height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
s1023: judging whether the height ratio is larger than the width ratio or not;
s1024: if so, determining the up-down filling mode as a filling mode for filling the image to be processed;
s1025: and if not, determining the left and right filling modes as the filling modes for filling the image to be processed.
Two filling manners are set in advance. According to the size of the image to be processed and the size of the standard model input by the user, the width ratio between the width of the standard model and the width of the image to be processed is calculated, and the height ratio between the height of the standard model and the height of the image to be processed is calculated. The width ratio and the height ratio are then compared using a comparison function: if the height ratio is greater than the width ratio, the up-down filling manner shown in fig. 3 is determined as the filling manner for the image to be processed; if the height ratio is smaller than the width ratio, the left-right filling manner shown in fig. 4 is determined as the filling manner for the image to be processed. Selecting between different filling manners according to the width ratio and the height ratio improves the filling efficiency for the image to be processed.
Illustratively, a to-be-processed image input by a user is acquired, the size of the image to be processed being: height 768 px, width 1024 px; the size of the standard model being: height 640 px, width 640 px. Then height ratio = 640/768 and width ratio = 640/1024; the height ratio is larger than the width ratio, so the up-down filling manner is selected.
Illustratively, an image to be processed A input by a user is acquired. The size of the image to be processed is: height 1024px, width 768px; the size of the standard model is: height 640px, width 640px. Height ratio = 640/1024 = 0.625; width ratio = 640/768 ≈ 0.833. The width ratio is greater than the height ratio, so the left-right filling mode is selected.
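The decision rule of steps S1021 to S1025 can be sketched as follows (a minimal Python sketch; the function name and the returned mode strings are illustrative, not part of the embodiment):

```python
def choose_fill_mode(img_h, img_w, model_h, model_w):
    """Pick the padding direction per steps S1021-S1025."""
    width_ratio = model_w / img_w    # S1021
    height_ratio = model_h / img_h   # S1022
    # S1023-S1025: compare the two ratios
    if height_ratio > width_ratio:
        return "top-bottom"          # pad above and below the image
    return "left-right"              # pad on the left and right sides

# The two worked examples above:
print(choose_fill_mode(768, 1024, 640, 640))   # top-bottom
print(choose_fill_mode(1024, 768, 640, 640))   # left-right
```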
S103: and calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed.
Here, after the filling manner of the image to be processed is determined, the scaling size and the filling offset amount of the image to be processed are calculated to scale and fill the image to be processed according to the calculated scaling size and filling offset amount.
And the filling offset is the number of pixel points for filling the image to be processed.
And each determined filling mode has a corresponding mode for calculating the scaling size and the filling offset.
Further, in an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is an up-down filling manner, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:
$w' = w_m$

$h' = r_w \cdot h$

$\Delta_{ud} = \frac{h_m - r_w \cdot h}{2} \cdot w_m$

wherein $w'$ is the width of the image to be processed after scaling; $w_m$ is the width of the standard model; $h'$ is the height of the image to be processed after scaling; $r_w$ is the width ratio of the standard model to the image to be processed; $h$ is the height of the image to be processed; $\Delta_{ud}$ is the filling offset of each pixel point of the image to be processed corresponding to the up-down filling mode; and $h_m$ is the height of the standard model.
Illustratively, an image to be processed input by a user is obtained, with a pixel point A at coordinate (100, 100). The size of the image to be processed is: height 768px, width 1024px; the size of the standard model is: height 640px, width 640px. Height ratio = 640/768; width ratio = 640/1024. The height ratio is greater than the width ratio, so the up-down filling mode is selected: the scaled width is 640px, the scaled height is 768 × 640/1024 = 480px, and the filling offset is (640 − 480)/2 × 640 = 51200.
In an optional implementation manner, if it is determined that the filling manner for image filling of the image to be processed is a left-right filling manner, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:
$w' = r_h \cdot w$

$h' = h_m$

$\Delta_{lr} = \frac{w_m - r_h \cdot w}{2}$

wherein $w'$ is the width of the image to be processed after scaling; $r_h$ is the height ratio of the standard model to the image to be processed; $w$ is the width of the image to be processed; $h'$ is the height of the image to be processed after scaling; $h_m$ is the height of the standard model; $w_m$ is the width of the standard model; and $\Delta_{lr}$ is the filling offset of each pixel point of the image to be processed corresponding to the left-right filling mode.
Illustratively, an image to be processed A input by a user is obtained, with a pixel point A at coordinate (100, 100). The size of the image to be processed is: height 1024px, width 768px; the size of the standard model is: height 640px, width 640px. Height ratio = 640/1024; width ratio = 640/768. The height ratio is smaller than the width ratio, so the left-right filling mode is selected: the scaled width is 768 × 640/1024 = 480px, the scaled height is 640px, and the filling offset is (640 − 480)/2 = 80.
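The two scaling-size and filling-offset computations can be sketched together as follows (an illustrative Python sketch following the formulas above; the function name and tuple layout are assumptions). Note that in the up-down mode the offset counts filled pixel points (band height × model width), while in the left-right mode it counts filled columns per row, matching the worked examples:

```python
def scale_and_offset(img_h, img_w, model_h, model_w):
    """Return (scaled_h, scaled_w, offset) for the chosen filling mode."""
    height_ratio = model_h / img_h
    width_ratio = model_w / img_w
    if height_ratio > width_ratio:              # up-down filling mode
        scaled_w = model_w                      # w' = w_m
        scaled_h = int(width_ratio * img_h)     # h' = r_w * h
        offset = (model_h - scaled_h) // 2 * model_w
    else:                                       # left-right filling mode
        scaled_h = model_h                      # h' = h_m
        scaled_w = int(height_ratio * img_w)    # w' = r_h * w
        offset = (model_w - scaled_w) // 2
    return scaled_h, scaled_w, offset

print(scale_and_offset(768, 1024, 640, 640))   # (480, 640, 51200)
print(scale_and_offset(1024, 768, 640, 640))   # (640, 480, 80)
```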
S104: and zooming the image to be processed based on the calculated zooming size to obtain a zoomed image.
Here, when the image to be processed is scaled, the position of the image to be processed may be at the center of the standard model, at the top of the standard model, or at the bottom of the standard model.
The scaled image is the image obtained after scaling the size of the image to be processed; it may leave an area of the standard model that needs to be filled.
S105: and calculating a coordinate mapping relation between a coordinate corresponding to a pixel point of the zoomed image and a coordinate corresponding to a pixel point of the image to be processed according to the filling offset, and filling the zoomed image in the standard model based on the color value of each channel of the image to be processed and the coordinate mapping relation to obtain a target image subjected to preprocessing.
Here, there are different calculation methods for calculating the coordinate mapping relationship for different filling methods.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is an up-down filling manner, the coordinate mapping relationship between the scaled image and the image to be processed is calculated according to the following formula:
$x' = r_w \cdot x$

$y' = r_w \cdot y + \frac{h_m - r_w \cdot h}{2}$

wherein $x$ is the abscissa of the pixel point of the image to be processed; $y$ is the ordinate of the pixel point of the image to be processed; $x'$ is the abscissa of the pixel point of the scaled image; $y'$ is the ordinate of the pixel point of the scaled image; and $r_w$ is the ratio of the width of the standard model to the width of the image to be processed.

Illustratively, an image to be processed input by a user is obtained, with a pixel point A at coordinate (100, 100). The size of the image to be processed is: height 768px, width 1024px; the size of the standard model is: height 640px, width 640px. Height ratio = 640/768; width ratio = 640/1024. The height ratio is greater than the width ratio, so the up-down filling mode is selected: the scaled width is 640px, the scaled height is 480px, and the filling offset is 51200. The mapping point C of point A in the scaled image is then calculated: the y-axis coordinate of C is C_y = 100 × 640/1024 + (640 − 768 × 640/1024)/2 ≈ 63 + 80 = 143, and the x-axis coordinate of C is C_x = 100 × 640/1024 ≈ 63, so the coordinate of C is (63, 143).
In an optional embodiment, if it is determined that the filling manner for image filling of the image to be processed is a left-right filling manner, the coordinate mapping relationship between the scaled image and the image to be processed is calculated according to the following formula:
$x' = r_h \cdot x + \frac{w_m - r_h \cdot w}{2}$

$y' = r_h \cdot y$

wherein $x$ is the abscissa of the pixel point of the image to be processed; $y$ is the ordinate of the pixel point of the image to be processed; $x'$ is the abscissa of the pixel point of the scaled image; $y'$ is the ordinate of the pixel point of the scaled image; and $r_h$ is the ratio of the height of the standard model to the height of the image to be processed.
Illustratively, an image to be processed input by a user is obtained, with a pixel point A at coordinate (100, 100). The size of the image to be processed is: height 1024px, width 768px; the size of the standard model is: height 640px, width 640px. Height ratio = 640/1024; width ratio = 640/768. The height ratio is smaller than the width ratio, so the left-right filling mode is selected: the scaled width is 480px, the scaled height is 640px, and the filling offset is 80. The mapping point B of point A in the scaled image is then calculated: the y-axis coordinate of B is B_y = 100 × 640/1024 ≈ 63, and the x-axis coordinate of B is B_x = 100 × 640/1024 + (640 − 768 × 640/1024)/2 ≈ 63 + 80 = 143, so the coordinate of B is (143, 63).
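Both coordinate mappings can be sketched in one helper (an illustrative Python sketch; the function name is an assumption). The sketch returns exact fractional coordinates; the worked examples round 62.5 up to the integer pixel 63:

```python
def map_point(x, y, img_h, img_w, model_h, model_w):
    """Map pixel (x, y) of the source image into the scaled/padded image."""
    height_ratio = model_h / img_h
    width_ratio = model_w / img_w
    if height_ratio > width_ratio:      # up-down filling: offset on y
        r = width_ratio
        return (r * x, r * y + (model_h - r * img_h) / 2)
    r = height_ratio                    # left-right filling: offset on x
    return (r * x + (model_w - r * img_w) / 2, r * y)

# Point A = (100, 100) in both worked examples:
print(map_point(100, 100, 768, 1024, 640, 640))   # (62.5, 142.5) -> rounds to (63, 143)
print(map_point(100, 100, 1024, 768, 640, 640))   # (142.5, 62.5) -> rounds to (143, 63)
```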
Further, in an optional implementation manner, the color values of the channels R, G, and B of the image to be processed are calculated according to the following calculation formula:
$R = I[\,y \cdot w + x\,]$

$G = I[\,h \cdot w + y \cdot w + x\,]$

$B = I[\,2 \cdot h \cdot w + y \cdot w + x\,]$

wherein $R$ is the color value of the R channel of the image to be processed, $G$ is the color value of the G channel of the image to be processed, and $B$ is the color value of the B channel of the image to be processed; $I$ is the input image to be processed, stored channel by channel as a one-dimensional array; $y \cdot w + x$ is the linear coordinate of the image to be processed in the R channel; $h \cdot w + y \cdot w + x$ is the linear coordinate of the image to be processed in the G channel; and $2 \cdot h \cdot w + y \cdot w + x$ is the linear coordinate of the image to be processed in the B channel.
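The channel-major (planar) linear indexing described by these formulas can be sketched as follows (illustrative Python; the helper name and dummy buffer are assumptions — the buffer values are set equal to their indices so the printed result exposes the linear coordinates):

```python
def rgb_at(img_flat, x, y, h, w):
    """Read R, G, B of pixel (x, y) from a flat channel-major buffer."""
    base = y * w + x          # linear coordinate in the R channel
    plane = h * w             # pixel count of one channel plane
    return img_flat[base], img_flat[plane + base], img_flat[2 * plane + base]

# Pixel A = (100, 100) of a 768x1024 image, as in the worked examples:
h, w = 768, 1024
img = list(range(3 * h * w))          # dummy buffer: value == index
print(rgb_at(img, 100, 100, h, w))    # (102500, 888932, 1675364)
```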
In an alternative embodiment, the filling of the scaled image in the standard model is calculated according to the following calculation:
$O[\,y' \cdot w_m + x'\,] = R$

$O[\,h_m \cdot w_m + y' \cdot w_m + x'\,] = G$

$O[\,2 \cdot h_m \cdot w_m + y' \cdot w_m + x'\,] = B$

wherein $O$ is the output scaled image; $y' \cdot w_m + x'$ is the linear coordinate, in the R channel of the scaled image, corresponding to each pixel point of the image to be processed; $h_m \cdot w_m + y' \cdot w_m + x'$ is the linear coordinate, in the G channel of the scaled image, corresponding to each pixel point of the image to be processed; and $2 \cdot h_m \cdot w_m + y' \cdot w_m + x'$ is the linear coordinate, in the B channel of the scaled image, corresponding to each pixel point of the image to be processed.
Illustratively, an image to be processed input by a user is obtained, with a pixel point A at coordinate (100, 100). The size of the image to be processed is: height 768px, width 1024px; the size of the standard model is: height 640px, width 640px. Height ratio = 640/768; width ratio = 640/1024. The height ratio is greater than the width ratio, so the up-down filling mode is selected: the scaled width is 640px, the scaled height is 480px, and the filling offset is 51200. The linear coordinates of the R, G and B channels of pixel point A in the image to be processed are: R_A = 100 × 1024 + 100; G_A = 1024 × 768 + 100 × 1024 + 100; B_A = 2 × 1024 × 768 + 100 × 1024 + 100. The mapping point C of point A in the scaled image is then calculated: the y-axis coordinate of C is C_y = 100 × 640/1024 + (640 − 768 × 640/1024)/2 ≈ 63 + 80 = 143, and the x-axis coordinate of C is C_x = 100 × 640/1024 ≈ 63, so the coordinate of C is (63, 143). The linear coordinate of point C in the R channel is R_C = 143 × 640 + 63; in the G channel, G_C = 640 × 640 + 143 × 640 + 63; in the B channel, B_C = 2 × 640 × 640 + 143 × 640 + 63. The pixel values of the R, G and B channels of pixel point A are then assigned to pixel point C.
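The assignment step of this example can be sketched over flat planar buffers (an illustrative Python helper; the function name and the dummy source buffer, whose values equal their indices, are assumptions):

```python
def fill_pixel(src, dst, x, y, x2, y2, h, w, mh, mw):
    """Copy R, G, B of source pixel (x, y) to target pixel (x2, y2),
    both images stored as flat channel-major (planar) buffers."""
    a = y * w + x              # linear coordinate of A in the R channel
    c = y2 * mw + x2           # linear coordinate of C in the R channel
    for ch in range(3):        # offset into the R, G, B planes in turn
        dst[ch * mh * mw + c] = src[ch * h * w + a]

# Point A (100, 100) of a 768x1024 image maps to C (63, 143), as above:
src = list(range(3 * 768 * 1024))     # dummy planar source: value == index
dst = [0] * (3 * 640 * 640)
fill_pixel(src, dst, 100, 100, 63, 143, 768, 1024, 640, 640)
print(dst[143 * 640 + 63])            # 102500, i.e. R_A = 100*1024 + 100
```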
Illustratively, an image to be processed input by a user is obtained, with a pixel point A at coordinate (100, 100). The size of the image to be processed is: height 1024px, width 768px; the size of the standard model is: height 640px, width 640px. Height ratio = 640/1024; width ratio = 640/768. The height ratio is smaller than the width ratio, so the left-right filling mode is selected: the scaled width is 480px, the scaled height is 640px, and the filling offset is 80. The linear coordinates of the R, G and B channels of pixel point A in the image to be processed are: R_A = 100 × 768 + 100; G_A = 1024 × 768 + 100 × 768 + 100; B_A = 2 × 1024 × 768 + 100 × 768 + 100. The mapping point C of point A in the scaled image is then calculated: the y-axis coordinate of C is C_y = 100 × 640/1024 ≈ 63, and the x-axis coordinate of C is C_x = 100 × 640/1024 + (640 − 768 × 640/1024)/2 ≈ 63 + 80 = 143, so the coordinate of C is (143, 63). The linear coordinate of point C in the R channel is R_C = 63 × 640 + 143; in the G channel, G_C = 640 × 640 + 63 × 640 + 143; in the B channel, B_C = 2 × 640 × 640 + 63 × 640 + 143. The pixel values of the R, G and B channels of pixel point A are then assigned to pixel point C.
The image preprocessing method disclosed by the embodiment of the present disclosure acquires the image to be processed, the size of the image to be processed and the size of the standard model input by a user; determines a filling mode for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed; calculates the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; scales the image to be processed based on the calculated scaling size to obtain a scaled image; and calculates a coordinate mapping relation between coordinates corresponding to pixel points of the scaled image and coordinates corresponding to pixel points of the image to be processed according to the filling offset, and fills the scaled image in the standard model based on the color values of each channel of the image to be processed and the coordinate mapping relation to obtain a preprocessed target image. In this way, a mapping relation between the image to be processed and the standard model is established, so that the image to be processed needs to be traversed only once when input into the model to realize both the scaling and the filling of the image, which improves the image processing efficiency.
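Putting the steps together, the single-traversal scale-and-fill can be sketched end to end (a minimal NumPy sketch; the nearest-neighbour rounding and the gray fill value 114 are illustrative choices, not specified by the embodiment, and the function name is an assumption):

```python
import numpy as np

def preprocess(img, model_h, model_w, fill=114):
    """Scale and pad img of shape (h, w, 3) into (model_h, model_w, 3),
    traversing the source pixels exactly once via the forward mapping."""
    h, w = img.shape[:2]
    r = min(model_h / h, model_w / w)    # the limiting ratio picks the mode
    dy = (model_h - r * h) / 2           # up-down filling offset (rows)
    dx = (model_w - r * w) / 2           # left-right filling offset (cols)
    out = np.full((model_h, model_w, 3), fill, dtype=img.dtype)
    ys, xs = np.mgrid[0:h, 0:w]          # every source pixel exactly once
    out[(r * ys + dy).astype(int), (r * xs + dx).astype(int)] = img
    return out

# A 768x1024 all-black image into a 640x640 model: 80-row bands remain filled.
img = np.zeros((768, 1024, 3), dtype=np.uint8)
res = preprocess(img, 640, 640)
print(res.shape)                         # (640, 640, 3)
```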
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, an image preprocessing device corresponding to the image preprocessing method is also provided in the embodiments of the present disclosure, and since the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the image preprocessing method described above in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 5, fig. 5 is a schematic diagram of image preprocessing according to an embodiment of the disclosure. As shown in fig. 5, an image preprocessing apparatus 500 provided by an embodiment of the present disclosure includes:
an obtaining module 501, configured to obtain an image to be processed, a size of the image to be processed, and a size of the standard model, which are input by a user; the standard model is a model for preprocessing the image to be processed;
a filling manner determining module 502, configured to determine a filling manner for image filling on the image to be processed according to a width ratio and a height ratio between the standard model and the image to be processed; the filling mode comprises an up-down filling mode and a left-right filling mode;
a calculating module 503, configured to calculate a scaling size and a filling offset for scaling the image to be processed according to the determined filling manner corresponding to the image to be processed; the filling offset is the number of pixel points for filling the image to be processed;
a scaling module 504, configured to scale the image to be processed based on the calculated scaling size to obtain a scaled image;
and a filling module 505, configured to calculate a coordinate mapping relationship between a coordinate corresponding to a pixel point of the zoomed image and a coordinate corresponding to a pixel point of the image to be processed according to the filling offset, and fill the zoomed image in the standard model based on a color value of each channel of the image to be processed and the coordinate mapping relationship, so as to obtain a target image after preprocessing.
In an optional implementation manner, the filling manner determining module 502 is specifically configured to:
calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
calculating a height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
judging whether the height ratio is larger than the width ratio or not;
if so, determining the up-down filling mode as a filling mode for image filling aiming at the image to be processed;
and if not, determining the left and right filling modes as the filling modes for filling the image to be processed.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is an up-down filling manner, the calculating module 503 is configured to calculate a scaling size and a filling offset for scaling the image to be processed according to the following formulas:
$w' = w_m$

$h' = r_w \cdot h$

$\Delta_{ud} = \frac{h_m - r_w \cdot h}{2} \cdot w_m$

wherein $w'$ is the width of the image to be processed after scaling; $w_m$ is the width of the standard model; $h'$ is the height of the image to be processed after scaling; $r_w$ is the width ratio of the standard model to the image to be processed; $h$ is the height of the image to be processed; $\Delta_{ud}$ is the filling offset of each pixel point of the image to be processed corresponding to the up-down filling mode; and $h_m$ is the height of the standard model.
In an optional implementation manner, if it is determined that the filling manner for image filling of the image to be processed is a left-right filling manner, the calculating module 503 is configured to calculate a scaling size and a filling offset for scaling the image to be processed according to the following formulas:
$w' = r_h \cdot w$

$h' = h_m$

$\Delta_{lr} = \frac{w_m - r_h \cdot w}{2}$

wherein $w'$ is the width of the image to be processed after scaling; $r_h$ is the height ratio of the standard model to the image to be processed; $w$ is the width of the image to be processed; $h'$ is the height of the image to be processed after scaling; $h_m$ is the height of the standard model; $w_m$ is the width of the standard model; and $\Delta_{lr}$ is the filling offset of each pixel point of the image to be processed corresponding to the left-right filling mode.
In an optional implementation manner, if it is determined that the filling manner of image filling for the image to be processed is an up-down filling manner, the filling module is configured to calculate a coordinate mapping relationship between the scaled image and the image to be processed according to the following formula:
$x' = r_w \cdot x$

$y' = r_w \cdot y + \frac{h_m - r_w \cdot h}{2}$

wherein $x$ is the abscissa of the pixel point of the image to be processed; $y$ is the ordinate of the pixel point of the image to be processed; $x'$ is the abscissa of the pixel point of the scaled image; $y'$ is the ordinate of the pixel point of the scaled image; and $r_w$ is the ratio of the width of the standard model to the width of the image to be processed.
In an optional implementation manner, the filling module 505 is configured to calculate a coordinate mapping relationship between the scaled image and the image to be processed according to the following formula if it is determined that the filling manner for image filling of the image to be processed is a left-right filling manner:
$x' = r_h \cdot x + \frac{w_m - r_h \cdot w}{2}$

$y' = r_h \cdot y$

wherein $x$ is the abscissa of the pixel point of the image to be processed; $y$ is the ordinate of the pixel point of the image to be processed; $x'$ is the abscissa of the pixel point of the scaled image; $y'$ is the ordinate of the pixel point of the scaled image; and $r_h$ is the ratio of the height of the standard model to the height of the image to be processed.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
The image preprocessing device disclosed by the embodiment of the present disclosure comprises: an acquisition module, configured to acquire the image to be processed, the size of the image to be processed and the size of the standard model input by a user, wherein the standard model is a model for preprocessing the image to be processed; a filling mode determining module, configured to determine a filling mode for image filling of the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed, wherein the filling modes comprise an up-down filling mode and a left-right filling mode; a calculation module, configured to calculate the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed, wherein the filling offset is the number of pixel points for filling the image to be processed; a scaling module, configured to scale the image to be processed based on the calculated scaling size to obtain a scaled image; and a filling module, configured to calculate a coordinate mapping relation between coordinates corresponding to pixel points of the scaled image and coordinates corresponding to pixel points of the image to be processed according to the filling offset, and to fill the scaled image in the standard model based on the color values of each channel of the image to be processed and the coordinate mapping relation to obtain a preprocessed target image. In this way, a mapping relation between the image to be processed and the standard model is established, so that the image to be processed needs to be traversed only once when input into the model to realize both the scaling and the filling of the image, which improves the image processing efficiency.
Based on the same technical concept, the embodiment of the application also provides the electronic equipment. An embodiment of the present disclosure further provides an electronic device 600, as shown in fig. 6, which is a schematic structural diagram of the electronic device 600 provided in the embodiment of the present disclosure, and includes:
a processor 610, a memory 620, and a bus 630. The memory 620 is used for storing execution instructions and includes an internal memory 621 and an external storage 622; the internal memory 621 is used for temporarily storing operation data of the processor 610 and data exchanged with the external storage 622 such as a hard disk, and the processor 610 exchanges data with the external storage 622 through the internal memory 621. When the electronic device 600 operates, the processor 610 and the memory 620 communicate through the bus 630, so that the processor 610 can execute the steps of the image preprocessing method shown in the above method embodiments.
The embodiments of the present disclosure also provide a computer storage medium, where a computer program is stored on the computer storage medium, and when the computer program is executed by a processor, the steps of the image preprocessing method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product bears a program code, and instructions included in the program code may be used to execute the steps of the image preprocessing method described in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the electronic device, the storage medium and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed electronic device, storage medium, apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is only one type of logical function division, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used to illustrate the technical solutions of the present disclosure, but not to limit the technical solutions, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that: those skilled in the art can still make modifications or changes to the embodiments described in the foregoing embodiments, or make equivalent substitutions for some of the technical features, within the technical scope of the disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (8)

1. A method of image pre-processing, the method comprising:
acquiring an image to be processed, the size of the image to be processed, and the size of a standard model, each input by a user; the standard model is a model used for preprocessing the image to be processed;
determining a filling mode for filling the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed; the filling mode comprises an up-down filling mode and a left-right filling mode;
calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; the filling offset is the number of pixel points for filling the image to be processed;
scaling the image to be processed based on the calculated scaling size to obtain a scaled image;
calculating a coordinate mapping relation between the coordinates of the pixel points of the scaled image and the coordinates of the pixel points of the image to be processed according to the filling offset, and filling the scaled image into the standard model based on the color values of each channel of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image;
the determining a filling mode for image filling of the image to be processed according to the width ratio and the height ratio between the standard model and the image to be processed includes:
calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
calculating a height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
judging whether the height ratio is greater than the width ratio;
if so, determining the up-down filling mode as the filling mode for image filling of the image to be processed;
if not, determining the left-right filling mode as the filling mode for image filling of the image to be processed.
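The decision rule in claim 1 — compare the model-to-image height ratio against the width ratio, and pad along whichever dimension has slack — can be sketched as follows (a minimal illustration; the function name and mode labels are mine, not from the patent):

```python
def choose_fill_mode(model_w, model_h, img_w, img_h):
    """Pick the padding direction by comparing width and height ratios.

    When the height ratio exceeds the width ratio, the width is the
    binding dimension, so the scaled image leaves empty rows at the top
    and bottom; otherwise empty columns remain on the left and right.
    """
    width_ratio = model_w / img_w
    height_ratio = model_h / img_h
    return "up-down" if height_ratio > width_ratio else "left-right"
```

For a 1920×1080 landscape image and a 640×640 model, the width ratio (1/3) is the smaller one, so padding goes on the top and bottom; for a portrait image the situation is mirrored.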
2. The method according to claim 1, wherein, if the filling mode determined for image filling of the image to be processed is the up-down filling mode, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:

w_scaled = W_model

h_scaled = ratio_w × h_img

offset_ud = (H_model − h_scaled) / 2

wherein:
w_scaled is the scaled width of the image to be processed;
W_model is the width of the standard model;
h_scaled is the scaled height of the image to be processed;
ratio_w is the width ratio between the standard model and the image to be processed;
h_img is the height of the image to be processed;
offset_ud is the filling offset, in pixel points, of the image to be processed for the up-down filling mode;
H_model is the height of the standard model.
3. The method according to claim 1, wherein, if the filling mode determined for image filling of the image to be processed is the left-right filling mode, the scaling size and the filling offset for scaling the image to be processed are calculated according to the following formulas:

w_scaled = ratio_h × w_img

h_scaled = H_model

offset_lr = (W_model − w_scaled) / 2

wherein:
w_scaled is the scaled width of the image to be processed;
ratio_h is the height ratio between the standard model and the image to be processed;
w_img is the width of the image to be processed;
h_scaled is the scaled height of the image to be processed;
H_model is the height of the standard model;
W_model is the width of the standard model;
offset_lr is the filling offset, in pixel points, of the image to be processed for the left-right filling mode.
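Claims 2 and 3 are symmetric: the binding dimension is scaled to match the standard model, the other dimension is scaled by the same ratio, and the offset is half the leftover space. A hedged sketch under that reading (the helper name and the rounding choices are mine, not the patent's):

```python
def scaled_size_and_offset(model_w, model_h, img_w, img_h):
    """Return (scaled_w, scaled_h, offset) per the claimed formulas.

    Up-down mode: scaled_w = model_w, scaled_h = width_ratio * img_h,
    offset = (model_h - scaled_h) / 2. Left-right mode swaps the roles
    of width and height.
    """
    width_ratio = model_w / img_w
    height_ratio = model_h / img_h
    if height_ratio > width_ratio:           # up-down filling mode
        scaled_w = model_w
        scaled_h = round(width_ratio * img_h)
        offset = (model_h - scaled_h) // 2   # rows of padding above the image
    else:                                    # left-right filling mode
        scaled_h = model_h
        scaled_w = round(height_ratio * img_w)
        offset = (model_w - scaled_w) // 2   # columns of padding left of the image
    return scaled_w, scaled_h, offset
```

For a 1920×1080 image and a 640×640 model this yields a 640×360 scaled image with 140 rows of padding above and below.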
4. The method according to claim 1, wherein, if the filling mode determined for image filling of the image to be processed is the up-down filling mode, the coordinate mapping relation between the scaled image and the image to be processed is calculated according to the following formulas:

x_img = x_scaled / ratio_w

y_img = y_scaled / ratio_w

wherein:
x_img is the abscissa of a pixel point of the image to be processed;
y_img is the ordinate of a pixel point of the image to be processed;
x_scaled is the abscissa of the corresponding pixel point of the scaled image;
y_scaled is the ordinate of the corresponding pixel point of the scaled image;
ratio_w is the ratio of the width of the standard model to the width of the image to be processed.
5. The method according to claim 1, wherein, if the filling mode determined for image filling of the image to be processed is the left-right filling mode, the coordinate mapping relation between the scaled image and the image to be processed is calculated according to the following formulas:

x_img = x_scaled / ratio_h

y_img = y_scaled / ratio_h

wherein:
x_img is the abscissa of a pixel point of the image to be processed;
y_img is the ordinate of a pixel point of the image to be processed;
x_scaled is the abscissa of the corresponding pixel point of the scaled image;
y_scaled is the ordinate of the corresponding pixel point of the scaled image;
ratio_h is the ratio of the height of the standard model to the height of the image to be processed.
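Putting claims 1 through 5 together, the whole pipeline is a letterbox-style resize: pick the binding ratio, pull each scaled pixel from the source through the inverse map src = dst / ratio, and paste the result into a pre-filled canvas at the offset. A self-contained sketch (pure-Python nearest-neighbour sampling; the pad colour 114 and the list-of-rows image layout are illustrative assumptions, not details from the patent):

```python
def letterbox(img, model_w, model_h, pad=(114, 114, 114)):
    """Scale `img` (a list of rows of pixel tuples) with its aspect ratio
    preserved, then centre it on a pad-coloured model_w x model_h canvas."""
    img_h, img_w = len(img), len(img[0])
    ratio = min(model_w / img_w, model_h / img_h)    # binding ratio
    scaled_w, scaled_h = round(img_w * ratio), round(img_h * ratio)
    dx = (model_w - scaled_w) // 2                   # left-right offset
    dy = (model_h - scaled_h) // 2                   # up-down offset
    out = [[pad] * model_w for _ in range(model_h)]  # pre-filled canvas
    for y in range(scaled_h):
        src_y = min(int(y / ratio), img_h - 1)       # inverse coordinate map
        for x in range(scaled_w):
            src_x = min(int(x / ratio), img_w - 1)
            out[y + dy][x + dx] = img[src_y][src_x]
    return out
```

A production version would use a vectorised resize (e.g. an image library) rather than per-pixel loops, but the coordinate arithmetic is the same.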
6. An image preprocessing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring an image to be processed, the size of the image to be processed, and the size of a standard model, each input by a user; the standard model is a model used for preprocessing the image to be processed;
a filling mode determining module, configured to determine a filling mode for image filling on the image to be processed according to a width ratio and a height ratio between the standard model and the image to be processed; the filling mode comprises an up-down filling mode and a left-right filling mode;
the calculation module is used for calculating the scaling size and the filling offset for scaling the image to be processed according to the determined filling mode corresponding to the image to be processed; the filling offset is the number of pixel points for filling the image to be processed;
the scaling module is used for scaling the image to be processed based on the calculated scaling size to obtain a scaled image;
the filling module is used for calculating a coordinate mapping relation between the coordinates of the pixel points of the scaled image and the coordinates of the pixel points of the image to be processed according to the filling offset, and filling the scaled image into the standard model based on the color values of each channel of the image to be processed and the coordinate mapping relation, to obtain a preprocessed target image;
the filling mode determining module is specifically configured to:
calculating the width ratio between the standard model and the image to be processed according to the width of the image to be processed and the width of the standard model;
calculating a height ratio between the standard model and the image to be processed according to the height of the image to be processed and the height of the standard model;
judging whether the height ratio is greater than the width ratio;
if so, determining the up-down filling mode as the filling mode for image filling of the image to be processed;
if not, determining the left-right filling mode as the filling mode for image filling of the image to be processed.
7. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions, when executed by the processor, performing the steps of the image pre-processing method according to any one of claims 1 to 5.
8. A computer storage medium, characterized in that the computer storage medium has stored thereon a computer program which, when being executed by a processor, performs the steps of the image pre-processing method according to any one of claims 1 to 5.
CN202210995819.0A 2022-08-19 2022-08-19 Image preprocessing method and device, electronic equipment and storage medium Active CN115063299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210995819.0A CN115063299B (en) 2022-08-19 2022-08-19 Image preprocessing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115063299A CN115063299A (en) 2022-09-16
CN115063299B 2022-11-18

Family

ID=83207979


Country Status (1)

Country Link
CN (1) CN115063299B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613570A (en) * 2020-12-29 2021-04-06 深圳云天励飞技术股份有限公司 Image detection method, image detection device, equipment and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP5389879B2 (en) * 2011-09-20 2014-01-15 株式会社日立製作所 Imaging apparatus, surveillance camera, and camera screen masking method
CN111292245A (en) * 2018-12-07 2020-06-16 北京字节跳动网络技术有限公司 Image processing method and device
CN109934773B (en) * 2019-03-13 2023-08-25 北京旷视科技有限公司 Image processing method, device, electronic equipment and computer readable medium
CN111402228B (en) * 2020-03-13 2021-05-07 腾讯科技(深圳)有限公司 Image detection method, device and computer readable storage medium
CN112215751A (en) * 2020-10-13 2021-01-12 Oppo广东移动通信有限公司 Image scaling method, image scaling device and terminal equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 711C, 7th Floor, Building A, Building 1, Yard 19, Ronghua Middle Road, Beijing Economic and Technological Development Zone, Beijing 100176

Patentee after: Beijing Zhongke Flux Technology Co.,Ltd.

Address before: Room 711C, 7th Floor, Building A, Building 1, Yard 19, Ronghua Middle Road, Beijing Economic and Technological Development Zone, Beijing 100176

Patentee before: Beijing Ruixin high throughput technology Co.,Ltd.