WO2021008205A1 - Image Processing - Google Patents

Image Processing

Info

Publication number
WO2021008205A1
WO2021008205A1 (PCT/CN2020/089298)
Authority
WO
WIPO (PCT)
Prior art keywords
image
reference image
image sequence
score
sequence
Prior art date
Application number
PCT/CN2020/089298
Other languages
English (en)
French (fr)
Inventor
周锴
张睿
Original Assignee
北京三快在线科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京三快在线科技有限公司
Publication of WO2021008205A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Definitions

  • the embodiments of the present disclosure relate to image processing.
  • Credentials refer to legal documents such as ID cards, passports, business licenses, etc.
  • some website platforms need to collect credential images to complete the entry of credential information, and the collection is usually performed by users shooting with smart terminals such as mobile phones.
  • because shooting conditions vary from user to user, the quality of the captured credential images inevitably differs widely, and unavoidable jitter during hand-held shooting distorts the captured credential image.
  • the embodiments of the present disclosure provide an image processing method, device, electronic equipment, and storage medium to calibrate the true aspect ratio of the image and avoid the distortion of the credential image.
  • an image processing method including: collecting an image sequence corresponding to a photographed object;
  • the step of collecting an image sequence corresponding to a photographed object includes:
  • the image sequence of the photographed object is acquired according to the second photographing parameter.
  • the step of obtaining the reference image in the image sequence includes:
  • the step of obtaining a reference image in the image sequence based on each of the score values includes:
  • the image with the largest score in the image sequence is acquired and used as the reference image.
  • the step of establishing a three-dimensional model based on the reference image and a number of continuous images in the image sequence that are continuous with the reference image includes:
  • the model building algorithm is used to build a three-dimensional model associated with the reference image.
  • the step of determining the aspect ratio of the shooting object according to the three-dimensional model includes:
  • the image quality parameter includes at least one of sharpness, edge retention coefficient, contrast-to-noise ratio, and average corner offset.
  • the method further includes: displaying the corrected image at the user terminal.
  • an image processing apparatus including:
  • the image sequence acquisition module is used to acquire the image sequence corresponding to the shooting object
  • a reference image acquisition module for acquiring a reference image in the image sequence
  • a three-dimensional model building module which is used to build a three-dimensional model based on the reference image and a number of continuous images in the image sequence that are continuous with the reference image;
  • the aspect ratio determining module is configured to determine the aspect ratio of the shooting object according to the three-dimensional model
  • the corrected image acquisition module is configured to perform projection correction on the reference image according to the aspect ratio, and acquire a corrected image corresponding to the shooting object.
  • the image sequence acquisition module includes:
  • the preset area shooting sub-module is used to use a preset camera to shoot the preset area according to the first shooting parameter
  • a shooting parameter generation sub-module configured to adjust the first shooting parameter to generate a second shooting parameter when the shooting object is detected in the shooting area of the camera;
  • the image sequence collection sub-module is used to collect the image sequence of the shooting object according to the second shooting parameter.
  • the reference image acquisition module includes:
  • An image quality parameter acquisition sub-module for acquiring image quality parameters associated with each image in the image sequence
  • the image quality parameter input sub-module is used to input each of the image quality parameters into a pre-trained logistic regression model
  • the image score value receiving sub-module is configured to receive the score value corresponding to each image output by the logistic regression model
  • the reference image acquisition sub-module is configured to acquire the reference image in the image sequence based on each of the score values.
  • the reference image acquisition sub-module includes:
  • the score value acquisition sub-module is used to acquire the first score value corresponding to the first frame image in the image sequence, and the second score values corresponding to the other frame images in the image sequence except the first frame image;
  • the score value comparison sub-module is used to compare the first score value with a first score threshold, and to compare each of the second score values with a second score threshold, wherein the first score threshold is greater than the second score threshold;
  • the offset value calculation sub-module is used to calculate the mean value of the corner offset of every two consecutive images in the image sequence when the first score value is greater than the first score threshold and each of the second score values is greater than the second score threshold;
  • the reference image selection sub-module is used to obtain the image with the largest score in the image sequence when the absolute value of each corner offset mean is less than the preset offset threshold, and to use that image as the reference image.
  • the three-dimensional model establishment module includes:
  • the reference image input sub-module is used to input the reference image and the several frames of continuous images into a model establishment algorithm
  • a three-dimensional model establishment sub-module is used to establish a three-dimensional model associated with the reference image using the model establishment algorithm.
  • the aspect ratio determining module includes:
  • a corner point coordinate acquisition sub-module which is used to acquire the corner point coordinates of the shooting object output by the model building algorithm
  • the aspect ratio calculation sub-module is used to calculate the aspect ratio of the shooting object according to the corner coordinates.
  • the image quality parameter includes at least one of sharpness, edge retention coefficient, contrast-to-noise ratio, and average corner offset.
  • the device further includes: a corrected image display module, configured to display the corrected image at the user terminal.
  • an electronic device including:
  • a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements one or more of the above-mentioned image processing methods when executing the program.
  • a readable storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the electronic device can execute one or more of the foregoing image processing methods.
  • the embodiments of the present disclosure provide an image processing method, device, electronic equipment, and storage medium. Acquire the reference image in the image sequence by collecting the image sequence corresponding to the photographed object, build a three-dimensional model based on the reference image and several consecutive frames of the reference image in the image sequence, and determine the aspect ratio of the photographed object according to the three-dimensional model. According to the aspect ratio, the reference image is projected and corrected to obtain the corrected image corresponding to the shooting object.
  • the embodiments of the present disclosure can perform projection correction on the reference image according to the true aspect ratio of the photographing object, so that the true aspect ratio of the reference image can be accurately corrected, and distortion of the image can be avoided.
  • FIG. 1 is a flowchart of steps of an image processing method provided by an embodiment of the present disclosure.
  • FIG. 2 is a flowchart of steps of an image processing method provided by an embodiment of the present disclosure.
  • FIG. 3 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • FIG. 5 schematically shows a block diagram of a computing processing device for executing a method according to an embodiment of the present disclosure.
  • FIG. 6 schematically shows a storage unit for holding or carrying program codes for implementing the method according to the embodiments of the present disclosure.
  • the image processing method may specifically include the following steps:
  • Step 101 Collect an image sequence corresponding to the photographed object.
  • the photographed object refers to the object used for image collection; it may be a document to be photographed (such as an ID card, business license, or passport) and can be determined according to the actual situation.
  • the photographed object especially refers to the user's certificate.
  • An image sequence refers to a sequence composed of a plurality of consecutive images obtained by continuously photographing a photographing object.
  • the collected image sequence may have 8 images or 10 images, which is not limited in the embodiment of the present disclosure.
  • the device for collecting the image sequence may be collected by a camera equipped on a mobile terminal (such as a mobile phone, etc.), or may be collected by a camera, which is not limited in the embodiments of the present disclosure.
  • step 102 is executed.
  • Step 102 Obtain a reference image in the image sequence.
  • the reference image refers to the image with the highest score in the image sequence. For example, if the image sequence contains image 1, image 2, and image 3, and the score of image 1 is 0.6, the score of image 2 is 0.7, and the score of image 3 is 0.4, then image 2 is used as the reference image in the image sequence.
  • the image quality parameters of each image in the image sequence can be acquired and input into the pre-trained logistic regression model to obtain the score value of each image, and the reference image can then be selected from the image sequence according to these score values.
  • After obtaining the reference image in the image sequence, step 103 is executed.
  • Step 103 Establish a three-dimensional model according to the reference image and a number of continuous images in the image sequence that are continuous with the reference image.
  • For example, the images contained in the image sequence are image 1, image 2, image 3, image 4, image 5, image 6, image 7, image 8, and image 9.
  • if image 5 is the reference image, then image 3, image 4, image 6, and image 7 are acquired as the continuous images.
  • the three-dimensional model can be constructed based on the reference image and the frames in the image sequence that are consecutive with the reference image. Specifically, these images can be input into the model building algorithm to reconstruct a sparse three-dimensional point cloud model. This process is described in detail in the second embodiment below and is not repeated here.
  • After establishing the three-dimensional model based on the reference image and the several frames in the image sequence that are continuous with the reference image, step 104 is executed.
  • Step 104 Determine the aspect ratio of the subject according to the three-dimensional model.
  • Step 105 Perform projection correction on the reference image according to the aspect ratio, and obtain a corrected image corresponding to the shooting object.
  • Projection correction refers to the use of the true aspect ratio of the subject to correct the aspect ratio of the reference image.
  • the reference image can be projection-corrected according to the real aspect ratio of the subject, so that the aspect ratio of the reference image is adjusted to be consistent with that of the subject, which avoids distortion in the obtained image.
  • the embodiments of the present disclosure perform projection correction on the reference image based on the true aspect ratio of the shooting object, thereby avoiding the problem of distortion in the finally obtained image.
  • the image processing method provided by the embodiments of the present disclosure collects the image sequence corresponding to the photographed object, acquires the reference image in the image sequence, establishes a three-dimensional model based on the reference image and several frames in the image sequence that are continuous with the reference image, determines the aspect ratio of the shooting object according to the three-dimensional model, performs projection correction on the reference image according to the aspect ratio, and obtains the corrected image corresponding to the shooting object.
  • the embodiments of the present disclosure can perform projection correction on the reference image according to the true aspect ratio of the shooting object, so that the true aspect ratio of the reference image can be accurately corrected, and distortion of the image can be avoided.
  • the image processing method may specifically include the following steps:
  • Step 201 Use a preset camera to shoot a preset area according to the first shooting parameter.
  • a preset camera refers to a camera used to collect images corresponding to the photographed subject; it may be the camera of a mobile terminal (such as a mobile phone) used by the user.
  • the photographed object refers to an object used for image collection, and the photographed object may be an object such as a document to be photographed (such as an ID card, a business license, a passport, etc.), which can be specifically determined according to actual conditions.
  • the photographed object especially refers to the user's certificate.
  • the preset area refers to the area where the subject is located.
  • the first shooting parameter refers to the shooting parameter used before the subject has been located. Understandably, shooting parameters usually include ISO sensitivity, aperture, exposure value, and other parameters, which can be determined according to the actual situation.
  • the preset camera may first be used to detect and locate the subject; that is, the preset area is photographed according to the first shooting parameter, and then step 202 is performed.
  • Step 202 In the case that the photographing object is detected in the photographing area of the camera, the first photographing parameter is adjusted to generate a second photographing parameter.
  • when the subject appears in a frame within the shooting area, that frame can be determined as the first image corresponding to the subject.
  • the second shooting parameter refers to a shooting parameter obtained after adjusting the first shooting parameter.
  • the shooting parameters of the preset camera can be adjusted, such as focus, metering, aperture adjustment, etc., to ensure the clarity of the captured subject.
  • the first shooting parameter may be adjusted to generate the second shooting parameter, and then step 203 is executed.
  • Step 203 Collect the image sequence of the photographed object according to the second photographing parameter.
  • An image sequence refers to a sequence composed of a plurality of consecutive images obtained by continuously photographing a photographing object.
  • the collected image sequence may have 8 images or 10 images, which is not limited in the embodiment of the present disclosure.
  • step 204 is executed.
  • Step 204 Obtain image quality parameters associated with each image in the image sequence.
  • the image quality parameter refers to a parameter used to express the image quality obtained by shooting.
  • the image quality parameters may include one or more of parameters such as sharpness, edge preservation coefficient, contrast-to-noise ratio, and corner offset average.
  • Image sharpness refers to the clarity of detail lines and their boundaries in the image.
  • the sharpness value of each image can be given by the preset camera after it captures the image sequence.
  • the edge preservation coefficient refers to the coefficient of the features retained by the contour of the image edge (such as the features of the pixels at the edge of the image).
  • the contrast-to-noise ratio refers to the difference in signal-to-noise ratio (SNR) between adjacent regions and structures in an image.
  • the mean corner offset refers to the mean vector of the corner point offsets; it is calculated from every two consecutive images in the image sequence.
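  • Two of these parameters lend themselves to a compact illustration. The sketch below is a NumPy-only sketch with hypothetical function names, not the patent's implementation: it uses the variance of a 3x3 Laplacian response as a common no-reference sharpness proxy, and computes the mean corner offset between two consecutive frames from pre-matched corner points.

```python
import numpy as np

def sharpness(gray):
    """Variance of a 3x3 Laplacian response: a common no-reference sharpness proxy."""
    g = gray.astype(float)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()

def corner_offset_mean(corners_prev, corners_cur):
    """Mean displacement vector of matched corner points between two consecutive frames."""
    d = np.asarray(corners_cur, float) - np.asarray(corners_prev, float)
    return d.mean(axis=0)
```

A blurred or flat image yields a near-zero Laplacian variance, while a detailed one yields a large value; the offset mean grows with hand-held jitter between frames.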
  • step 205 is executed.
  • Step 205 Input each of the image quality parameters into a pre-trained logistic regression model.
  • Logistic regression (LR) is a discriminative classification model; it can be used, for example, to predict whether a user will click on an advertisement or to determine the gender of a user.
  • In the embodiments of the present disclosure, LR is used to give the score value of each image in the image sequence containing the subject. The higher the score, the better the captured image meets the requirements; the lower the score, the less it meets the requirements, for example, when the image does not contain the whole subject.
  • the image quality parameters associated with each image in the image sequence can be input into the pre-trained LR in turn.
  • For example, the image sequence contains image A, and the image quality parameters of image A are: sharpness C, edge retention coefficient P, average corner offset SF, and contrast-to-noise ratio R; then C, P, SF, and R are input into the LR.
  • step 206 is executed.
  • Step 206 Receive a score value corresponding to each image output by the logistic regression model.
  • the score value refers to the score corresponding to each image in the image sequence.
  • the image sequence includes image 1 and image 2, the score value of image 1 is 0.6, and the score value of image 2 is 0.9.
  • the LR can calculate the score value of each image according to the image quality parameters of each image, and output the score value of each image.
  • the system may receive the score value of each image output by the LR, and execute step 207.
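  • As a sketch of how such scoring works, the function below applies a logistic regression model to one image's quality parameters. The weights and bias are placeholders for the pre-trained model parameters (the patent does not disclose the trained coefficients), and `lr_score` is a hypothetical name.

```python
import numpy as np

def lr_score(features, weights, bias):
    """Score one image with a logistic regression model.

    features: [sharpness, edge_retention, cnr, corner_offset_mean] of the image.
    weights and bias stand in for pre-trained model parameters (hypothetical here).
    """
    z = float(np.dot(weights, features)) + bias
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid maps the linear score into (0, 1)
```

Applied to every image, e.g. `scores = [lr_score(f, w, b) for f in quality_params]`, this yields the per-image score values described above.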
  • Step 207 Obtain a reference image in the image sequence based on each of the score values.
  • the reference image refers to the image with the highest score in the image sequence. For example, if the image sequence contains image 1, image 2, and image 3, and the score of image 1 is 0.6, the score of image 2 is 0.7, and the score of image 3 is 0.4, then image 2 is used as the reference image in the image sequence.
  • After obtaining the score value of each image in the image sequence, the reference image can be selected from the image sequence according to the score value corresponding to each image.
  • the foregoing step 207 may include:
  • Sub-step S1 Obtain a first score value corresponding to a first frame image in the image sequence, and a second score value corresponding to other frame images in the image sequence except the first frame image.
  • the first frame image refers to the frame in which the subject first appears while the preset camera is locating the subject.
  • the first score value refers to the score value of the first frame of image.
  • the second score value refers to the score value of other frame images in the image sequence except the first frame image.
  • For example, the image sequence includes image 1, image 2, and image 3 in turn, where image 1 is the first frame image.
  • the score value of image 1 is the first score value
  • the score values of image 2 and image 3 are both the second score value.
  • the first score value of the first frame image output by the LR and the second score values of the other frame images output by the LR can be obtained, and then sub-step S2 is executed.
  • Sub-step S2 Compare the first score value with a first score threshold, and compare each of the second score values with a second score threshold, wherein the first score threshold is greater than the second score threshold.
  • the first scoring threshold refers to a threshold preset by the business personnel for comparison with the first scoring value.
  • the second scoring threshold refers to a threshold preset by the business personnel for comparison with the second scoring value.
  • the first scoring threshold may be 0.8, 0.7, 0.6, etc., specifically, it may be determined according to business requirements, and the embodiment of the present disclosure does not limit the specific value of the first scoring threshold.
  • the second scoring threshold may be 0.6, 0.5, 0.4, etc., specifically, it may be determined according to business requirements, and the embodiment of the present disclosure does not limit the specific value of the second scoring threshold.
  • It should be noted that the first scoring threshold is greater than the second scoring threshold; for example, when the first scoring threshold is 0.8, the second scoring threshold must be less than 0.8 and can be 0.7, 0.65, and so on.
  • After obtaining the score values, the first score value can be compared with the first score threshold, and each second score value can be compared with the second score threshold, and then sub-step S3 is executed.
  • Sub-step S3 In the case where the first score value is greater than the first score threshold and each of the second score values is greater than the second score threshold, calculate the mean value of the corner offset of every two consecutive frames in the image sequence.
  • When the first score value is less than or equal to the first score threshold, or any second score value is less than or equal to the second score threshold, the captured image sequence of the subject does not meet the requirements, and the image sequence corresponding to the subject needs to be re-collected.
  • the mean value of the corner offsets of every two adjacent frames in the image sequence can be calculated. For example, the image sequence contains image 1, image 2, image 3, and image 4 in sequence; if the image sequence meets the above conditions, the corner offset mean of image 1 and image 2, the corner offset mean of image 2 and image 3, and the corner offset mean of image 3 and image 4 are calculated.
  • sub-step S4 is executed.
  • Sub-step S4 In the case that the absolute value of each corner offset mean is less than the preset offset threshold, obtain the image with the largest score in the image sequence and use that image as the reference image.
  • the preset offset threshold refers to a threshold preset by the business personnel for comparison with the average value of the corner offsets of two adjacent frames of images in the image sequence.
  • the preset offset threshold may be 8, 6, or 5, etc., specifically, it may be determined according to business requirements, and the specific value of the preset offset threshold is not limited in the embodiment of the present disclosure.
  • the image with the largest score in the image sequence can be obtained, and the image with the largest score can be used as the reference image.
  • For example, the image sequence includes image 1, image 2, image 3, and image 4, with scores of 0.5, 0.6, 0.8, and 0.7 respectively; image 3 has the largest score, so image 3 is used as the reference image.
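  • Sub-steps S1 through S4 can be sketched as follows. The thresholds are the illustrative values from the text, the corner offset means are treated as signed scalars for simplicity, and `select_reference` is a hypothetical name.

```python
def select_reference(scores, offset_means,
                     first_threshold=0.8, second_threshold=0.6,
                     offset_threshold=6.0):
    """Apply sub-steps S1-S4: return the index of the reference image, or None
    if the sequence must be re-collected.

    scores[0] is the first frame's score; offset_means[i] is the mean corner
    offset between frames i and i+1. Threshold values are only illustrative.
    """
    # S2: the first frame is held to the stricter threshold
    if scores[0] <= first_threshold:
        return None
    if any(s <= second_threshold for s in scores[1:]):
        return None
    # S3/S4: reject the sequence if any adjacent pair moved too much
    if any(abs(m) >= offset_threshold for m in offset_means):
        return None
    # S4: the highest-scoring frame becomes the reference image
    return max(range(len(scores)), key=scores.__getitem__)
```

Returning None models the "re-collect the image sequence" branch described above.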
  • After obtaining the reference image from the image sequence based on the score value of each image, step 208 is executed.
  • Step 208 Input the reference image and the several frames of continuous images into the model establishment algorithm.
  • For example, the images contained in the image sequence are image 1, image 2, image 3, image 4, image 5, image 6, image 7, image 8, and image 9.
  • if image 5 is the reference image, then image 3, image 4, image 6, and image 7 are acquired as the continuous images.
  • the model building algorithm is an offline algorithm for three-dimensional reconstruction from a collection of unordered images.
  • the reference image and the several frames of continuous images in the image sequence are input into the model building algorithm.
  • step 209 is executed.
  • Step 209 Use the model establishment algorithm to establish a three-dimensional model associated with the reference image.
  • the model building algorithm can establish a three-dimensional model associated with the reference image based on the reference image and the several frames of continuous images.
  • the model building algorithm may be the SFM (structure from motion) algorithm. The following describes in detail the process of using the SFM algorithm to build a three-dimensional model.
  • First, the focal length information is read from the reference image and the several frames of continuous images (it is required for bundle adjustment (BA) initialization). A feature extraction algorithm such as SIFT is then used to extract image features, and a kd-tree model is used to compute the Euclidean distance between the feature points of two images for feature matching, so as to find image pairs with a sufficient number of matched feature points. For each matched image pair, the epipolar geometry is computed, the fundamental matrix (F) is estimated, and the matching pairs are refined with the RANSAC algorithm. If detected feature points can be chained across such matching pairs, a trajectory is formed. After entering the structure-from-motion stage, the key first step is to select two image pairs to initialize the whole BA process.
  • step 210 is performed.
  • Step 210 Obtain the corner coordinates of the shooting object output by the model building algorithm.
  • the model building algorithm can output the corner coordinates of the shooting object according to the established three-dimensional model.
  • the system may execute step 211 after receiving the corner coordinates of the shooting object output by the model building algorithm.
  • Step 211 Calculate the aspect ratio of the shooting object according to the corner coordinates.
  • the aspect ratio refers to the ratio of the length to the width of the subject, that is, the true aspect ratio of the subject.
  • the true aspect ratio of the subject can be calculated according to the corner coordinates of the subject.
  • For example, if the coordinates of the four corners of the subject are (0,0,0), (3,2,0), (0,2,0), and (3,0,0), the length of the subject is 3, the width is 2, and the aspect ratio of the subject is 3:2.
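  • A minimal sketch of this computation follows. It assumes the four corner points are near-planar and that their cyclic order can be recovered by sorting around the centroid in the x-y plane, as in the example above; `aspect_ratio` is a hypothetical name.

```python
import numpy as np

def aspect_ratio(corners):
    """Length/width ratio of a near-planar document from its four 3-D corner points.

    The corners may arrive in any order; for this sketch they are sorted by angle
    around the centroid in the x-y plane, which assumes the document plane is not
    perpendicular to the x-y plane.
    """
    pts = np.asarray(corners, float)
    c = pts.mean(axis=0)
    order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
    pts = pts[order]
    edges = [np.linalg.norm(pts[(i + 1) % 4] - pts[i]) for i in range(4)]
    side_a = (edges[0] + edges[2]) / 2.0   # average opposite edges
    side_b = (edges[1] + edges[3]) / 2.0
    return max(side_a, side_b) / min(side_a, side_b)
```

For the corners in the example above this returns 1.5, i.e. the 3:2 aspect ratio.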
  • After calculating the true aspect ratio of the shooting object according to the corner coordinates, step 212 is executed.
  • Step 212 Perform projection correction on the reference image according to the aspect ratio, and obtain a corrected image corresponding to the shooting object.
  • Projection correction refers to the use of the true aspect ratio of the subject to correct the aspect ratio of the reference image.
  • the reference image can be projection-corrected according to the real aspect ratio of the subject, so that the aspect ratio of the reference image is adjusted to be consistent with that of the subject, which avoids distortion in the obtained image.
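  • Projection correction of this kind amounts to warping the reference image with the homography that maps the document's quadrilateral to an upright rectangle of the recovered aspect ratio. Below is a minimal NumPy sketch with hypothetical function names; in practice a library routine such as OpenCV's `getPerspectiveTransform`/`warpPerspective` would perform the warp itself.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: 3x3 homography H with dst ~ H @ src (4+ point pairs)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vh = np.linalg.svd(np.array(rows, float))
    H = Vh[-1].reshape(3, 3)
    return H / H[2, 2]

def rectify_homography(quad, ratio, height=400):
    """Homography mapping the document quadrilateral (ordered TL, TR, BR, BL in the
    reference image) to an upright rectangle whose width/height is the true ratio."""
    width = ratio * height
    rect = [(0.0, 0.0), (width, 0.0), (width, float(height)), (0.0, float(height))]
    return homography_dlt(quad, rect), (int(round(width)), height)
```

Warping every pixel of the reference image through this homography produces the corrected, undistorted document image.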
  • the obtained corrected image can be displayed in the display interface of the user terminal, providing the user with an undistorted corrected image that can be used for purposes such as certificate verification and information recognition.
  • the embodiments of the present disclosure perform projection correction on the reference image based on the true aspect ratio of the shooting object, thereby avoiding the problem of distortion in the finally obtained image.
  • the image processing method provided by the embodiments of the present disclosure can also fuse multiple image quality parameters to obtain the reference image; compared with a single evaluation standard, the optimal frame image can be selected more accurately.
  • the image processing apparatus may specifically include the following modules:
  • the image sequence acquisition module 310 is used to acquire the image sequence corresponding to the photographed object
  • a reference image acquisition module 320 configured to acquire a reference image in the image sequence
  • a three-dimensional model establishment module 330 configured to establish a three-dimensional model based on the reference image and a number of continuous images in the image sequence that are continuous with the reference image;
  • the aspect ratio determining module 340 is configured to determine the aspect ratio of the shooting object according to the three-dimensional model
  • the corrected image acquisition module 350 is configured to perform projection correction on the reference image according to the aspect ratio, and acquire a corrected image corresponding to the shooting object.
  • the image processing device collects the image sequence corresponding to the photographed object, acquires the reference image in the image sequence, establishes a three-dimensional model based on the reference image and several frames in the image sequence that are continuous with the reference image, determines the aspect ratio of the photographed object according to the three-dimensional model, performs projection correction on the reference image according to that aspect ratio, and obtains the corrected image corresponding to the photographed object.
  • the embodiments of the present disclosure can perform projection correction on the reference image according to the true aspect ratio of the shooting object, so that the true aspect ratio of the reference image can be accurately corrected, and distortion of the image can be avoided.
  • the image processing apparatus may specifically include the following modules:
  • the image sequence acquisition module 410 is configured to acquire an image sequence corresponding to the photographed object;
  • a reference image acquisition module 420 configured to acquire a reference image in the image sequence;
  • a three-dimensional model establishment module 430 configured to establish a three-dimensional model according to the reference image and a number of images in the image sequence that are continuous with the reference image;
  • the aspect ratio determining module 440 is configured to determine the aspect ratio of the photographed object according to the three-dimensional model;
  • the corrected image acquisition module 450 is configured to perform projection correction on the reference image according to the aspect ratio, and acquire a corrected image corresponding to the photographed object.
  • the image sequence acquisition module 410 includes:
  • the preset area shooting sub-module 411 is configured to use a preset camera to shoot the preset area according to the first shooting parameter;
  • a shooting parameter generation sub-module 412 is configured to adjust the first shooting parameter to generate a second shooting parameter when the photographed object is detected in the shooting area of the camera;
  • an image sequence acquisition sub-module 413 is configured to acquire the image sequence of the photographed object according to the second shooting parameter.
  • the reference image acquisition module 420 includes:
  • an image quality parameter acquisition sub-module 421, configured to acquire the image quality parameters associated with each image in the image sequence;
  • the image quality parameter input sub-module 422 is configured to input each of the image quality parameters into a pre-trained logistic regression model;
  • the image score value receiving sub-module 423 is configured to receive the score value corresponding to each image output by the logistic regression model;
  • the reference image acquisition sub-module 424 is configured to acquire a reference image in the image sequence based on each of the score values.
  • the reference image acquisition sub-module 424 includes:
  • the score value acquisition sub-module is used to acquire the first score value corresponding to the first frame in the image sequence, and the second score values corresponding to the frames in the image sequence other than the first frame;
  • the score value comparison sub-module is used to compare the first score value with a first score threshold, and to compare each of the second score values with a second score threshold, wherein the first score threshold is greater than the second score threshold;
  • the offset value calculation sub-module is used to calculate, in the case where the first score value is greater than the first score threshold and each of the second score values is greater than the second score threshold, the mean corner offset of consecutive pairs of images in the image sequence;
  • the reference image selection sub-module is used to obtain, in the case where the absolute value of each mean corner offset is less than a preset offset threshold, the image with the largest score in the image sequence, and to take the image with the largest score as the reference image.
  • the three-dimensional model establishment module 430 includes:
  • the reference image input sub-module 431 is configured to input the reference image and the several frames of continuous images into a model establishment algorithm;
  • a three-dimensional model establishment sub-module 432 is configured to establish a three-dimensional model associated with the reference image using the model establishment algorithm.
  • the aspect ratio determining module 440 includes:
  • a corner coordinate acquisition sub-module 441, configured to acquire the corner coordinates of the photographed object output by the model establishment algorithm;
  • an aspect ratio calculation sub-module 442, configured to calculate the aspect ratio of the photographed object from the corner coordinates.
  • the image quality parameter includes at least one of sharpness, edge-preservation coefficient, contrast-to-noise ratio, and mean corner offset.
  • the device further includes: a corrected image display module, configured to display the corrected image at the user terminal.
  • in addition to the beneficial effects of the image processing device provided in the third embodiment, the image processing device provided by this embodiment of the present disclosure can also acquire the reference image by fusing multiple image quality parameters; compared with a single evaluation criterion, the optimal frame can be captured more accurately.
  • the device embodiments described above are merely illustrative.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by their combination.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the computing processing device according to the embodiments of the present invention.
  • the present invention can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals. Such signals can be downloaded from Internet websites, or provided on carrier signals, or provided in any other form.
  • FIG. 5 shows a computing processing device that can implement the method according to the present invention.
  • the computing processing device traditionally includes a processor 1010 and a computer program product in the form of a memory 1020 or a computer readable medium.
  • the memory 1020 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the memory 1020 has a storage space 1030 for executing program codes 1031 of any method steps in the above methods.
  • the storage space 1030 for program codes may include various program codes 1031 for implementing various steps in the above method. These program codes can be read out from or written into one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks. Such computer program products are usually portable or fixed storage units as described with reference to FIG. 6.
  • the storage unit may have storage segments, storage spaces, etc., arranged similarly to the storage 1020 in the computing processing device of FIG. 5.
  • the program code can, for example, be compressed in an appropriate form.
  • the storage unit includes computer-readable codes 1031', that is, codes that can be read by, for example, a processor such as 1010. These codes, when run by a computing processing device, cause the computing processing device to execute the method described above. The various steps.
  • any reference signs placed between parentheses should not be construed as a limitation to the claims.
  • the word “comprising” does not exclude the presence of elements or steps not listed in the claims.
  • the word “a” or “an” preceding an element does not exclude the presence of multiple such elements.
  • the invention can be implemented by means of hardware comprising several different elements and by means of a suitably programmed computer. In the unit claims enumerating several devices, several of these devices may be embodied by the same hardware item.
  • the use of the words first, second, and third, etc. do not indicate any order. These words can be interpreted as names.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium. The method includes: capturing an image sequence corresponding to a photographed subject; acquiring a reference image in the image sequence; establishing a three-dimensional model based on the reference image and several frames in the image sequence that are consecutive with the reference image; determining an aspect ratio of the subject based on the three-dimensional model; and performing projection correction on the reference image according to the aspect ratio to obtain a corrected image corresponding to the subject.

Description

Image processing
This application claims priority to the Chinese patent application No. 201910642545.5, entitled "Image processing method and apparatus, electronic device and storage medium", filed with the Chinese Patent Office on July 16, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to image processing.
Background
Certificates are statutory documents such as identity cards, passports, and business licenses. At present, some website platforms need to collect certificate images to complete the entry of certificate information. A certificate image is usually obtained by the user photographing the certificate with a smart terminal such as a mobile phone. Because users' photographing skills are limited, the quality of the captured certificate images inevitably varies considerably, and the shaking that inevitably occurs while the user is shooting causes distortion in the captured certificate images.
Summary
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium, for calibrating the true aspect ratio of an image and avoiding distortion of certificate images.
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided, including: capturing an image sequence corresponding to a photographed subject;
acquiring a reference image in the image sequence;
establishing a three-dimensional model based on the reference image and several frames in the image sequence that are consecutive with the reference image;
determining an aspect ratio of the subject based on the three-dimensional model; and
performing projection correction on the reference image according to the aspect ratio to obtain a corrected image corresponding to the subject.
In a specific implementation of the present disclosure, the step of capturing an image sequence corresponding to the subject includes:
photographing a preset area with a preset camera according to first shooting parameters;
when the subject is detected within the shooting area of the camera, adjusting the first shooting parameters to generate second shooting parameters; and
capturing the image sequence of the subject according to the second shooting parameters.
In a specific implementation of the present disclosure, the step of acquiring a reference image in the image sequence includes:
acquiring image quality parameters associated with each image in the image sequence;
inputting each of the image quality parameters into a pre-trained logistic regression model;
receiving a score output by the logistic regression model for each image; and
acquiring the reference image in the image sequence based on the scores.
In a specific implementation of the present disclosure, the step of acquiring the reference image in the image sequence based on the scores includes:
acquiring a first score corresponding to the first frame in the image sequence, and second scores corresponding to the frames in the image sequence other than the first frame;
comparing the first score with a first score threshold, and comparing each second score with a second score threshold, the first score threshold being greater than the second score threshold;
when the first score is greater than the first score threshold and every second score is greater than the second score threshold, computing the mean corner offset between consecutive frames in the image sequence; and
when the absolute value of each mean corner offset is less than a preset offset threshold, acquiring the image with the highest score in the image sequence and taking it as the reference image.
In a specific implementation of the present disclosure, the step of establishing a three-dimensional model based on the reference image and several frames in the image sequence that are consecutive with the reference image includes:
inputting the reference image and the several consecutive frames into a model-building algorithm; and
establishing a three-dimensional model associated with the reference image using the model-building algorithm.
In a specific implementation of the present disclosure, the step of determining the aspect ratio of the subject based on the three-dimensional model includes:
acquiring corner coordinates of the subject output by the model-building algorithm; and
computing the aspect ratio of the subject from the corner coordinates.
In a specific implementation of the present disclosure, the image quality parameters include at least one of sharpness, edge-preservation coefficient, contrast-to-noise ratio, and mean corner offset.
In a specific implementation of the present disclosure, the method further includes: displaying the corrected image at a user terminal.
According to a second aspect of the embodiments of the present disclosure, an image processing apparatus is provided, including:
an image sequence capture module, configured to capture an image sequence corresponding to a photographed subject;
a reference image acquisition module, configured to acquire a reference image in the image sequence;
a three-dimensional model building module, configured to establish a three-dimensional model based on the reference image and several frames in the image sequence that are consecutive with the reference image;
an aspect ratio determination module, configured to determine an aspect ratio of the subject based on the three-dimensional model; and
a corrected image acquisition module, configured to perform projection correction on the reference image according to the aspect ratio to obtain a corrected image corresponding to the subject.
In a specific implementation of the present disclosure, the image sequence capture module includes:
a preset area shooting sub-module, configured to photograph a preset area with a preset camera according to first shooting parameters;
a shooting parameter generation sub-module, configured to adjust the first shooting parameters to generate second shooting parameters when the subject is detected within the shooting area of the camera; and
an image sequence capture sub-module, configured to capture the image sequence of the subject according to the second shooting parameters.
In a specific implementation of the present disclosure, the reference image acquisition module includes:
an image quality parameter acquisition sub-module, configured to acquire the image quality parameters associated with each image in the image sequence;
an image quality parameter input sub-module, configured to input each of the image quality parameters into a pre-trained logistic regression model;
an image score receiving sub-module, configured to receive the score output by the logistic regression model for each image; and
a reference image acquisition sub-module, configured to acquire the reference image in the image sequence based on the scores.
In a specific implementation of the present disclosure, the reference image acquisition sub-module includes:
a score acquisition sub-module, configured to acquire a first score corresponding to the first frame in the image sequence and second scores corresponding to the frames in the image sequence other than the first frame;
a score comparison sub-module, configured to compare the first score with a first score threshold and compare each second score with a second score threshold, the first score threshold being greater than the second score threshold;
an offset calculation sub-module, configured to compute, when the first score is greater than the first score threshold and every second score is greater than the second score threshold, the mean corner offset between consecutive frames in the image sequence; and
a reference image selection sub-module, configured to acquire, when the absolute value of each mean corner offset is less than a preset offset threshold, the image with the highest score in the image sequence and take it as the reference image.
In a specific implementation of the present disclosure, the three-dimensional model building module includes:
a reference image input sub-module, configured to input the reference image and the several consecutive frames into a model-building algorithm; and
a three-dimensional model building sub-module, configured to establish a three-dimensional model associated with the reference image using the model-building algorithm.
In a specific implementation of the present disclosure, the aspect ratio determination module includes:
a corner coordinate acquisition sub-module, configured to acquire the corner coordinates of the subject output by the model-building algorithm; and
an aspect ratio calculation sub-module, configured to compute the aspect ratio of the subject from the corner coordinates.
In a specific implementation of the present disclosure, the image quality parameters include at least one of sharpness, edge-preservation coefficient, contrast-to-noise ratio, and mean corner offset.
In a specific implementation of the present disclosure, the apparatus further includes: a corrected image display module, configured to display the corrected image at a user terminal.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements one or more of the image processing methods described above.
According to a fourth aspect of the embodiments of the present disclosure, a readable storage medium is provided; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform one or more of the image processing methods described above.
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium. An image sequence corresponding to a photographed subject is captured; a reference image in the sequence is acquired; a three-dimensional model is established based on the reference image and several frames in the sequence that are consecutive with it; the aspect ratio of the subject is determined from the model; and the reference image is projection-corrected according to this aspect ratio to obtain the corrected image corresponding to the subject. Because the reference image is projection-corrected according to the subject's true aspect ratio, the true aspect ratio of the reference image can be accurately restored and image distortion is avoided.
The above description is merely an overview of the technical solution of the present invention. To make the technical means of the present invention clearer so that it can be implemented according to the contents of the specification, and to make the above and other objects, features, and advantages of the present invention more apparent, specific embodiments of the present invention are set forth below.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of the steps of an image processing method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of the steps of an image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 5 schematically shows a block diagram of a computing processing device for executing the method according to an embodiment of the present disclosure; and
FIG. 6 schematically shows a storage unit for holding or carrying program code implementing the method according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment 1
Referring to FIG. 1, a flowchart of the steps of an image processing method provided by an embodiment of the present disclosure is shown. The image processing method may specifically include the following steps:
Step 101: capture an image sequence corresponding to a photographed subject.
In the embodiments of the present disclosure, the subject is the object whose image is to be captured; it may be a certificate to be photographed (such as an identity card, business license, or passport), and the specific subject depends on the actual situation.
Of course, in the present disclosure the subject particularly refers to a user's certificate.
An image sequence is a sequence of consecutive images obtained by continuously photographing the subject; the captured sequence may contain 8 images or 10 images, which is not limited by the embodiments of the present disclosure.
The device capturing the image sequence may be a camera built into a mobile terminal (such as a mobile phone) or a standalone camera, which is likewise not limited by the embodiments of the present disclosure.
When capturing the image sequence corresponding to the subject, the image capture device may first locate the subject, then adjust its shooting parameters after locating it, and perform the shooting function to obtain the image sequence corresponding to the subject.
The capture process of the image sequence is described in detail in Embodiment 2 below and is not repeated here.
After the image sequence corresponding to the subject is captured, step 102 is executed.
Step 102: acquire a reference image in the image sequence.
The reference image is the frame with the highest score in the image sequence. For example, if the image sequence contains image 1, image 2, and image 3 with scores of 0.6, 0.7, and 0.4 respectively, then image 2 is taken as the reference image.
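The selection in the example above reduces to taking the maximum over the scores; here is a minimal sketch using the hypothetical image names and score values from the example (not data from the disclosure):

```python
# Toy illustration: the reference image is the frame with the highest score.
# Image names and score values are the hypothetical ones from the example above.
scores = {"image1": 0.6, "image2": 0.7, "image3": 0.4}

reference = max(scores, key=scores.get)
print(reference)  # image2
```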
It can be understood that the above example is given merely for a better understanding of the technical solution of the embodiments of the present disclosure and is not the sole limitation on them.
After the image sequence is acquired, the image quality parameters of each image in the sequence can be obtained and input into a pre-trained logistic regression model to obtain a score for each image, and the reference image is then selected from the image sequence according to these scores.
The process of acquiring the reference image is described in detail in Embodiment 2 below and is not repeated here.
After the reference image in the image sequence is acquired, step 103 is executed.
Step 103: establish a three-dimensional model based on the reference image and several frames in the image sequence that are consecutive with the reference image.
After the reference image in the image sequence is acquired, several frames consecutive with it can be obtained from the sequence. For example, if the sequence consists of image 1 through image 9 in order and image 5 is the reference image, then when 5 frames are needed, images 6, 7, 3, and 4 are obtained (together with the reference image).
It can be understood that the above example is given merely for a better understanding of the technical solution of the embodiments of the present disclosure and is not the sole limitation on them.
After the reference image and the several frames consecutive with it are obtained, a three-dimensional model can be constructed from them; specifically, these images can be input into a model-building algorithm to reconstruct a sparse three-dimensional point cloud model. This process is described in detail in Embodiment 2 below and is not repeated here.
After the three-dimensional model is established based on the reference image and the several consecutive frames, step 104 is executed.
Step 104: determine the aspect ratio of the subject based on the three-dimensional model.
The aspect ratio is the true length-to-width ratio of the subject. For example, when the subject is an identity card, the aspect ratio is the length-to-width ratio of the identity card; if the card is 10 cm long and 8 cm wide, its true aspect ratio is 10:8 = 5:4.
It can be understood that the above example is given merely for a better understanding of the technical solution of the embodiments of the present disclosure and is not the sole limitation on them.
After the three-dimensional model is established from the reference image and the consecutive frames, the corner coordinates of the subject can be obtained from the model, and the true aspect ratio of the subject is then computed from these corner coordinates.
The specific implementation of determining the aspect ratio from the three-dimensional model is described in detail in Embodiment 2 below and is not repeated here.
After the aspect ratio of the subject is determined based on the three-dimensional model, step 105 is executed.
Step 105: perform projection correction on the reference image according to the aspect ratio to obtain a corrected image corresponding to the subject.
Projection correction means correcting the aspect ratio of the reference image using the true aspect ratio of the subject.
After the true aspect ratio of the subject is obtained, the reference image can be projection-corrected accordingly, adjusting its aspect ratio to match that of the subject and thereby avoiding distortion in the resulting image.
By projection-correcting the reference image with the true aspect ratio of the subject, the embodiments of the present disclosure avoid distortion in the finally obtained image.
In the image processing method provided by the embodiments of the present disclosure, an image sequence corresponding to the subject is captured; a reference image in the sequence is acquired; a three-dimensional model is established based on the reference image and several frames consecutive with it; the aspect ratio of the subject is determined from the model; and the reference image is projection-corrected according to this aspect ratio to obtain the corrected image corresponding to the subject. Because the correction uses the subject's true aspect ratio, the true aspect ratio of the reference image can be accurately restored and image distortion is avoided.
Embodiment 2
Referring to FIG. 2, a flowchart of the steps of an image processing method provided by an embodiment of the present disclosure is shown. The image processing method may specifically include the following steps:
Step 201: photograph a preset area with a preset camera according to first shooting parameters.
In the embodiments of the present disclosure, the preset camera is the camera used to capture images corresponding to the subject; it may be the camera of the mobile terminal (such as a mobile phone) used by the user.
The subject is the object whose image is to be captured; it may be a certificate to be photographed (such as an identity card, business license, or passport), and the specific subject depends on the actual situation.
Of course, in the present disclosure the subject particularly refers to a user's certificate.
The preset area is the area where the subject is located, and the first shooting parameters are the parameters used before the subject has been located. It can be understood that shooting parameters typically include ISO sensitivity, aperture, exposure value, and so on, depending on the actual situation.
When the user shoots video of the area where the subject is located with the preset camera, the camera can first be used to detect and locate the subject, i.e., to photograph the preset area according to the first shooting parameters; then step 202 is executed.
Step 202: when the subject is detected within the shooting area of the camera, adjust the first shooting parameters to generate second shooting parameters.
During video shooting, when some frame first locates the subject, that frame can be determined as the first frame corresponding to the subject.
The second shooting parameters are the shooting parameters obtained by adjusting the first shooting parameters.
When the first frame corresponding to the subject is located, the shooting parameters of the preset camera can be adjusted, such as focus, metering, and aperture, to ensure the sharpness of the captured subject.
After the subject is detected within the shooting area of the camera, the first shooting parameters can be adjusted to generate the second shooting parameters; then step 203 is executed.
Step 203: capture the image sequence of the subject according to the second shooting parameters.
An image sequence is a sequence of consecutive images obtained by continuously photographing the subject; the captured sequence may contain 8 images or 10 images, which is not limited by the embodiments of the present disclosure.
After the shooting parameters of the preset camera are adjusted, video capture of the subject continues with the adjusted second shooting parameters to obtain the image sequence corresponding to the subject.
It can be understood that, from the captured video, the subsequently shot portion can be extracted starting at the position in the video of the first frame in which the subject appears, and the image sequence corresponding to the subject is extracted from that portion.
After the image sequence of the subject is captured according to the second shooting parameters, step 204 is executed.
Step 204: acquire the image quality parameters associated with each image in the image sequence.
Image quality parameters are parameters representing the quality of the captured images.
In the present disclosure, the image quality parameters may include one or more of sharpness, edge-preservation coefficient, contrast-to-noise ratio, and mean corner offset.
Sharpness refers to the clarity of the fine detail and its boundaries in an image; it can be a value assigned to each image of the sequence, after capture by the preset camera, according to the clarity of that image.
The edge-preservation coefficient is a coefficient of the features retained at image edge contours (such as the pixels at image edges).
The contrast-to-noise ratio is the difference in SNR (signal-to-noise ratio) between adjacent tissues or structures in the image.
The mean corner offset is the mean vector of corner point displacements; it can be understood that the mean corner offset is computed from two consecutive images in the sequence.
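As a minimal sketch of this quantity (an illustrative helper, not code from the disclosure), the mean corner offset between two consecutive frames can be computed as the average displacement vector of index-matched corner points:

```python
def mean_corner_offset(corners_prev, corners_curr):
    # Mean displacement vector of matched corner points between two
    # consecutive frames; each argument is a list of (x, y) tuples
    # matched by index. Illustrative helper, not code from the disclosure.
    n = len(corners_prev)
    dx = sum(c[0] - p[0] for p, c in zip(corners_prev, corners_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(corners_prev, corners_curr)) / n
    return (dx, dy)

offset = mean_corner_offset([(0, 0), (2, 2)], [(1, 1), (3, 3)])
print(offset)  # (1.0, 1.0)
```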
Of course, in practical applications the image quality parameters may also be combined with other parameters; the parameters mentioned above are listed merely for a better understanding of the technical solution of the embodiments of the present disclosure and are not the sole limitation on them.
It can be understood that the above image quality parameters are all well known in the art; for the specific process of obtaining the image quality parameters of each image in the sequence, reference may be made to existing acquisition methods, and details are not repeated here.
After the image quality parameters associated with each image in the sequence are acquired, step 205 is executed.
Step 205: input each of the image quality parameters into a pre-trained logistic regression model.
A logistic regression (LR) model is a classification model, a type of discriminative model, that can be used, for example, to predict whether a user will click an advertisement or to determine a user's gender.
In the present disclosure, the LR model is used to give each image in the sequence a score for containing the subject: the higher the score, the better the obtained image meets the requirements; the lower the score, the worse the captured image meets them, for example, when the image does not fully contain the subject.
After the image quality parameters associated with each image in the sequence are obtained, the quality parameters of each image can be input in turn into the pre-trained LR model. For example, if the sequence contains image A with sharpness C, edge-preservation coefficient P, mean corner offset SF, and contrast-to-noise ratio R, then C, P, SF, and R are input into the LR model.
It can be understood that the above example is given merely for a better understanding of the technical solution of the embodiments of the present disclosure and is not the sole limitation on them.
After each image quality parameter is input into the pre-trained logistic regression model, step 206 is executed.
Step 206: receive the score output by the logistic regression model for each image.
The score is the rating corresponding to each image in the sequence; for example, the sequence includes image 1 and image 2, with image 1 scored 0.6 and image 2 scored 0.9.
After the image quality parameters of each image are input into the pre-trained LR model, the LR model can compute a score for each image based on its quality parameters and output the score of each image.
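A logistic regression scorer of this kind reduces to a sigmoid over a weighted sum of the quality parameters. The sketch below uses placeholder weights and bias rather than trained values, which the disclosure does not specify:

```python
import math

def lr_score(features, weights, bias):
    # Score of one frame under a logistic regression model: sigmoid of
    # the weighted sum of its quality parameters. The weights and bias
    # here are placeholders, not trained values from the disclosure.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# features: [sharpness, edge-preservation, contrast-to-noise ratio, corner-offset mean]
score = lr_score([0.8, 0.7, 0.6, 0.1], [1.0, 1.0, 1.0, -1.0], -1.0)
print(round(score, 3))  # 0.731
```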
The system can then receive the score of each image output by the LR model and execute step 207.
Step 207: acquire the reference image in the image sequence based on the scores.
The reference image is the frame with the highest score in the image sequence. For example, if the image sequence contains image 1, image 2, and image 3 with scores of 0.6, 0.7, and 0.4 respectively, then image 2 is taken as the reference image.
It can be understood that the above example is given merely for a better understanding of the technical solution of the embodiments of the present disclosure and is not the sole limitation on them.
After the score of each image in the sequence is obtained, the reference image can be selected from the sequence according to the score corresponding to each image.
The selection process of the reference image may proceed as in the detailed description of the following specific implementation.
In a specific implementation of the present disclosure, step 207 may include:
Sub-step S1: acquire the first score corresponding to the first frame in the image sequence, and the second scores corresponding to the frames other than the first frame.
In the embodiments of the present disclosure, the first frame is the frame in which the subject first appears while the preset camera is being used to locate it.
The first score is the score of the first frame.
The second scores are the scores of the frames in the sequence other than the first frame. For example, if the sequence contains image 1, image 2, and image 3 in order, and image 1 is the first frame, then the score of image 1 is the first score, and the scores of image 2 and image 3 are both second scores.
After the first frame in the sequence is identified, the first score of the first frame output by the LR model can be acquired together with the second scores of the other frames output by the LR model, and sub-step S2 is executed.
Sub-step S2: compare the first score with a first score threshold, and compare each second score with a second score threshold, the first score threshold being greater than the second score threshold.
The first score threshold is a threshold preset by business personnel for comparison with the first score.
The second score threshold is a threshold preset by business personnel for comparison with the second scores.
The first score threshold may be 0.8, 0.7, 0.6, and so on, depending on business requirements; the embodiments of the present disclosure do not limit its specific value.
The second score threshold may be 0.6, 0.5, 0.4, and so on, depending on business requirements; the embodiments of the present disclosure do not limit its specific value.
The first score threshold is greater than the second score threshold; for example, if the first score threshold is 0.8, the second score threshold must be smaller, such as 0.7 or 0.65.
It can be understood that the above example is given merely for a better understanding of the technical solution of the embodiments of the present disclosure and is not the sole limitation on them.
After the first score of the first frame and the second scores of the other frames are obtained, the first score can be compared with the first score threshold and each second score with the second score threshold; then sub-step S3 is executed.
Sub-step S3: when the first score is greater than the first score threshold and every second score is greater than the second score threshold, compute the mean corner offset between consecutive frames in the image sequence.
In the present disclosure, if the first score is less than or equal to the first score threshold, or any of the second scores is less than or equal to the second score threshold, the captured image sequence of the subject does not meet the requirements and the sequence must be re-captured.
When the first score is greater than the first score threshold and every second score is greater than the second score threshold, the mean corner offset of each pair of adjacent frames can be computed. For example, if the sequence contains images 1 through 4 in order and the conditions are satisfied, the mean corner offsets of images 1 and 2, of images 2 and 3, and of images 3 and 4 are computed.
It can be understood that the above example is given merely for a better understanding of the technical solution of the embodiments of the present disclosure and is not the sole limitation on them.
After the mean corner offsets of consecutive frames in the sequence are computed, sub-step S4 is executed.
Sub-step S4: when the absolute value of each mean corner offset is less than a preset offset threshold, acquire the image with the highest score in the sequence and take it as the reference image.
The preset offset threshold is a threshold preset by business personnel for comparison with the mean corner offset of adjacent frames in the sequence.
The preset offset threshold may be 8, 6, 5, and so on, depending on business requirements; the embodiments of the present disclosure do not limit its specific value.
When the absolute value of the mean corner offset of two adjacent frames is less than the preset offset threshold, the corner displacement between the two frames is small and they satisfy the preset offset condition.
When the absolute value of every mean corner offset is below the preset offset threshold, the image with the highest score in the sequence can be obtained and taken as the reference image. For example, if the sequence contains images 1 through 4 with scores of 0.5, 0.6, 0.8, and 0.7, then image 3 is taken as the reference image.
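Sub-steps S1 to S4 amount to the following selection rule. This is an illustrative sketch with hypothetical thresholds (the disclosure leaves their values to business requirements); it returns None when the sequence must be re-captured:

```python
def select_reference(scores, offsets, t_first, t_rest, t_offset):
    # Reference-frame selection per sub-steps S1-S4 (illustrative sketch).
    # scores[0] is the first frame's score; offsets[i] is the mean corner
    # offset between frames i and i+1. Returns the index of the reference
    # frame, or None if the sequence must be re-captured.
    if scores[0] <= t_first:                      # S2: first-frame check
        return None
    if any(s <= t_rest for s in scores[1:]):      # S2: remaining frames
        return None
    if any(abs(o) >= t_offset for o in offsets):  # S3/S4: offset check
        return None
    return max(range(len(scores)), key=lambda i: scores[i])  # S4: best frame

# Hypothetical values: score thresholds 0.8 / 0.4, offset limit 8.
print(select_reference([0.85, 0.6, 0.9, 0.7], [1.2, -0.5, 2.0], 0.8, 0.4, 8))  # 2
```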
It can be understood that the above example is given merely for a better understanding of the solution of the embodiments of the present disclosure and is not the sole limitation on them.
After the reference image is obtained from the sequence based on the scores of the images, step 208 is executed.
Step 208: input the reference image and the several consecutive frames into a model-building algorithm.
After the reference image in the sequence is acquired, several frames consecutive with it can be obtained from the sequence. For example, if the sequence consists of image 1 through image 9 in order and image 5 is the reference image, then when 5 frames are needed, images 6, 7, 3, and 4 are obtained (together with the reference image).
It can be understood that the above example is given merely for a better understanding of the technical solution of the embodiments of the present disclosure and is not the sole limitation on them.
The model-building algorithm is an offline algorithm for three-dimensional reconstruction from collections of unordered images; the reference image and the several consecutive images obtained from the sequence are input into it.
After the reference image and the several consecutive frames are input into the model-building algorithm, step 209 is executed.
Step 209: establish a three-dimensional model associated with the reference image using the model-building algorithm.
After the reference image and the several consecutive frames are input, the model-building algorithm can establish a three-dimensional model associated with the reference image from them.
In the present disclosure, the model-building algorithm may be an SfM (structure-from-motion) algorithm; the process of building the three-dimensional model with the SfM algorithm is described in detail below.
First, focal length information is extracted from the reference image and the consecutive frames (needed later to initialize bundle adjustment, BA). Feature extraction algorithms such as SIFT are then used to extract image features, and a kd-tree model is used to compute the Euclidean distances between the feature points of two images and match them, finding image pairs whose number of matched feature points meets the requirement. For each matched image pair, the epipolar geometry is computed, the fundamental (F) matrix is estimated, and the matches are refined and improved with the RANSAC algorithm. If a feature point can be passed along in a chain through such matched pairs and keeps being detected, a track is formed. The structure-from-motion stage follows: the key first step is selecting two images to initialize the whole BA process. A first BA is run on the two initially selected images, then new images are added in a loop, each followed by a new BA, until no suitable images remain to be added and BA ends. This yields the estimated camera parameters and the scene geometry, i.e., a sparse 3D point cloud, which is the three-dimensional model.
After the three-dimensional model associated with the reference image is established with the model-building algorithm, step 210 is executed.
Step 210: acquire the corner coordinates of the subject output by the model-building algorithm.
After the three-dimensional model associated with the reference image is established, the model-building algorithm can output the corner coordinates of the subject based on the established model.
After the system receives the corner coordinates of the subject output by the model-building algorithm, step 211 is executed.
Step 211: compute the aspect ratio of the subject from the corner coordinates.
The aspect ratio is the ratio of the subject's length to its width, i.e., the subject's true proportions.
After the corner coordinates of the subject are obtained, its true aspect ratio can be computed from them. For example, if the coordinates of the four corners of the subject are (0, 0, 0), (3, 2, 0), (0, 2, 0), and (3, 0, 0), it follows that the subject's length is 3 and its width is 2, so its aspect ratio is 3:2.
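For the axis-aligned example above, the computation is direct. This is an illustrative helper only; a general implementation would measure edge lengths between matched corners rather than coordinate extents:

```python
def aspect_ratio(corners):
    # Length-to-width ratio of a rectangular subject from its four 3D
    # corner coordinates; assumes an axis-aligned rectangle as in the
    # example above (illustrative helper, not code from the disclosure).
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return (max(xs) - min(xs)) / (max(ys) - min(ys))

corners = [(0, 0, 0), (3, 2, 0), (0, 2, 0), (3, 0, 0)]
print(aspect_ratio(corners))  # 1.5, i.e. 3:2
```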
It can be understood that the above example is given merely for a better understanding of the technical solution of the embodiments of the present disclosure and is not the sole limitation on them.
After the true aspect ratio of the subject is computed from the corner coordinates, step 212 is executed.
Step 212: perform projection correction on the reference image according to the aspect ratio to obtain a corrected image corresponding to the subject.
Projection correction means correcting the aspect ratio of the reference image using the true aspect ratio of the subject.
After the true aspect ratio of the subject is obtained, the reference image can be projection-corrected accordingly, adjusting its aspect ratio to match that of the subject and thereby avoiding distortion in the resulting image.
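One concrete way to apply the true aspect ratio is to choose an output height freely and derive the output width from the ratio before warping. Only the sizing step is sketched below; the perspective warp itself would typically be done with a homography (for example OpenCV's getPerspectiveTransform / warpPerspective — an assumption about tooling, not something stated in the disclosure):

```python
def corrected_size(ratio, target_height):
    # Pixel size (width, height) of the corrected image: the height is
    # chosen freely and the width follows from the subject's true
    # length-to-width ratio. Sizing step only; the warp is out of scope.
    return (round(target_height * ratio), target_height)

print(corrected_size(5 / 4, 400))  # (500, 400) for an ID card's 5:4 ratio
```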
After the corrected image is obtained, it can be displayed in the display interface of the user terminal, providing the user with a distortion-free calibrated image that can be used, for example, for certificate verification and information recognition.
By projection-correcting the reference image with the true aspect ratio of the subject, the embodiments of the present disclosure avoid distortion in the finally obtained image.
Besides the beneficial effects of the image processing method provided in Embodiment 1, the image processing method provided by this embodiment of the present disclosure can also acquire the reference image by fusing multiple image quality parameters; compared with a single evaluation criterion, the optimal frame can be captured more accurately.
Embodiment 3
Referring to FIG. 3, a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure is shown. The image processing apparatus may specifically include the following modules:
an image sequence capture module 310, configured to capture an image sequence corresponding to a photographed subject;
a reference image acquisition module 320, configured to acquire a reference image in the image sequence;
a three-dimensional model building module 330, configured to establish a three-dimensional model based on the reference image and several frames in the image sequence that are consecutive with the reference image;
an aspect ratio determination module 340, configured to determine the aspect ratio of the subject based on the three-dimensional model; and
a corrected image acquisition module 350, configured to perform projection correction on the reference image according to the aspect ratio to obtain a corrected image corresponding to the subject.
The image processing apparatus provided by the embodiments of the present disclosure captures an image sequence corresponding to the subject, acquires a reference image in the sequence, establishes a three-dimensional model based on the reference image and several frames consecutive with it, determines the subject's aspect ratio from the model, and projection-corrects the reference image according to this aspect ratio to obtain the corrected image corresponding to the subject. Because the correction uses the subject's true aspect ratio, the true aspect ratio of the reference image can be accurately restored and image distortion is avoided.
Embodiment 4
Referring to FIG. 4, a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure is shown. The image processing apparatus may specifically include the following modules:
an image sequence capture module 410, configured to capture an image sequence corresponding to a photographed subject;
a reference image acquisition module 420, configured to acquire a reference image in the image sequence;
a three-dimensional model building module 430, configured to establish a three-dimensional model based on the reference image and several frames in the image sequence that are consecutive with the reference image;
an aspect ratio determination module 440, configured to determine the aspect ratio of the subject based on the three-dimensional model; and
a corrected image acquisition module 450, configured to perform projection correction on the reference image according to the aspect ratio to obtain a corrected image corresponding to the subject.
In a specific implementation of the present disclosure, the image sequence capture module 410 includes:
a preset area shooting sub-module 411, configured to photograph a preset area with a preset camera according to first shooting parameters;
a shooting parameter generation sub-module 412, configured to adjust the first shooting parameters to generate second shooting parameters when the subject is detected within the shooting area of the camera; and
an image sequence capture sub-module 413, configured to capture the image sequence of the subject according to the second shooting parameters.
In a specific implementation of the present disclosure, the reference image acquisition module 420 includes:
an image quality parameter acquisition sub-module 421, configured to acquire the image quality parameters associated with each image in the image sequence;
an image quality parameter input sub-module 422, configured to input each of the image quality parameters into a pre-trained logistic regression model;
an image score receiving sub-module 423, configured to receive the score output by the logistic regression model for each image; and
a reference image acquisition sub-module 424, configured to acquire the reference image in the image sequence based on the scores.
In a specific implementation of the present disclosure, the reference image acquisition sub-module 424 includes:
a score acquisition sub-module, configured to acquire a first score corresponding to the first frame in the image sequence and second scores corresponding to the frames in the image sequence other than the first frame;
a score comparison sub-module, configured to compare the first score with a first score threshold and compare each second score with a second score threshold, the first score threshold being greater than the second score threshold;
an offset calculation sub-module, configured to compute, when the first score is greater than the first score threshold and every second score is greater than the second score threshold, the mean corner offset between consecutive frames in the image sequence; and
a reference image selection sub-module, configured to acquire, when the absolute value of each mean corner offset is less than a preset offset threshold, the image with the highest score in the image sequence and take it as the reference image.
In a specific implementation of the present disclosure, the three-dimensional model building module 430 includes:
a reference image input sub-module 431, configured to input the reference image and the several consecutive frames into a model-building algorithm; and
a three-dimensional model building sub-module 432, configured to establish a three-dimensional model associated with the reference image using the model-building algorithm.
In a specific implementation of the present disclosure, the aspect ratio determination module 440 includes:
a corner coordinate acquisition sub-module 441, configured to acquire the corner coordinates of the subject output by the model-building algorithm; and
an aspect ratio calculation sub-module 442, configured to compute the aspect ratio of the subject from the corner coordinates.
In a specific implementation of the present disclosure, the image quality parameters include at least one of sharpness, edge-preservation coefficient, contrast-to-noise ratio, and mean corner offset.
In a specific implementation of the present disclosure, the apparatus further includes: a corrected image display module, configured to display the corrected image at a user terminal.
Besides the beneficial effects of the image processing apparatus provided in Embodiment 3, the image processing apparatus provided by this embodiment of the present disclosure can also acquire the reference image by fusing multiple image quality parameters; compared with a single evaluation criterion, the optimal frame can be captured more accurately.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative effort.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination of them. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the computing processing device according to the embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for executing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
For example, FIG. 5 shows a computing processing device that can implement the method according to the present invention. The computing processing device conventionally includes a processor 1010 and a computer program product, or computer-readable medium, in the form of a memory 1020. The memory 1020 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. The memory 1020 has a storage space 1030 for program code 1031 for executing any of the method steps in the above methods. For example, the storage space 1030 for program code may include individual program codes 1031 for implementing the various steps of the above method. These program codes can be read from, or written into, one or more computer program products. These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards, or floppy disks. Such a computer program product is usually a portable or fixed storage unit as described with reference to FIG. 6. The storage unit may have storage segments, storage spaces, and the like arranged similarly to the memory 1020 in the computing processing device of FIG. 5. The program code may, for example, be compressed in an appropriate form. Usually, the storage unit includes computer-readable code 1031', i.e., code that can be read by a processor such as 1010; when run by a computing processing device, this code causes the computing processing device to execute the various steps of the method described above.
"One embodiment", "an embodiment", or "one or more embodiments" herein means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. In addition, note that instances of the phrase "in one embodiment" here do not necessarily all refer to the same embodiment.
Numerous specific details are set forth in the specification provided here. However, it is understood that embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
In the claims, any reference signs placed between parentheses should not be construed as a limitation to the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of multiple such elements. The present invention may be implemented by means of hardware comprising several different elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any order; these words may be interpreted as names.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solution of the present invention rather than to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (11)

  1. An image processing method, comprising:
    capturing, from a video stream, an image sequence corresponding to a photographed subject;
    acquiring a reference image in the image sequence;
    establishing a three-dimensional model based on the reference image and several frames in the image sequence that are consecutive with the reference image;
    determining an aspect ratio of the subject based on the three-dimensional model; and
    performing projection correction on the reference image according to the aspect ratio to obtain a corrected image corresponding to the subject.
  2. The method according to claim 1, wherein the step of capturing an image sequence corresponding to the subject comprises:
    photographing a preset area with a preset camera according to first shooting parameters;
    when the subject is detected within the shooting area of the camera, adjusting the first shooting parameters to generate second shooting parameters; and
    capturing the image sequence of the subject according to the second shooting parameters.
  3. The method according to claim 1, wherein the step of acquiring a reference image in the image sequence comprises:
    acquiring image quality parameters associated with each image in the image sequence;
    inputting each of the image quality parameters into a pre-trained logistic regression model;
    receiving a score output by the logistic regression model for each image; and
    acquiring the reference image in the image sequence based on the scores.
  4. The method according to claim 3, wherein the step of acquiring the reference image in the image sequence based on the scores comprises:
    acquiring a first score corresponding to the first frame in the image sequence, and second scores corresponding to the frames in the image sequence other than the first frame;
    comparing the first score with a first score threshold, and comparing each second score with a second score threshold, the first score threshold being greater than the second score threshold;
    when the first score is greater than the first score threshold and every second score is greater than the second score threshold, computing the mean corner offset between consecutive frames in the image sequence; and
    when the absolute value of each mean corner offset is less than a preset offset threshold, acquiring the image with the highest score in the image sequence and taking it as the reference image.
  5. The method according to claim 1, wherein the step of establishing a three-dimensional model based on the reference image and several frames in the image sequence that are consecutive with the reference image comprises:
    inputting the reference image and the several consecutive frames into a model-building algorithm; and
    establishing a three-dimensional model associated with the reference image using the model-building algorithm.
  6. The method according to claim 5, wherein the step of determining the aspect ratio of the subject based on the three-dimensional model comprises:
    acquiring corner coordinates of the subject output by the model-building algorithm; and
    computing the aspect ratio of the subject from the corner coordinates.
  7. The method according to claim 3, wherein the image quality parameters comprise at least one of sharpness, edge-preservation coefficient, contrast-to-noise ratio, and mean corner offset.
  8. An image processing apparatus, comprising:
    an image sequence capture module, configured to capture an image sequence corresponding to a photographed subject;
    a reference image acquisition module, configured to acquire a reference image in the image sequence;
    a three-dimensional model building module, configured to establish a three-dimensional model based on the reference image and several frames in the image sequence that are consecutive with the reference image;
    an aspect ratio determination module, configured to determine an aspect ratio of the subject based on the three-dimensional model; and
    a corrected image acquisition module, configured to perform projection correction on the reference image according to the aspect ratio to obtain a corrected image corresponding to the subject.
  9. A computing processing device, comprising:
    a memory having computer-readable code stored therein; and
    one or more processors, wherein, when the computer-readable code is executed by the one or more processors, the computing processing device performs the image processing method according to any one of claims 1-7.
  10. A computer program, comprising computer-readable code which, when run on a computing processing device, causes the computing processing device to perform the image processing method according to any one of claims 1-7.
  11. A computer-readable medium, storing the computer program according to claim 10.
PCT/CN2020/089298 2019-07-16 2020-05-09 Image processing WO2021008205A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910642545.5 2019-07-16
CN201910642545.5A CN110505398B (zh) 2019-07-16 2019-07-16 Image processing method and apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2021008205A1 true WO2021008205A1 (zh) 2021-01-21

Family

ID=68585507

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/089298 WO2021008205A1 (zh) Image processing

Country Status (2)

Country Link
CN (1) CN110505398B (zh)
WO (1) WO2021008205A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177971A (zh) * 2021-05-07 2021-07-27 Sino-German (Zhuhai) Artificial Intelligence Research Institute Co., Ltd. Visual tracking method and apparatus, computer device, and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110505398B (zh) * 2019-07-16 2021-03-02 北京三快在线科技有限公司 一种图像处理方法、装置、电子设备及存储介质
CN111147694B (zh) * 2019-12-30 2022-03-22 Oppo广东移动通信有限公司 拍摄方法、拍摄装置、终端设备及计算机可读存储介质
CN113934495B (zh) * 2021-10-14 2024-05-24 北京自如信息科技有限公司 一种移动端图像环视方法、系统和移动设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030142884A1 (en) * 2002-01-31 2003-07-31 Cariffe Alan Eddy Binding curvature correction
CN1471055A (zh) * 2002-07-02 2004-01-28 Fujitsu Limited Image distortion correction method and device
CN102592124A (zh) * 2011-01-13 2012-07-18 Hanwang Technology Co., Ltd. Geometric correction method and apparatus for text images, and binocular stereo vision system
CN103426190A (zh) * 2013-07-23 2013-12-04 Beihang University Image reconstruction method and system
CN108198230A (zh) * 2018-02-05 2018-06-22 Northwest A&F University Crop fruit three-dimensional point cloud extraction system based on scattered images
CN110505398A (zh) 2019-07-16 2019-11-26 Beijing Sankuai Online Technology Co., Ltd. Image processing method and apparatus, electronic device and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4006601B2 (ja) * 2004-03-29 2007-11-14 Seiko Epson Corporation Image processing system, projector, program, information storage medium, and image processing method
CN1937698A (zh) * 2006-10-19 2007-03-28 Shanghai Jiao Tong University Image processing method for automatic correction of image distortion
CN101729918A (zh) * 2009-10-30 2010-06-09 Wuxi Jingxiang Digital Technology Co., Ltd. Method for realizing binocular stereo image correction and display optimization
EP2834973A1 (en) * 2012-04-04 2015-02-11 Naxos Finance SA System for generating and receiving a stereoscopic-2d backward compatible video stream, and method thereof
JP2014179698A (ja) * 2013-03-13 2014-09-25 Ricoh Co Ltd Projector, projector control method, program for the control method, and recording medium on which the program is recorded
CN106991649A (zh) * 2016-01-20 2017-07-28 Fujitsu Limited Method and apparatus for correcting a document image captured by an imaging device
CN108200360A (zh) * 2018-01-12 2018-06-22 Shenzhen Lishijie Technology Co., Ltd. Real-time video stitching method for a panoramic camera with multiple fisheye lenses
CN108898591A (zh) * 2018-06-22 2018-11-27 Beijing Xiaomi Mobile Software Co., Ltd. Image quality scoring method and apparatus, electronic device, and readable storage medium
CN109754461A (zh) * 2018-12-29 2019-05-14 Shenzhen Intellifusion Technologies Co., Ltd. Image processing method and related product


Also Published As

Publication number Publication date
CN110505398A (zh) 2019-11-26
CN110505398B (zh) 2021-03-02

Similar Documents

Publication Publication Date Title
WO2021008205A1 (zh) Image processing
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
TW201911130A (zh) Recaptured image recognition method and apparatus
WO2018176938A1 (zh) Infrared light spot center point extraction method and apparatus, and electronic device
JP6961797B2 (ja) Method and apparatus for blurring a preview photo, and storage medium
WO2019100608A1 (zh) Camera device, face recognition method and system, and computer-readable storage medium
JP2020523665A (ja) Liveness detection method and apparatus, electronic device, and storage medium
JPWO2018047687A1 (ja) Three-dimensional model generation device and three-dimensional model generation method
US20130169760A1 (en) Image Enhancement Methods And Systems
JP2004192378A (ja) Face image processing apparatus and method
CN110189269B (zh) Correction method, apparatus, terminal, and storage medium for 3D distortion of wide-angle lenses
CN111008935B (zh) Face image enhancement method, apparatus, system, and storage medium
US20150117787A1 (en) Automatic rectification of distortions in images
WO2022160857A1 (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
WO2023169281A1 (zh) Image registration method and apparatus, storage medium, and electronic device
CN111046845A (zh) Liveness detection method, apparatus, and system
CN110245549A (zh) Real-time face and object manipulation
CN113822927B (zh) Face detection method, apparatus, medium, and device suitable for low-quality images
WO2015196681A1 (zh) Picture processing method and electronic device
CN109729231A (zh) Document scanning method, apparatus, and device
JP7047848B2 (ja) Face three-dimensional shape estimation device, method, and program
CN116958795A (zh) Recaptured image recognition method, apparatus, electronic device, and storage medium
US10282633B2 (en) Cross-asset media analysis and processing
CN111767845B (zh) Certificate recognition method and apparatus
WO2016197788A1 (zh) Photographing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20839955

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20839955

Country of ref document: EP

Kind code of ref document: A1