WO2022087846A1 - Image processing method and apparatus, device, and storage medium - Google Patents

Image processing method and apparatus, device, and storage medium

Info

Publication number
WO2022087846A1
WO2022087846A1 PCT/CN2020/124113 CN2020124113W
Authority
WO
WIPO (PCT)
Prior art keywords
quadrilateral
feature
image
line segments
coordinate information
Prior art date
Application number
PCT/CN2020/124113
Other languages
English (en)
Chinese (zh)
Inventor
顾磊
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to CN202080103272.8A priority Critical patent/CN115885314A/zh
Priority to PCT/CN2020/124113 priority patent/WO2022087846A1/fr
Publication of WO2022087846A1 publication Critical patent/WO2022087846A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/543Depth or shape recovery from line drawings

Definitions

  • the embodiments of the present application relate to the technical field of image processing, and more particularly, to an image processing method, apparatus, device, and storage medium.
  • the tilted view effect of the document-type objects in the image is caused by the tilt of the electronic device during shooting.
  • the image is often corrected by identifying the target quadrilateral in the image and according to the coordinate information of the target quadrilateral.
  • the existing technology cannot accurately identify the target quadrilateral, resulting in a poor effect of correcting the captured image.
  • Embodiments of the present application provide an image processing method, apparatus, device, and storage medium.
  • an image processing method including:
  • the reliability of the quadrilateral is calculated through the operation of the Gaussian function.
  • the reliability is used to represent the evaluation result of the quadrilateral as the target area, and the target area is used to represent the shape of the target object in the image;
  • At least one target quadrilateral is determined according to the reliability of each quadrilateral.
  • an image processing apparatus including:
  • an image acquisition unit for acquiring an image to be identified, the image including a target object
  • an image recognition unit used for recognizing at least one quadrilateral in the image to obtain coordinate information of each quadrilateral
  • the image processing unit is used to calculate the reliability of the quadrilateral according to the coordinate information of the quadrilateral through Gaussian operation for each quadrilateral.
  • the reliability is used to characterize the evaluation result of the quadrilateral as the target area, and the target area is used to represent the form of the target object in the image;
  • the image processing unit is further configured to determine at least one target quadrilateral according to the reliability of each quadrilateral.
  • an electronic device including: a processor and a memory, where the memory is used for storing a computer program, and the processor is used for calling and running the computer program stored in the memory to execute the method in the first aspect or in each of its implementations.
  • a computer-readable storage medium for storing a computer program, and the computer program causes a computer to execute the method in the first aspect or each of its implementations.
  • a computer program product comprising computer program instructions that cause a computer to perform a method as in the first aspect or implementations thereof.
  • a computer program causing a computer to perform the method as in the first aspect or implementations thereof.
  • the coordinate information of each quadrilateral is obtained, so as to realize the preliminary screening of the quadrilaterals; and based on the coordinate information of each quadrilateral, the credibility of the quadrilateral as the target area is obtained through a Gaussian operation, where, by setting the preset parameters in the Gaussian function, a quadrilateral that is close to the target can be given a higher credibility; one or more quadrilaterals with the highest credibility are then selected as the target quadrilateral.
  • the embodiment of the present application can accurately identify the target quadrilateral that can characterize the shape of the target object, which provides a basis for the subsequent correction of the shape of the target object to obtain a better presentation effect.
  • FIG. 1 is a schematic flowchart of an image correction scene 100 provided by an embodiment of the present application
  • FIG. 2 is a schematic frame diagram of an electronic device 200 provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an image processing method 300 according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a candidate graph 400 provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of an image processing method 500 according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a to-be-recognized quadrilateral 600 provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of an image processing method 700 provided by an embodiment of the present application.
  • FIG. 8 shows a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 9 shows a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 10 shows a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • document class objects include, but are not limited to, documents, business cards, posters, bulletin boards, rewritable whiteboards or blackboards, and the like. Because it is difficult to control the relative shooting angle between the terminal device and the target object, the document-type object is often presented in the captured image as an inclined irregular quadrilateral due to perspective transformation, and the captured image therefore needs to be corrected.
  • in the existing technology, the quadrilaterals in the image are difficult to identify, or, when a number of quadrilaterals are identified, it is difficult to determine the target quadrilateral from among them.
  • the embodiments of the present application are applied to the above scenarios; in order to make the corrected image have a better display effect and to accurately obtain the target quadrilateral in the image, reference is made to FIG. 1.
  • the image 101 includes a plurality of line segments; by identifying these line segments, quadrilaterals in the image, such as the quadrilaterals 102 and 103, are obtained; the reliability of each quadrilateral as the target area is then analyzed, and based on the reliability of each quadrilateral it is determined that the target quadrilateral is the quadrilateral 102. It should be understood that the target area is the boundary of the shape of the target object in the image.
  • the embodiments of the present application achieve the technical effect of correctly acquiring the target quadrilateral in the image.
  • the number of target quadrilaterals may be one or more. For example, when images of multiple target objects are collected at the same time, a target quadrilateral corresponding to each target object should be identified; or, for the same target object, multiple target quadrilaterals with high reliability may be determined. By way of example and not limitation, a final target quadrilateral may be determined from the multiple target quadrilaterals through algorithmic screening, or a target quadrilateral selected by the user from the multiple target quadrilaterals may be received as the final target quadrilateral.
  • a transformation matrix is obtained according to the determined at least one target quadrilateral, and perspective transformation is performed on the image according to the transformation matrix to obtain the transformed image 110 shown in FIG. 1 .
  • the regular quadrilateral 112 is the result of transforming the quadrilateral 102, and after the transformation the target object bounded by the quadrilateral 112 is displayed front-on in the image 110, which gives a better display effect.
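For illustration only, the correction described above can be sketched with OpenCV and NumPy as follows; the corner ordering, the output size and the function name `correct_perspective` are assumptions made for this example, not details taken from the patent.

```python
import cv2
import numpy as np

def correct_perspective(image, quad, out_w=800, out_h=1100):
    """Warp the region bounded by a target quadrilateral to an upright rectangle.

    `quad` is assumed to hold four (x, y) vertices ordered top-left, top-right,
    bottom-right, bottom-left; `out_w`/`out_h` are an illustrative output size.
    """
    src = np.asarray(quad, dtype=np.float32)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    matrix = cv2.getPerspectiveTransform(src, dst)   # 3x3 transformation matrix
    return cv2.warpPerspective(image, matrix, (out_w, out_h))
```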
  • the technical solutions of the embodiments of the present application can be applied to various electronic devices to implement at least one of verification, optimization, and testing of language algorithm models.
  • the electronic device may be a terminal device, such as a mobile phone (Mobile Phone), a tablet computer (Pad), a computer, a virtual reality (Virtual Reality, VR) terminal device, an augmented reality (Augmented Reality, AR) terminal device, a terminal device in industrial control, a terminal device in self-driving, a terminal device in remote medical, a terminal device in a smart city, a terminal device in a smart home, and the like.
  • the terminal device in this embodiment of the present application may also be a wearable device. A wearable device may also be called a wearable smart device, which is a general term for everyday wearable items that are intelligently designed and developed by applying wearable technology, such as glasses, gloves, watches, clothing and shoes.
  • a wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. Terminal devices can be stationary or mobile.
  • the electronic device in this embodiment of the present application may also be a server.
  • the electronic device can receive an image collected by the terminal device, and determine a target quadrilateral in the image.
  • FIG. 2 is a schematic structural diagram of an electronic device 200 according to an embodiment of the present application.
  • the electronic device 200 includes: an image acquisition unit 210 , an image recognition unit 220 and an image processing unit 230 , and the image processing unit 230 includes at least a reliability calculation subunit 231 .
  • the image acquisition unit 210 is used for acquiring the image to be recognized, and the image should contain a document class object.
  • an image collected by an image collection device, or an image transmitted by other devices, or an image input by a user may be received, which is not limited in this embodiment of the present application.
  • the image recognition unit 220 receives the image to be recognized sent by the image acquisition unit 210 and recognizes at least one quadrilateral in the image; with reference to FIG. 1, the quadrilateral 102 and the quadrilateral 103 can be obtained and are sent to the image processing unit 230.
  • the image processing unit 230 receives the at least one quadrilateral sent by the image recognition unit 220 and determines the reliability of each quadrilateral through the reliability calculation subunit 231. Taking the example shown in FIG. 1, the reliability of the quadrilateral 102 is higher than the reliability of the quadrilateral 103.
  • the image processing unit 230 determines the quadrilateral with the highest reliability as the target quadrilateral, or, after ranking the quadrilaterals by reliability, selects the n quadrilaterals with the highest reliability as the target quadrilaterals.
  • the image processing unit 230 performs subsequent operations according to the determined at least one target quadrilateral.
  • at least one target quadrilateral is sent to an image display unit (not shown in the figure) for display, so that the user can judge the recognition effect of the quadrilateral recognition performed by the electronic device according to the displayed at least one target quadrilateral, or select a target quadrilateral through the human-computer interaction interface as the final target quadrilateral; or, a transformation matrix is determined according to the at least one target quadrilateral, and perspective transformation is performed on the image according to the transformation matrix.
  • the image processing unit 230 selects a final target quadrilateral from the two or more target quadrilaterals, or obtains the final target quadrilateral by performing a weighted average on the two or more target quadrilaterals.
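A minimal sketch of the weighted-average option, assuming NumPy and assuming (this is not stated explicitly) that the reliabilities serve as the weights and that the vertices of all candidate quadrilaterals are stored in the same order:

```python
import numpy as np

def merge_quads(quads, reliabilities):
    """Combine several target quadrilaterals into one final quadrilateral by a
    reliability-weighted average of corresponding vertices.

    `quads` is an (n, 4, 2) array of vertex coordinates in a consistent order.
    """
    quads = np.asarray(quads, dtype=np.float64)
    weights = np.asarray(reliabilities, dtype=np.float64)
    weights = weights / weights.sum()            # normalise the weights
    return np.tensordot(weights, quads, axes=1)  # (4, 2) averaged quadrilateral
```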
  • FIG. 3 is a schematic flowchart of an image processing method 300 according to an embodiment of the present application.
  • the embodiment of the present application confirms the reliability of the quadrilateral as the target area, and obtains the target quadrilateral according to the reliability.
  • the image processing method includes:
  • the electronic device may capture an image of the target object through an image capturing device to obtain an image to be recognized, or may receive an image input by a user, or an image sent by other devices.
  • the image may be preprocessed.
  • RGB images, that is, color images formed by superimposing the three color channels red R (Red), green G (Green) and blue B (Blue), can be converted into another color space in advance, for example converted into grayscale images or HSI images, which use the hue H (Hue), the saturation S (Saturation) and the brightness I (Intensity) to characterize the image.
  • the preprocessing of the image to be identified further includes: performing edge detection on the image by any algorithm, such as the Canny algorithm or the Holistically-Nested Edge Detection (HED) algorithm.
  • the preprocessing of the image to be recognized further includes: scaling the image to a preset size.
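A plausible preprocessing pipeline matching this description (grayscale conversion, scaling to a preset size, Canny edge detection) is sketched below with OpenCV; the preset size and Canny thresholds are illustrative values, and the HED alternative would require a separately trained model.

```python
import cv2

def preprocess(image_bgr, preset_size=(640, 480), low_thresh=50, high_thresh=150):
    """Convert to grayscale, scale to a preset size, and compute an edge map."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, preset_size)
    edges = cv2.Canny(gray, low_thresh, high_thresh)  # binary edge map for line detection
    return edges
```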
  • S302 Identify at least one quadrilateral in the image to obtain coordinate information of each quadrilateral.
  • the coordinate information of the quadrilateral includes coordinates of four vertices of the quadrilateral.
  • to identify a quadrilateral in the image, multiple line segments in the image are first identified, every four line segments among the multiple line segments are combined into a candidate graphic to obtain all possible candidate graphics, and each candidate graphic is then analyzed to determine whether it is a quadrilateral to be identified.
  • FIG. 4 is a schematic diagram of an image to be recognized 400 according to an embodiment of the present application. Exemplarily, this embodiment determines whether the candidate graphic satisfies the first preset condition, and when the candidate graphic satisfies the first preset condition, the candidate graphic is the quadrilateral to be identified, and specifically includes the following possible implementations:
  • the first preset condition is that the two first included angles of the candidate graphics are both smaller than the first preset value, and the first included angle is the included angle of any two non-adjacent line segments among the four line segments.
  • the candidate graphic is the quadrilateral to be identified. It should be understood that the first included angle is, of the two angles formed by the two line segments, the angle that is less than 90 degrees.
  • the candidate graphic is a candidate graphic composed of line segments AB, BC, CD and AD
  • the first preset value is 30 degrees.
  • the first preset condition is that the four second included angles of the candidate graphics are all greater than the second preset value, and the second included angle is the included angle of any two adjacent line segments among the four line segments.
  • the first preset condition may be that the four second included angles of the candidate graphics are all smaller than the difference between 180 degrees and the second preset value.
  • the candidate graph is a candidate graph composed of line segments AB, BC, CD and AD
  • the second preset value is 60 degrees, and it is determined whether ∠A, ∠B, ∠C and ∠D are all greater than 60 degrees, that is, whether ∠A, ∠B, ∠C and ∠D are all less than 120 degrees.
  • the candidate figure is the quadrilateral to be identified.
  • the first preset condition is that the area ratio of the candidate graphic is greater than the fourth preset value, and the area ratio is the ratio of the area of the candidate graphic to the area of the image.
  • the fourth preset value is one sixth.
  • the first preset condition may also be any two or a combination of the above three examples.
  • when the two first included angles of the candidate graphic are both smaller than the first preset value and the four second included angles of the candidate graphic are all greater than the second preset value, the candidate graphic is the quadrilateral to be identified; or, when the two first included angles of the candidate graphic are both smaller than the first preset value, the four second included angles of the candidate graphic are all greater than the second preset value, and the area ratio of the candidate graphic is greater than the fourth preset value, the candidate graphic is the quadrilateral to be identified.
  • for a candidate graphic composed of four unclosed line segments, such as the candidate graphic in the upper left corner of the quadrilateral ABCD in FIG. 4, it is determined whether the distance between the endpoints of the two unclosed line segments (the endpoint V and the endpoint W in FIG. 4) is less than a preset distance; when the distance between the endpoints of the two line segments is less than the preset distance, whether the candidate graphic is a quadrilateral to be identified is determined according to any of the above-mentioned embodiments, and otherwise it is determined that the candidate graphic is not a quadrilateral to be identified.
  • the coordinates of the four vertices of the quadrilateral, that is, the coordinates of the four points A, B, C and D in FIG. 4, are determined.
  • the quadrilateral obtained after screening through any of the above examples is closer to the target quadrilateral to be obtained, which reduces the scope for subsequent determination of the target quadrilateral and improves the processing efficiency.
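The screening conditions above can be illustrated with the following sketch, which assumes the candidate graphic is already available as four ordered vertices; the thresholds (30 degrees, 60 degrees, one sixth) are taken from the examples in the text, and the vertex-based formulation is an assumption made for the example.

```python
import numpy as np

def _acute_angle(u, v):
    """Acute angle, in degrees, between two direction vectors (the first included angle)."""
    cos = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def _interior_angle(prev_pt, pt, next_pt):
    """Interior angle, in degrees, at vertex `pt` (the second included angle)."""
    u, v = prev_pt - pt, next_pt - pt
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def _quad_area(pts):
    """Shoelace formula for a simple quadrilateral given as ordered vertices."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def is_quad_to_identify(pts, image_area,
                        first_max=30.0, second_min=60.0, area_ratio_min=1.0 / 6.0):
    """Apply the three example conditions to an ordered (4, 2) vertex array."""
    pts = np.asarray(pts, dtype=np.float64)
    sides = [pts[(i + 1) % 4] - pts[i] for i in range(4)]
    # non-adjacent (opposite) sides should be nearly parallel
    first_ok = (_acute_angle(sides[0], sides[2]) < first_max and
                _acute_angle(sides[1], sides[3]) < first_max)
    # no interior angle should be too sharp
    second_ok = all(_interior_angle(pts[i - 1], pts[i], pts[(i + 1) % 4]) > second_min
                    for i in range(4))
    area_ok = _quad_area(pts) / image_area > area_ratio_min
    return first_ok and second_ok and area_ok
```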
  • the identified line segments may be optimized.
  • the at least two line segments on the same straight line are merged.
  • the line segments on the same straight line that satisfy the above three possible positional relationships are merged.
  • line segments whose distance from each other is less than the fourth preset value can be merged by translating one line segment to connect with the other line segment, or by filling in the gap between the line segments.
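The merging step is only outlined in the text; the sketch below is one possible reading, in which two segments are merged when they are nearly collinear and their gap is below a tolerance, and the merged segment is taken as the two endpoints farthest apart (the tolerances and the farthest-endpoint rule are assumptions).

```python
import numpy as np
from itertools import combinations

def merge_segments(seg_a, seg_b, angle_tol_deg=3.0, gap_tol=10.0):
    """Merge two segments (each a pair of (x, y) endpoints) lying roughly on the
    same straight line; returns the merged segment, or None if they should not merge."""
    a, b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    da, db = a[1] - a[0], b[1] - b[0]
    cos = abs(np.dot(da, db)) / (np.linalg.norm(da) * np.linalg.norm(db) + 1e-12)
    if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) > angle_tol_deg:
        return None                     # directions differ too much: not collinear
    gap = min(np.linalg.norm(p - q) for p in a for q in b)
    if gap > gap_tol:
        return None                     # too far apart to bridge
    pts = np.vstack([a, b])
    i, j = max(combinations(range(4), 2),
               key=lambda ij: np.linalg.norm(pts[ij[0]] - pts[ij[1]]))
    return np.vstack([pts[i], pts[j]])  # keep the two farthest endpoints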
  • the credibility is used to characterize the evaluation result of the quadrilateral as the target area
  • the target area is used to characterize the shape of the target object in the image.
  • an evaluation is performed based on whether the quadrilateral can be used as the target area, so as to obtain the credibility of the quadrilateral.
  • at least one feature of the quadrilateral can be determined according to the coordinate information of the quadrilateral, and based on the at least one feature of the quadrilateral, the score of the quadrilateral as the target area is determined by Gaussian operation, that is, the credibility of the quadrilateral is obtained.
  • the feature of the quadrilateral includes at least one of an area ratio feature, a first vertical angle feature, a second vertical angle feature, an adjacent angle feature, or a middle position feature.
  • the area ratio feature is used to characterize the ratio of the area of the quadrilateral to the image area
  • the first vertical angle feature is used to characterize the angle between the midline of the quadrilateral and the vertical line of the image
  • the second vertical angle feature is used to characterize the angle between the midline of the quadrilateral and the projected gravity vector
  • the adjacent angle feature is used to characterize the angular relationship of the adjacent corners of the quadrilateral
  • the middle position feature is used to characterize the coordinates of the middle position of the quadrilateral.
  • the Gaussian function used in the Gaussian operation includes three preset parameters: weight, average and variance.
  • the average value is used to represent the expected eigenvalue of the corresponding feature, and the closer the eigenvalue is to the average value, the higher the calculated reliability. For example, when the ratio of the quadrilateral area to the image area is 0.8, which is most likely the target area, the average value is preset to 0.8.
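The excerpt does not reproduce the formula itself, so the sketch below uses the standard weighted Gaussian that is consistent with the weight/average/variance description; treating the third parameter as a standard deviation is an assumption.

```python
import math

def gaussian_score(x, weight, mean, sigma):
    """Weighted Gaussian score for one feature value: the closer `x` is to the
    expected value `mean`, the higher the score (at most `weight`)."""
    return weight * math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

# Example from the text: an area ratio of 0.8 is considered most likely to be the
# target area, so the mean is preset to 0.8 (weight and sigma here are assumed values).
g_area = gaussian_score(0.75, weight=1.0, mean=0.8, sigma=0.1)
```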
  • S304 Determine at least one target quadrilateral according to the reliability of each quadrilateral.
  • the at least one identified quadrilateral is screened according to the reliability of the quadrilateral to obtain at least one target quadrilateral.
  • Possible implementation manner 1: sort the at least one quadrilateral according to the reliability of each quadrilateral, and use the n quadrilaterals with the highest reliability as the at least one target quadrilateral, where n ≥ 1. It should be understood that the reliabilities of the n quadrilaterals can be the same or different, and the reliability of each of the n quadrilaterals is higher than the reliability of the other quadrilaterals that are not selected as target quadrilaterals.
  • Possible implementation manner 2: determine whether the reliability of each quadrilateral in the at least one quadrilateral is greater than a preset threshold, and determine the quadrilateral whose reliability is greater than the preset threshold as the target quadrilateral.
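Both implementation manners can be captured in a short helper such as the following (the function name and interface are illustrative):

```python
def select_target_quads(quads, reliabilities, n=1, threshold=None):
    """Pick target quadrilaterals either as the n most reliable candidates
    (manner 1) or as all candidates above a preset threshold (manner 2)."""
    ranked = sorted(zip(quads, reliabilities), key=lambda qc: qc[1], reverse=True)
    if threshold is not None:
        return [q for q, c in ranked if c > threshold]
    return [q for q, _ in ranked[:n]]
```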
  • by identifying at least one quadrilateral in the image, the coordinate information of each quadrilateral is obtained, so as to realize the preliminary screening of the quadrilaterals; and based on the coordinate information of each quadrilateral, the credibility of the quadrilateral as the target area is obtained through a Gaussian operation, where, by setting the preset parameters in the Gaussian function, the credibility of a quadrilateral that is close to the target quadrilateral can be made higher; one or more quadrilaterals with the highest credibility are then selected as the target quadrilateral.
  • the embodiment of the present application can accurately identify the target quadrilateral that can represent the shape of the target object, which provides a basis for the subsequent correction of the shape of the target object to obtain a better presentation effect.
  • the embodiment of the present application may perform reliability evaluation for each feature of the quadrilateral and obtain the reliability corresponding to each feature; the reliabilities of the features are then summed to obtain the reliability of the quadrilateral.
  • FIG. 5 is a schematic flowchart of an image processing method 500 according to an embodiment of the present application.
  • the reliability corresponding to each feature is obtained respectively.
  • the reliability g corresponding to each feature is summed to obtain the reliability c of the quadrilateral.
  • the preset parameter corresponding to the area ratio feature in this embodiment is called the first preset parameter
  • the preset parameter corresponding to the first vertical angle feature is called the second preset parameter
  • the preset parameter corresponding to the adjacent angle feature is called the third preset parameter
  • the preset parameter corresponding to the middle position feature is called the fourth preset parameter.
  • FIG. 6 is a schematic diagram of an image to be recognized 600 according to an embodiment of the present application.
  • the reliability g1 corresponding to the area ratio feature is calculated, where ω1, μ1 and σ1 are the first preset parameters corresponding to the area ratio feature: ω1 is the weight of the reliability g1 corresponding to the area ratio feature, μ1 is the average value used to characterize the expected value of the area ratio, and σ1 is the variance.
  • to calculate the credibility g2 corresponding to the first vertical angle feature according to the coordinate information of the quadrilateral, the vectors of the two midlines of the quadrilateral are determined, the angle between the vector of each midline and the vertical line is determined respectively, and the smaller of the two included angles is used as the eigenvalue of the first vertical angle feature.
  • the midline of the quadrilateral is the line connecting the midpoints of two opposite, that is, non-adjacent, side lines of the quadrilateral; the vertical line is the straight line perpendicular to the horizontal side line of the image; and the horizontal side line refers to the edge of the image in the horizontal direction when the image is placed upright.
  • N is the midpoint of the line segment AB
  • M is the midpoint of the line segment CD
  • P is the midpoint of the line segment BC
  • Q is the midpoint of the line segment AD
  • the two midlines of the quadrilateral ABCD are MN and PQ respectively
  • the vector MN can be obtained from the coordinates of the points M and N.
  • the reliability g2 corresponding to the first vertical angle feature is calculated, where ω2, μ2 and σ2 are the second preset parameters corresponding to the first vertical angle feature: ω2 is the weight of the reliability g2 corresponding to the first vertical angle feature, μ2 is the average value used to characterize the expected value of the first vertical angle, and σ2 is the variance.
  • the reliability g3 corresponding to the adjacent angle feature is calculated, where ω3, μ3 and σ3 are the third preset parameters corresponding to the adjacent angle feature: ω3 is the weight of the reliability g3 corresponding to the adjacent angle feature, μ3 is the average value used to characterize the expected value of the adjacent angle, and σ3 is the variance.
  • to calculate the reliability g4 corresponding to the middle position feature according to the coordinate information of the quadrilateral, the coordinates of the middle position of the quadrilateral are determined; it should be noted that the coordinates of the middle position are the average value of the coordinates of the four vertices of the quadrilateral. With reference to FIG. 6, the eigenvalue of the middle position feature is the coordinates of the middle position of the quadrilateral.
  • the reliability g4 corresponding to the middle position feature is calculated, where ω4, μ4 and σ4 are the fourth preset parameters corresponding to the middle position feature: ω4 is the weight of the reliability g4 corresponding to the middle position feature, μ4 is the average value used to characterize the expected value of the middle position, and σ4 is the variance.
  • the calculation of the reliability in any of the foregoing embodiments may also be combined with the reliability of the second vertical angle feature.
  • the reliability g5 corresponding to the second vertical angle feature is calculated, where ω5, μ5 and σ5 are the preset parameters corresponding to the second vertical angle feature, also referred to as the fifth preset parameters: ω5 is the weight of the reliability g5 corresponding to the second vertical angle feature, μ5 is the average value used to characterize the expected value of the second vertical angle, and σ5 is the variance.
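The sketch below pulls the per-feature calculations together for one quadrilateral. It is a reconstruction under several assumptions: the adjacent-angle eigenvalue is taken as the largest difference between neighbouring interior angles, the middle-position eigenvalue is reduced to the distance of the vertex centroid from the image centre, and the optional second vertical-angle reliability g5 is omitted; none of these choices is spelled out in the excerpt.

```python
import numpy as np

def gaussian(x, weight, mean, sigma):
    return weight * np.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

def quad_reliability(pts, image_shape, params):
    """Sum per-feature Gaussian scores for one quadrilateral.

    `pts` is a (4, 2) array of vertices A, B, C, D in order; `params` maps a
    feature name to its (weight, mean, sigma) triple, e.g. the first, second,
    third and fourth preset parameters.
    """
    pts = np.asarray(pts, dtype=np.float64)
    h, w = image_shape[:2]

    # area ratio feature: quadrilateral area (shoelace) over image area
    x, y = pts[:, 0], pts[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    g1 = gaussian(area / (w * h), *params["area_ratio"])

    # first vertical-angle feature: smaller angle between either midline and the vertical
    mids = [(pts[i] + pts[(i + 1) % 4]) / 2.0 for i in range(4)]   # N, P, M, Q
    midlines = [mids[2] - mids[0], mids[3] - mids[1]]              # MN and PQ (up to sign)
    def angle_to_vertical(v):
        cos = abs(v[1]) / (np.linalg.norm(v) + 1e-12)              # vertical = image y axis
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    g2 = gaussian(min(angle_to_vertical(v) for v in midlines), *params["vertical_angle"])

    # adjacent-angle feature: here, the largest difference between adjacent interior angles
    def interior(i):
        u, v = pts[i - 1] - pts[i], pts[(i + 1) % 4] - pts[i]
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    angles = [interior(i) for i in range(4)]
    g3 = gaussian(max(abs(angles[i] - angles[(i + 1) % 4]) for i in range(4)),
                  *params["adjacent_angle"])

    # middle-position feature: offset of the vertex centroid from the image centre
    offset = np.linalg.norm(pts.mean(axis=0) - np.array([w / 2.0, h / 2.0]))
    g4 = gaussian(offset, *params["middle_position"])

    return g1 + g2 + g3 + g4
```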
  • FIG. 7 is a schematic flowchart of an image processing method 700 according to an embodiment of the present application.
  • the shooting scene of the image can reflect the type and characteristics of the target object to a certain extent. Then, setting different parameters for the Gaussian function for different shooting scenes can improve the accuracy of quadrilateral recognition.
  • for example, when the target object is a painting, the distance at which the user views the painting is relatively short, so the area ratio of the captured target object in the image is larger; and the painting is generally in a hanging state, so the captured target object is generally presented in the image as a trapezoid; the preset parameters corresponding to the area ratio feature and the preset parameters corresponding to the first vertical angle feature are set accordingly.
  • the shooting scene includes but is not limited to: any one of an art gallery, an office building, and a school.
  • the positioning information at the time the image is captured is first obtained, and the shooting scene of the image is determined according to the positioning information, for example, the shooting scene is predicted according to the positioning information and preset map information; then, according to a preset correspondence between shooting scenes and parameter groups, the parameter group corresponding to the predicted shooting scene is determined, where the parameter group includes the preset parameters corresponding to one or more features in any of the above-mentioned embodiments; the reliability of the quadrilateral is then calculated based on the determined parameter group.
  • the positioning information of the electronic device is obtained through any positioning technology, for example, the positioning information of the electronic device is obtained through a global positioning system (Global Positioning System, GPS).
  • the embodiment of the present application can determine the shooting scene according to the user's intention, for example, receive the shooting scene selected by the user through the human-computer interaction interface, and then determine the corresponding parameter group according to the shooting scene.
  • a setting instruction input by the user may be received, where the setting instruction includes a parameter group identifier, and the corresponding parameter group is determined according to the parameter group identifier indicated by the setting instruction.
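Whether the shooting scene is predicted from positioning information or selected by the user, the scene-to-parameter-group correspondence can be as simple as a lookup table; the scene names and numbers below are purely illustrative and reuse the (weight, mean, sigma) triples from the earlier sketches.

```python
# hypothetical correspondence between shooting scenes and Gaussian parameter groups
SCENE_PARAM_GROUPS = {
    "art_gallery": {"area_ratio": (1.0, 0.8, 0.10), "vertical_angle": (1.0, 0.0, 10.0)},
    "office":      {"area_ratio": (1.0, 0.5, 0.20), "vertical_angle": (1.0, 0.0, 20.0)},
    "school":      {"area_ratio": (1.0, 0.6, 0.15), "vertical_angle": (1.0, 0.0, 15.0)},
}
DEFAULT_SCENE = "office"

def parameter_group_for(scene):
    """Return the parameter group for a predicted or user-selected shooting scene."""
    return SCENE_PARAM_GROUPS.get(scene, SCENE_PARAM_GROUPS[DEFAULT_SCENE])
```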
  • FIG. 8 shows a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • the image processing device 10 includes:
  • an image acquisition unit 11 for acquiring an image to be identified, the image including a target object
  • the image recognition unit 12 is used to recognize at least one quadrilateral in the image, and obtain coordinate information of each quadrilateral;
  • the image processing unit 13 is used for calculating, for each quadrilateral, the credibility of the quadrilateral according to the coordinate information of the quadrilateral through a Gaussian operation, where the credibility is used to characterize the evaluation result of the quadrilateral as the target area, and the target area is used to represent the shape of the target object in the image;
  • the image processing unit 13 is further configured to determine at least one target quadrilateral according to the reliability of each quadrilateral.
  • the image processing device 10 in the embodiment of the present application includes an image acquisition unit 11 and an image recognition unit 12; by identifying at least one quadrilateral in the image, the coordinate information of each quadrilateral is obtained and the preliminary screening of the quadrilaterals is realized; and based on the coordinate information of each quadrilateral, the credibility of the quadrilateral as the target area is obtained through a Gaussian operation.
  • the embodiment of the present application can accurately identify the target quadrilateral that can represent the shape of the target object, which provides a basis for the subsequent correction of the shape of the target object to obtain a better presentation effect.
  • the image processing unit 13 is specifically used for:
  • the reliability of the feature is determined according to the coordinate information of the quadrilateral and the Gaussian function corresponding to the feature, and the quadrilateral includes at least one feature;
  • the image processing unit 13 is specifically used for:
  • the reliability of the feature is obtained, and the Gaussian function includes preset parameters corresponding to the feature.
  • the image processing unit 13 is specifically used for:
  • the ratio of the area of the quadrilateral to the area of the image is taken as the eigenvalue of the feature.
  • the image processing unit 13 is specifically used for:
  • the image processing unit 13 is specifically used for:
  • the image processing unit 13 is specifically used for:
  • determine, according to the coordinate information of the quadrilateral, the coordinates of the middle position of the quadrilateral, and use the coordinates of the middle position as the feature value of the feature; the coordinates of the middle position are the average of the coordinates of the four vertices of the quadrilateral, and the coordinate information includes the coordinates of the four vertices.
  • the preset parameters include weight, mean and variance.
  • the image processing unit 13 is specifically used for:
  • Obtain pose information which is used to characterize the position and/or attitude of the electronic device when the image is collected;
  • the eigenvalues of the features are determined.
  • the image processing unit 13 is specifically used for:
  • the projected gravity vector is determined, and the projected gravity vector is used to represent the projection of the gravity vector in the world coordinate system in the coordinate system where the image is located;
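The excerpt only names the projected gravity vector; one plausible reading, sketched below, is that the world-frame gravity direction is rotated into the camera frame using the device pose and then projected onto the image plane, where it can serve as a reference direction (for example for the second vertical-angle feature). The rotation-matrix input and the sign convention are assumptions.

```python
import numpy as np

def projected_gravity(rotation_world_to_camera):
    """Project the world-frame gravity direction into the image plane.

    `rotation_world_to_camera` is a 3x3 rotation matrix describing the device
    attitude when the image was collected (e.g. derived from IMU readings).
    Returns a unit 2D vector expressed in image coordinates.
    """
    g_world = np.array([0.0, 0.0, -1.0])           # gravity direction in world coordinates
    g_cam = rotation_world_to_camera @ g_world     # rotate into the camera frame
    g_img = g_cam[:2]                              # drop the optical-axis component
    norm = np.linalg.norm(g_img)
    return g_img / norm if norm > 1e-9 else np.array([0.0, 1.0])
```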
  • the data processing apparatus provided in the foregoing embodiments may execute the technical solutions of the foregoing method embodiments, and the implementation principles and technical effects thereof are similar, and are not repeated here.
  • FIG. 9 shows a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • the image processing device 10 further includes:
  • a position obtaining unit 14 used for obtaining positioning information, the positioning information is used to represent the position of the electronic device when the image is collected;
  • the parameter determination unit 15 is used to determine the shooting scene of the image according to the positioning information
  • the parameter determination unit 15 is further configured to determine a corresponding parameter group according to the shooting scene, where the parameter group includes preset parameters corresponding to at least one feature.
  • the data processing apparatus provided in the foregoing embodiments may execute the technical solutions of the foregoing method embodiments, and the implementation principles and technical effects thereof are similar, and are not repeated here.
  • FIG. 10 shows a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • the image processing apparatus 10 further includes: a receiving unit 16;
  • the receiving unit 16 is configured to receive the shooting scene input by the user
  • the parameter determination unit 15 is further configured to determine a corresponding parameter group according to the shooting scene, where the parameter group includes preset parameters corresponding to at least one feature.
  • the image recognition unit 12 is specifically used for:
  • for each candidate graphic, when the candidate graphic meets the first preset condition, it is determined that the candidate graphic is a quadrilateral, and the coordinate information of the quadrilateral is obtained.
  • the first preset condition includes at least one of the following:
  • the two first included angles of the candidate graphics are both smaller than the first preset value, and the first included angle is the included angle of any two non-adjacent line segments in the four line segments;
  • the four second included angles of the candidate graphics are all greater than the second preset value, and the second included angle is the included angle of any two adjacent line segments in the four line segments;
  • the area ratio of the candidate graphic is greater than the third preset value, and the area ratio is the ratio of the area of the candidate graphic to the area of the image.
  • the image recognition unit 12 is specifically used for:
  • the image recognition unit 12 is specifically used for:
  • when two line segments satisfy the second preset condition, the two line segments are merged.
  • the second preset condition includes:
  • either of the two line segments is covered by the other;
  • the distance between the two line segments is smaller than the fourth preset value.
  • the image processing unit 13 is specifically used for:
  • the quadrilateral is determined as the target quadrilateral.
  • the data processing apparatus provided in the foregoing embodiments may execute the technical solutions of the foregoing method embodiments, and the implementation principles and technical effects thereof are similar, and are not repeated here.
  • FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device shown in FIG. 11 includes a processor 1210, and the processor 1210 can call and run a computer program from a memory, so as to implement the method in this embodiment of the present application.
  • the electronic device 1200 may further include a memory 1220 .
  • the processor 1210 may call and run a computer program from the memory 1220 to implement the methods in the embodiments of the present application.
  • the memory 1220 may be a separate device independent of the processor 1210, or may be integrated in the processor 1210.
  • the electronic device 1200 may further include a transceiver 1230, and the processor 1210 may control the transceiver 1230 to communicate with other devices; specifically, it may send information or data to other devices, or receive information or data sent by other devices.
  • the transceiver 1230 may include a transmitter and a receiver.
  • the transceiver 1230 may further include antennas, and the number of the antennas may be one or more.
  • the electronic device 1200 may implement corresponding processes in each method of the embodiments of the present application, which are not repeated here for brevity.
  • the processor in this embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above method embodiments may be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the above-mentioned processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memory in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory.
  • Volatile memory may be Random Access Memory (RAM), which acts as an external cache.
  • the memory in this embodiment of the present application may also be a static random access memory (static RAM, SRAM), a dynamic random access memory (dynamic RAM, DRAM), a synchronous dynamic random access memory (synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), a synchronous link dynamic random access memory (synchlink DRAM, SLDRAM), a direct Rambus random access memory (Direct Rambus RAM, DR RAM), and so on. That is, the memory in the embodiments of the present application is intended to include, but is not limited to, these and any other suitable types of memory.
  • Embodiments of the present application further provide a computer-readable storage medium for storing a computer program.
  • the computer-readable storage medium can be applied to the electronic device in the embodiments of the present application, and the computer program enables the computer to execute the corresponding processes implemented by the electronic device in the various methods of the embodiments of the present application.
  • Embodiments of the present application also provide a computer program product, including computer program instructions.
  • the computer program product can be applied to the electronic device in the embodiments of the present application, and the computer program instructions cause the computer to execute the corresponding processes implemented by the electronic device in the various methods of the embodiments of the present application, which will not be repeated here for brevity.
  • the embodiments of the present application also provide a computer program.
  • the computer program can be applied to the electronic device in the embodiments of the present application.
  • when the computer program runs on the computer, the computer executes the corresponding processes implemented by the electronic device in the various methods of the embodiments of the present application, which, for the sake of brevity, will not be repeated here.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the above units is only a logical function division; in actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling, direct coupling or communication connection may be implemented through some interfaces as an indirect coupling or communication connection between devices or units, and may be in electrical, mechanical or other forms.
  • the units described above as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • if the above functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

According to various embodiments, the present application relates to an image processing method and apparatus, a device, and a storage medium. The method comprises: acquiring an image to be identified, the image comprising a target object; identifying at least one quadrilateral in the image to obtain coordinate information of each quadrilateral; for each quadrilateral, calculating a confidence level of the quadrilateral according to the coordinate information of the quadrilateral by means of a Gaussian operation, the confidence level being used to represent an evaluation result of the quadrilateral serving as a target area, and the target area being used to represent the shape of the target object in the image; and determining at least one target quadrilateral according to the confidence level of each quadrilateral. A target quadrilateral representing the shape of a target object can thus be identified accurately, which provides a basis for subsequently correcting the shape of the target object so as to obtain a better presentation effect.
PCT/CN2020/124113 2020-10-27 2020-10-27 Procédé et appareil de traitement d'image, dispositif, et support de stockage WO2022087846A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080103272.8A CN115885314A (zh) 2020-10-27 2020-10-27 图像的处理方法、装置、设备以及存储介质
PCT/CN2020/124113 WO2022087846A1 (fr) 2020-10-27 2020-10-27 Procédé et appareil de traitement d'image, dispositif, et support de stockage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/124113 WO2022087846A1 (fr) 2020-10-27 2020-10-27 Procédé et appareil de traitement d'image, dispositif, et support de stockage

Publications (1)

Publication Number Publication Date
WO2022087846A1 (fr) 2022-05-05

Family

ID=81381637

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/124113 WO2022087846A1 (fr) 2020-10-27 2020-10-27 Procédé et appareil de traitement d'image, dispositif, et support de stockage

Country Status (2)

Country Link
CN (1) CN115885314A (fr)
WO (1) WO2022087846A1 (fr)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7672507B2 (en) * 2004-01-30 2010-03-02 Hewlett-Packard Development Company, L.P. Image processing methods and systems
US8897538B1 (en) * 2013-08-26 2014-11-25 Vertifi Software, LLC Document image capturing and processing
US9754419B2 (en) * 2014-11-16 2017-09-05 Eonite Perception Inc. Systems and methods for augmented reality preparation, processing, and application
US10650526B2 (en) * 2016-06-28 2020-05-12 Canon Kabushiki Kaisha Image processing apparatus, image capturing apparatus, image processing method, and storage medium
CN108604374B (zh) * 2016-09-22 2020-03-10 华为技术有限公司 Image detection method and terminal

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7065261B1 (en) * 1999-03-23 2006-06-20 Minolta Co., Ltd. Image processing device and image processing method for correction of image distortion
JP2005122328A (ja) * 2003-10-14 2005-05-12 Casio Comput Co Ltd Imaging apparatus, image processing method therefor, and program
CN101248454A (zh) * 2005-08-25 2008-08-20 株式会社理光 Image processing method and apparatus, digital camera, and recording medium recording an image processing program
CN106780964A (zh) * 2016-12-06 2017-05-31 深圳怡化电脑股份有限公司 Method and apparatus for correcting banknote images
CN108665495A (zh) * 2017-03-30 2018-10-16 展讯通信(上海)有限公司 Image processing method and apparatus, and mobile terminal
CN110136156A (zh) * 2018-02-02 2019-08-16 北京三快在线科技有限公司 Polygon region detection method and apparatus
CN110689501A (zh) * 2019-09-29 2020-01-14 京东方科技集团股份有限公司 Distortion correction method and apparatus, electronic device, and computer-readable storage medium
CN111108515A (zh) * 2019-12-27 2020-05-05 威创集团股份有限公司 Picture target point correction method, apparatus, device, and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152382A (zh) * 2023-03-06 2023-05-23 清华大学 Digital representation conversion method and apparatus for structural plane layout drawings, and electronic device
CN116152382B (zh) * 2023-03-06 2023-10-20 清华大学 Digital representation conversion method and apparatus for structural plane layout drawings, and electronic device
CN117274366A (zh) * 2023-11-22 2023-12-22 合肥晶合集成电路股份有限公司 Line margin determination method and apparatus
CN117274366B (zh) * 2023-11-22 2024-02-20 合肥晶合集成电路股份有限公司 Line margin determination method and apparatus
CN118368408A (zh) * 2024-06-20 2024-07-19 浙江深象智能科技有限公司 Detection method for image acquisition device, device, and storage medium

Also Published As

Publication number Publication date
CN115885314A (zh) 2023-03-31


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20959010

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20959010

Country of ref document: EP

Kind code of ref document: A1