CN110414502B - Image processing method and device, electronic equipment and computer readable medium - Google Patents

Image processing method and device, electronic equipment and computer readable medium

Info

Publication number
CN110414502B
CN110414502B
Authority
CN
China
Prior art keywords
target object
polygon
image
determining
target
Prior art date
Legal status
Active
Application number
CN201910711755.5A
Other languages
Chinese (zh)
Other versions
CN110414502A (en)
Inventor
王洁
刘设伟
王亚领
Current Assignee
Taikang Online Health Technology Wuhan Co ltd
Taikang Online Property Insurance Co Ltd
Original Assignee
Taikang Insurance Group Co Ltd
Taikang Online Property Insurance Co Ltd
Priority date
Filing date
Publication date
Application filed by Taikang Insurance Group Co Ltd and Taikang Online Property Insurance Co Ltd
Priority to CN201910711755.5A
Publication of CN110414502A
Application granted
Publication of CN110414502B
Legal status: Active

Classifications

    • G06T7/13 Image analysis; Segmentation; Edge detection
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V10/24 Aligning, centring, orientation detection or correction of the image

Abstract

The present disclosure provides an image processing method and apparatus, an electronic device, and a computer-readable medium. The method includes: acquiring an image to be processed that includes a target object; determining, in the image to be processed, a contour line of the target object and a first circumscribed polygon of the target object; determining a second circumscribed polygon of the target object according to the contour line and the first circumscribed polygon; and correcting the target object according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image. According to the technical solution of the embodiments, the target object in the image to be processed can be corrected using the first and second circumscribed polygons to obtain a standard target object image, so that the target object can be accurately located and recognized in subsequent processing.

Description

Image processing method and device, electronic equipment and computer readable medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable medium.
Background
The identity card is the most important certificate for proving personal identity information and plays an important role in identity verification.
With advances in technology, automatic recognition is now widely used to collect identity card information so that service workflows can proceed smoothly.
When identity card information is extracted from an image shot from directly above, the identity card region can be accurately located through rotation, translation, scaling, and similar operations, which improves the accuracy of information extraction. For an identity card image shot at an oblique angle, however, applying image recognition to a region obtained in this way produces large errors because of the perspective distortion.
Therefore, to improve the accuracy of information extraction, an image processing method that can accurately locate and correct the image is needed.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
In view of the above, the present disclosure provides an image processing method and apparatus, an electronic device, and a computer-readable medium, which can accurately locate and correct the image region of a target object.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided, which includes: acquiring an image to be processed including a target object; determining, in the image to be processed, a contour line of the target object and a first circumscribed polygon of the target object; determining a second circumscribed polygon of the target object according to the contour line and the first circumscribed polygon; and correcting the target object according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image.
In some embodiments, the determining a second circumscribed polygon of the target object from the contour line and the first circumscribed polygon includes: determining, on the contour line and for each side of the first circumscribed polygon, the point with the smallest distance to that side as a first fixed point; determining, on the contour line, a second fixed point corresponding to each first fixed point; determining a target straight line from each first fixed point and its corresponding second fixed point; and determining target vertices from the target straight lines, and determining a second circumscribed polygon of the target object from the target vertices.
In some embodiments, the first circumscribed polygon includes a first edge comprising a first vertex and a second vertex; and determining, on the contour line, the second fixed point corresponding to each first fixed point includes: obtaining a first distance and a second distance from the first fixed point corresponding to the first edge to the first vertex and the second vertex, respectively; if the first distance is greater than the second distance, determining, from the contour line, a point set formed by points whose distance to the first fixed point is greater than a distance threshold and whose distance to the first vertex is less than the first distance; and determining the point in the point set with the smallest distance to the first edge as the second fixed point.
In some embodiments, the first circumscribed polygon includes a first edge; and determining each target vertex according to each target straight line includes: determining the target straight line corresponding to the first edge according to the first fixed point and the second fixed point corresponding to the first edge; extending the target straight line corresponding to the first edge so that it intersects the first circumscribed polygon at a first point and a second point, wherein the first point lies on the first edge; and determining the second point as the target vertex corresponding to the first edge.
In some embodiments, the first circumscribing polygon is a rectangle and the second circumscribing polygon is a quadrilateral.
In some embodiments, the correcting the target object according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image includes: based on the first circumscribed polygon and the second circumscribed polygon, correcting the target object by using a perspective transformation function to obtain a standard target object image.
In some embodiments, the determining, in the image to be processed, the contour line of the target object and the first circumscribed polygon of the target object includes: determining the contour line of the target object and the first circumscribed polygon of the target object in the image to be processed based on a document layout analysis algorithm.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including: an image acquisition module configured to acquire an image to be processed including a target object; a preprocessing module configured to determine a contour line of the target object and a first circumscribed polygon of the target object in the image to be processed; a second circumscribed polygon generation module configured to determine a second circumscribed polygon of the target object according to the contour line and the first circumscribed polygon; and a target object image acquisition module configured to correct the target object according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of the above.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable medium is proposed, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the image processing method according to any one of the above.
According to the image processing method and apparatus, the electronic device, and the computer-readable medium provided by some embodiments of the present disclosure, on one hand, by acquiring the contour line and the first circumscribed polygon in the image to be processed, the second circumscribed polygon of the target object may be determined to locate the real area of the target object. On the other hand, the real area of the target object can be corrected according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image. In this way, the target object in the image to be processed can be accurately located and corrected.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.
Fig. 1 shows a schematic diagram of an exemplary system architecture of an image processing method or an image processing apparatus to which the embodiments of the present disclosure can be applied.
FIG. 2 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 3 is a diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 4 is a flow diagram illustrating another method of image processing according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating yet another image processing method according to an exemplary embodiment.
FIG. 6 is a schematic diagram illustrating yet another image processing method according to an exemplary embodiment.
FIG. 7 is a flow chart illustrating yet another image processing method according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating still another image processing apparatus according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating still another image processing apparatus according to an exemplary embodiment.
Fig. 12 is a schematic diagram illustrating a configuration of a computer system applied to an image processing apparatus according to an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The drawings are merely schematic illustrations of the present disclosure, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and steps, nor do they necessarily have to be performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In this specification, the terms "a", "an", "the", "said" and "at least one" are used to indicate the presence of one or more elements/components/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first," "second," and "third," etc. are used merely as labels, and are not limiting on the number of their objects.
The following detailed description of exemplary embodiments of the disclosure refers to the accompanying drawings.
Fig. 1 shows a schematic diagram of an exemplary system architecture of an image processing method or an image processing apparatus to which the embodiments of the present disclosure can be applied.
As shown in fig. 1, the system architecture 100 may include an image acquisition apparatus 101, terminal devices 102 and 103, a network 104, and a server 105. The network 104 is a medium to provide a communication link between the image acquisition apparatus 101, the terminal devices 102 and 103, and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The image capturing device 101 may be any device capable of capturing an image, and the image capturing device may be, for example, a video camera, a mobile phone with a camera, a computer, etc., which is not limited in this disclosure.
The user may use the terminal devices 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 102 and 103 may be various electronic devices having display screens and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a background management server that provides support for devices operated by users using the terminal apparatuses 102, 103. The background management server can analyze and process the received data such as the request and feed back the processing result to the terminal equipment.
The server 105 may, for example, acquire an image to be processed including a target object; the server 105 may determine, for example, in the image to be processed, a contour line of the target object, and a first circumscribed polygon of the target object; server 105 may determine a second circumscribing polygon for the target object, e.g., from the contour line and the first circumscribing polygon; the server 105 may rectify the target object, for example, according to the first circumscribing polygon and the second circumscribing polygon, to obtain a standard target object image.
It should be understood that the number of image capturing devices, terminal devices, networks and servers in fig. 1 is only illustrative, and the server 105 may be a physical server or may be composed of a plurality of servers, and there may be any number of terminal devices, networks and servers according to actual needs.
FIG. 2 is a flow diagram illustrating an image processing method according to an exemplary embodiment. The method provided by the embodiment of the present disclosure may be processed by any electronic device with computing processing capability, for example, the server 105 and/or the terminal devices 102 and 103 in the embodiment of fig. 1 described above, and in the following embodiment, the server 105 is taken as an execution subject for example, but the present disclosure is not limited thereto.
Referring to fig. 2, an image processing method provided by an embodiment of the present disclosure may include the following steps.
In step S201, an image to be processed including a target object is acquired.
In some embodiments, the image to be processed including the target object may be acquired by the image acquiring apparatus 101 in the above-described embodiment of fig. 1. The image acquisition apparatus 101 may upload the acquired to-be-processed image including the target object to the server 105.
In some embodiments, the target object may be a certificate such as an identity card, a military officer's certificate, or a student ID card, or any other object whose information can be captured by photographing, such as a contract or an invoice.
In some embodiments, the image to be processed may be captured at an oblique angle and may also contain some background content, so that the text information and/or image information carried by the target object cannot be accurately obtained by applying image recognition directly to the image to be processed.
In the present disclosure, the embodiments are described by taking an identity card as the target object, but the disclosure is not limited thereto, and the target object may vary with the application scenario.
Step S202, determining the contour line of the target object and a first circumscribed polygon of the target object in the image to be processed.
In some embodiments, it is assumed that the target object is an identity card. The image to be processed as shown in fig. 3 can be obtained if the identification card is photographed from an oblique angle.
In some embodiments, a contour line of the target object and a first circumscribed polygon of the target object may be determined in the image to be processed including the target object based on a document layout analysis algorithm.
In some embodiments, the document layout analysis algorithm may be, for example, the dhSegment algorithm, a general deep-learning method for document segmentation.
In some embodiments, the principle of determining the contour line of the target object using the dhSegment algorithm may be described as follows: each pixel of the input image to be processed is classified into one of two classes, that is, a pixel belonging to the target object region is labeled 1 and any other pixel is labeled 0; then the outline of the pixels labeled 1 (the target object region) is extracted using the findContours (contour finding) function of the OpenCV (open source computer vision) library to obtain the corresponding contour line, and the first circumscribed polygon of the target object is determined from that contour line.
For the image to be processed including the identity card shown in fig. 3, the findContours function of the OpenCV library may be used to extract the outline of the pixels labeled 1 (within the target object region) to obtain the corresponding contour line 301, and the minAreaRect function of the OpenCV library may then be used to obtain the first circumscribed polygon 302 of the identity card region.
In some embodiments, the first circumscribed polygon of the identity card region may be a rectangle, although the present disclosure does not limit its shape.
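As an illustration of step S202, the following sketch shows how a binary segmentation mask (for example, the output of dhSegment, with target-object pixels labeled 1) could be turned into the contour line 301 and the first circumscribed polygon 302 using OpenCV in Python. The function name contour_and_first_polygon, the use of cv2.boxPoints, and the exact parameters are illustrative assumptions rather than details taken from the patent.

```python
# Hypothetical sketch of step S202 (not the patent's reference implementation):
# binary mask -> contour line -> minimum-area bounding rectangle.
import cv2
import numpy as np

def contour_and_first_polygon(mask):
    """mask: uint8 array with 1 (or 255) inside the target object region, 0 elsewhere."""
    # findContours extracts the outlines of the non-zero regions (OpenCV 4.x signature).
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    # Keep the largest outline as the contour line of the target object.
    contour = max(contours, key=cv2.contourArea)
    # minAreaRect gives the minimum-area rotated rectangle, i.e. the first
    # circumscribed polygon; boxPoints converts it to its four vertices.
    rect = cv2.minAreaRect(contour)
    first_polygon = cv2.boxPoints(rect)      # (4, 2) array of vertices
    return contour.reshape(-1, 2), first_polygon
```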
Step S203, determining a second circumscribed polygon of the target object according to the contour line and the first circumscribed polygon.
In some embodiments, for an obliquely captured image to be processed that includes a target object, the first circumscribed polygon may not be the true circumscribed polygon of the target object because of perspective distortion.
For the image to be processed including the identification card as shown in fig. 3, the second circumscribed polygon 303 may be determined according to the outline 301 of the identification card region and the first circumscribed polygon 302.
In some embodiments, the second circumscribed polygon 303 determined from the outline 301 of the identification card area and the first circumscribed polygon 302 may be a quadrilateral, but the shape of the second circumscribed polygon is not limited by this disclosure.
Step S204, correcting the target object according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image.
In some embodiments, the target object may be corrected according to the first and second circumscribed polygons of the target object. For example, for an image to be processed including an identity card, the identity card may be corrected using a perspective transformation function based on the first circumscribed polygon 302 and the second circumscribed polygon 303 to obtain a standard identity card image. The perspective transformation function may be the warpPerspective function of the OpenCV library.
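The following is a minimal sketch of the correction in step S204, under the assumption that the four vertices of the second circumscribed polygon are mapped onto the corresponding four vertices of the first circumscribed polygon; cv2.getPerspectiveTransform is used here to build the matrix passed to warpPerspective, and the helper name rectify is introduced only for illustration.

```python
# Hypothetical sketch of step S204: perspective correction with OpenCV.
import cv2
import numpy as np

def rectify(image, second_polygon, first_polygon):
    """Map the quadrilateral (real card area) onto the circumscribed rectangle."""
    src = np.float32(second_polygon)   # 4 vertices of the second circumscribed polygon
    dst = np.float32(first_polygon)    # 4 corresponding vertices of the rectangle
    M = cv2.getPerspectiveTransform(src, dst)
    h, w = image.shape[:2]
    # The rectified card then occupies the rectangle and can be cropped from it.
    return cv2.warpPerspective(image, M, (w, h))
```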
In the image processing method provided in the foregoing embodiment, on one hand, the second circumscribed polygon, which is the actual circumscribed polygon of the target object, may be accurately determined according to the contour line of the target object and the first circumscribed polygon; on the other hand, the image of the target object can be corrected according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image, which improves the accuracy of subsequent processing of the target object image.
FIG. 4 is a flow diagram illustrating another method of image processing according to an exemplary embodiment.
In some embodiments, determining the second circumscribing polygon of the target object based on the contour line and the first circumscribing polygon in the embodiment shown in FIG. 3 may include the steps shown in FIG. 4.
Step S2031, determining a closest point on the contour line for each edge of the first circumscribed polygon, respectively, as a first fixed point.
As shown in fig. 3, the outline 301 of the id card area and the first circumscribed polygon 302 may be obtained through the steps shown in fig. 2.
In some embodiments, a closest point is determined on the contour line for each edge of the first circumscribed polygon as the first fixed point.
In some embodiments, for the image to be processed including the identity card, assuming that the first circumscribed polygon 302 is a rectangle, a closest point may be determined on the contour line for each edge of the first circumscribed polygon as a first fixed point. There is therefore not only a single first fixed point but one per edge; for example, since the rectangular first circumscribed polygon 302 has four edges, four first fixed points may be determined on the contour line.
In some embodiments, the vertices of the first circumscribed polygon may be numbered in a certain order (e.g., clockwise, although the present disclosure is not limited thereto, and a counterclockwise order may also be used). For example, if the first circumscribed polygon has n sides, where n is a positive integer greater than or equal to 3, its vertices may be denoted PR[i] (i = 0, 1, 2, ..., n-1), and the straight line between PR[i] and PR[(i+1) % n] is denoted LR[i]. As shown in FIG. 3, the four vertices and four sides of the first circumscribed polygon of the identity card region may be labeled in this way, and the line between vertices PR[3] and PR[0] is denoted LR[3].
In some embodiments, the Euclidean distance DC from each point PC on the contour line to the line LR[i] may be calculated, and each point PC and its distance DC form a pair AC = (PC, DC). The pairs AC are sorted in ascending order using the distance DC as the key; after sorting, AC[0][0] is the contour point closest to LR[i], that is, the first fixed point corresponding to the edge LR[i].
In some embodiments, the above operations may be performed once for each edge of the first circumscribed polygon to determine a first fixed point corresponding to each edge.
As shown in FIG. 3, taking the side LR[3] of the first circumscribed polygon of the identity card region as an example, the first fixed point PL0 (304) corresponding to LR[3] can be determined from the distances between the contour points and the straight line LR[3].
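As a concrete illustration of step S2031, the sketch below computes the distance DC from every contour point PC to the line through one edge LR[i] and keeps the point with the smallest DC; the helper names point_to_line_distance and first_fixed_point are hypothetical.

```python
# Hypothetical sketch of step S2031: the first fixed point of one edge LR[i].
import numpy as np

def point_to_line_distance(p, a, b):
    """Perpendicular (Euclidean) distance from point p to the line through a and b."""
    return abs((b[0] - a[0]) * (p[1] - a[1]) -
               (b[1] - a[1]) * (p[0] - a[0])) / np.linalg.norm(b - a)

def first_fixed_point(contour, a, b):
    """contour: (K, 2) array of points; a, b: endpoints PR[i], PR[(i+1) % n] of edge LR[i]."""
    # Form the pairs AC = (PC, DC) implicitly and pick the pair with the smallest DC.
    distances = [point_to_line_distance(p, a, b) for p in contour]
    return contour[int(np.argmin(distances))]
```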
Step S2032 of determining second fixed points corresponding to the first fixed points on the contour lines, respectively.
In some embodiments, the second fixed point corresponding to each edge of the first circumscribed polygon may be determined from the first fixed point corresponding to that edge.
In some embodiments, the first circumscribing polygon includes a first edge that includes a first vertex and a second vertex, so determining a second fixed point from each first fixed point may include the steps shown in fig. 5.
Step S20321, obtaining a first distance from the first fixed point corresponding to the first edge to the first vertex and a second distance from that first fixed point to the second vertex.
In some embodiments, the first distance and the second distance refer to euclidean distances.
In some embodiments, an image coordinate system may be constructed in the image to be processed, and then the first fixed point, the first vertex and the second vertex are mapped in the image coordinate system, and the first distance and the second distance between them are further calculated. For example, in the diagram shown in fig. 3, an image coordinate system may be constructed with a vertex at the lower left corner of the image to be processed as an origin, a horizontal direction as an abscissa, and a vertical direction as an ordinate.
Step S20322, if the first distance is greater than the second distance, determining a point set formed by points in the contour line, the points having a distance to the first fixed point greater than a distance threshold and having a distance to the first vertex less than the first distance.
In some embodiments, if the first distance is greater than a second distance, determining in the contour line a set of points formed by points having a distance from the first fixed point greater than a distance threshold and a distance from the first vertex less than the first distance; and if the first distance is smaller than the second distance, determining a point set formed by points which have a distance larger than a distance threshold value and have a distance smaller than the second distance from the second vertex in the contour line.
As shown in FIG. 3, with vertex PR[0] as the first vertex and PR[3] as the second vertex, the first distance from the first fixed point PL0 to the first vertex PR[0] and the second distance from PL0 to the second vertex PR[3] are calculated. Since the first distance is greater than the second distance, a point set is determined from the contour line, formed by points whose distance to the first fixed point is greater than the distance threshold and whose distance to the first vertex PR[0] is less than the first distance.
In some embodiments, the distance threshold may be one fifth of the smaller of the width R_W and the height R_H of the first circumscribed polygon R, i.e., the distance threshold is min(R_W, R_H) / 5.
Step S20323 of determining, as the second fixed point, a point having a minimum distance from the first edge from the set of points.
In the embodiment shown in fig. 5, the second fixed point is determined based on the first fixed point based on the idea of approximation, so as to facilitate the subsequent determination of the second circumscribed polygon.
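A sketch of steps S20321 to S20323 follows, under the assumptions stated above: distances are ordinary Euclidean distances in the image coordinate system, the distance threshold is min(R_W, R_H)/5, and contour points are held as NumPy arrays. The function name and signature are illustrative, and the sketch assumes the candidate point set is non-empty.

```python
# Hypothetical sketch of steps S20321-S20323: the second fixed point of one edge.
import numpy as np

def second_fixed_point(contour, fixed, a, b, threshold):
    """contour: (K, 2) float array; fixed: first fixed point of the edge a-b;
    a, b: the first and second vertex of that edge; threshold: min(R_W, R_H) / 5."""
    d1 = np.linalg.norm(fixed - a)     # first distance (to the first vertex)
    d2 = np.linalg.norm(fixed - b)     # second distance (to the second vertex)
    # The farther vertex is used as the reference, with its distance as the bound.
    ref, bound = (a, d1) if d1 > d2 else (b, d2)
    candidates = [p for p in contour
                  if np.linalg.norm(p - fixed) > threshold
                  and np.linalg.norm(p - ref) < bound]
    # Among the candidates, keep the point closest to the line through the edge a-b.
    dists = [abs((b[0] - a[0]) * (p[1] - a[1]) -
                 (b[1] - a[1]) * (p[0] - a[0])) / np.linalg.norm(b - a)
             for p in candidates]
    return candidates[int(np.argmin(dists))]
```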
Step S2033, determining each target straight line according to each first fixed point and its corresponding second fixed point.
In some embodiments, the first circumscribed polygon obtained from the to-be-processed image of the target object may have a plurality of edges, and the corresponding first and second fixed points may be determined for the plurality of edges of the first circumscribed polygon according to the steps of fig. 2 and 5, respectively. And respectively connecting the first fixed point and the second fixed point corresponding to the plurality of sides of the first circumscribed polygon to obtain the target straight lines corresponding to the plurality of sides.
As shown in FIG. 3, the side LR[3] of the first circumscribed polygon of the identity card image corresponds to the first fixed point PL0 (304) and the second fixed point PL1 (305), and the target straight line LC[3] (306) corresponding to LR[3] can be determined by connecting PL0 and PL1.
Step S2034, determining each target vertex according to each target straight line, and determining a second external polygon of the target object according to each target vertex.
In some embodiments, the first circumscribing polygon of the target object includes a first edge; wherein, determining each target vertex according to each target straight line comprises the steps shown in fig. 6.
Step S20341, determining a target straight line corresponding to the first edge according to the first fixed point and the second fixed point corresponding to the first edge.
In some embodiments, the first edge of the first circumscribed polygon of the target object does not refer to a specific edge, but refers to any one of the edges of the first circumscribed polygon.
In some embodiments, the second fixed point corresponding to the first edge of the first circumscribed polygon of the target object may be determined step by step as shown in fig. 5, and the target straight line corresponding to the first edge may be determined by connecting the first fixed point and the second fixed point of that edge.
Step S20342, extending the target straight line corresponding to the first edge so that it intersects the first circumscribed polygon at a first point and a second point, where the first point lies on the first edge.
In some embodiments, the first point is a point where the first side of the first circumscribed polygon intersects with the target line corresponding to the first side.
Step S20343, determining that the second point is a target vertex corresponding to the first edge.
In some embodiments, of the two intersection points between the target straight line of the first side and the first circumscribed polygon, the one that does not lie on the first side is determined to be the target vertex. As shown in FIG. 3, let LR[3] be the first side of the first circumscribed polygon; the target straight line determined by the first fixed point PL0 and the second fixed point PL1 of this side intersects the first circumscribed polygon at two points, of which the intersection point not lying on the first side LR[3] is the second point. This second point is the target vertex PQ[3] corresponding to the first side of the first circumscribed polygon.
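To make steps S20341 to S20343 concrete, the sketch below intersects the target straight line through the two fixed points with the edges of the first circumscribed polygon and returns the crossing that does not lie on the first edge; line_intersection and target_vertex are hypothetical helper names, and the segment test is a simplification of the geometric description above.

```python
# Hypothetical sketch of steps S20341-S20343: target vertex of one edge LR[i].
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines p1-p2 and p3-p4, or None if they are parallel."""
    d1, d2 = p2 - p1, p4 - p3
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1

def target_vertex(fixed1, fixed2, polygon, i):
    """polygon: (4, 2) vertices PR[0..3]; i: index of the first edge LR[i].
    Returns the crossing of the target line with the polygon boundary that is not on LR[i]."""
    n = len(polygon)
    for j in range(n):
        if j == i:
            continue                               # skip the first edge itself
        a, b = polygon[j], polygon[(j + 1) % n]
        q = line_intersection(fixed1, fixed2, a, b)
        if q is None:
            continue
        # Accept only crossings that actually fall on the segment a-b.
        s = np.dot(q - a, b - a) / np.dot(b - a, b - a)
        if 0.0 <= s <= 1.0:
            return q                               # the second point, i.e. the target vertex
    return None
```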
In the embodiment shown in fig. 6, the target vertex corresponding to the first edge of the target object is conveniently determined according to the target straight line. According to the method, the target vertex corresponding to each side of the first circumscribed polygon of the target object can be rapidly determined.
In some embodiments, after determining the target vertices corresponding to the respective edges of the first circumscribed polygon according to the method shown in fig. 6, the second circumscribed polygon of the target object may be determined by sequentially connecting the target vertices of the respective edges in order (e.g., clockwise order).
With continued reference to FIG. 3, the target vertices PQ[0], PQ[1], PQ[2], and PQ[3] corresponding to the edges of the first circumscribed polygon of the identity card are determined in this way, and the target vertices are connected in order to determine the second circumscribed polygon 307 of the target object, whose four edges are LC[0], LC[1], LC[2], and LC[3], respectively.
The technical solution provided by the embodiment shown in fig. 4 has the following advantages: on one hand, the second fixed point is determined from the first fixed point based on the idea of approximation, which facilitates determining the edges of the second circumscribed polygon; on the other hand, a target straight line is determined from the first and second fixed points, the target vertices are then determined, and the second circumscribed polygon is finally determined based on the target vertices.
FIG. 7 is a flow chart illustrating yet another image processing method according to an exemplary embodiment. Referring to fig. 7, the image processing method provided by the present embodiment may include the following steps.
Step S701, obtaining an identity card image to be processed.
Referring to FIG. 3, in some embodiments an identity card image to be processed may be acquired as shown in that figure.
Step S702, determining, in the identity card image to be processed, the contour line of the identity card and a first circumscribed polygon of the identity card.
With continued reference to fig. 3, in some embodiments, the outline 301 of the identification card and the first circumscribing polygon 302 may be determined in the identification card pending image using a dhSegment document layout analysis algorithm.
Step S703, numbering the edges of the first circumscribed polygon clockwise.
In some embodiments, the four sides of the rectangle may be numbered in clockwise order. As shown in fig. 3, the first circumscribed polygon of the identity card is a rectangle, and its four sides are numbered LR[i] (i = 0, 1, 2, 3) in clockwise order.
In step S704, let n = 1, where N is the number of edges of the first circumscribed polygon.
In some embodiments, the first circumscribed polygon of the identification card is a rectangle, so the corresponding N equals 4.
Step S705, determining a closest point on the contour line for the nth edge of the first circumscribed polygon as a first fixed point on the nth edge.
Step S706, a first distance and a second distance from the first fixed point corresponding to the nth edge to the first vertex and the second vertex of the nth edge are obtained.
In some embodiments, the first distance and the second distance refer to euclidean distances.
In some embodiments, an image coordinate system may be constructed in the image to be processed, and then the first fixed point, the first vertex and the second vertex are mapped in the image coordinate system, and the first distance and the second distance between them are further calculated. For example, in the diagram shown in fig. 3, an image coordinate system may be constructed with a vertex at the lower left corner of the image to be processed as an origin, a horizontal direction as an abscissa, and a vertical direction as an ordinate.
Step S707, if the first distance is greater than the second distance, determining a point set formed by points having a distance greater than a distance threshold from the first fixed point and a distance less than the first distance from the first vertex from the contour line.
As shown in FIG. 3, with vertex PR[0] as the first vertex and PR[3] as the second vertex, the first distance from the first fixed point PL0 to the first vertex PR[0] and the second distance from PL0 to the second vertex PR[3] are calculated. Since the first distance is greater than the second distance, a point set is determined from the contour line, formed by points whose distance to the first fixed point is greater than the distance threshold and whose distance to the first vertex PR[0] is less than the first distance.
In some embodiments, if the first distance is greater than a second distance, determining in the contour line a set of points formed by points having a distance from the first fixed point greater than a distance threshold and a distance from the first vertex less than the first distance; and if the first distance is smaller than the second distance, determining a point set formed by points which have a distance larger than a distance threshold value and have a distance smaller than the second distance from the second vertex in the contour line.
In some embodiments, the distance threshold may be one fifth of the smaller of the width R_W and the height R_H of the first circumscribed polygon R, i.e., the distance threshold is min(R_W, R_H) / 5.
In step S708, a point having the smallest distance to the nth edge is determined from the point set as the second fixed point of the nth edge.
In step S709, the intersection of the straight line formed by the first fixed point and the second fixed point of the nth side with the (n-1)th side (if n = 1, the (n-1)th side is taken to be the Nth side) is taken as the target vertex.
In step S710, it is determined whether n is equal to N. If n equals N, step S712 is performed; otherwise, step S711 is performed.
In step S711, let n = n + 1, and steps S705 to S710 are repeated until n equals N.
In step S712, the target vertices corresponding to the edges are sequentially connected to form a second circumscribed polygon.
In some embodiments, after determining the target vertices corresponding to the respective edges of the first circumscribed polygon according to the method shown in fig. 6, the second circumscribed polygon of the target object may be determined by sequentially connecting the target vertices of the respective edges in order (e.g., clockwise order).
With continued reference to FIG. 3, the target vertices PQ[0], PQ[1], PQ[2], and PQ[3] corresponding to the edges of the first circumscribed polygon of the identity card are determined in this way, and the target vertices are connected in order to determine the second circumscribed polygon 307 of the target object, whose four edges are LC[0], LC[1], LC[2], and LC[3], respectively.
Step S713, based on the first circumscribed polygon and the second circumscribed polygon, correcting the certificate by using a perspective transformation function to obtain a standard certificate image.
In the embodiment of fig. 7, on one hand, by acquiring the contour line and the first circumscribed polygon in the image to be processed, the second circumscribed polygon of the target object may be determined based on an approximation algorithm to obtain the real area of the target object. On the other hand, the real area of the target object can be corrected according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image. In this way, the target object in the image to be processed can be accurately located and corrected.
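Putting the pieces together, the following sketch follows the flow of fig. 7 end to end. It reuses the hypothetical helpers sketched earlier in this description (contour_and_first_polygon, first_fixed_point, second_fixed_point, target_vertex, rectify); none of these names or signatures come from the patent itself, and the loop simply visits the edges in the clockwise order of their numbering.

```python
# Hypothetical end-to-end sketch of the flow of fig. 7.
import numpy as np

def process_id_card(image, mask):
    """mask: binary segmentation of the certificate region; image: the image to be processed."""
    contour, rect = contour_and_first_polygon(mask)              # steps S701-S703
    contour = contour.astype(np.float32)
    rect = rect.astype(np.float32)                               # vertices PR[0..3], clockwise
    r_w = np.linalg.norm(rect[1] - rect[0])
    r_h = np.linalg.norm(rect[2] - rect[1])
    threshold = min(r_w, r_h) / 5.0                              # distance threshold min(R_W, R_H)/5
    vertices = []
    for i in range(len(rect)):                                   # loop of steps S704-S711
        a, b = rect[i], rect[(i + 1) % len(rect)]                # edge LR[i] from PR[i] to PR[i+1]
        f1 = first_fixed_point(contour, a, b)                    # step S705
        f2 = second_fixed_point(contour, f1, a, b, threshold)    # steps S706-S708
        vertices.append(target_vertex(f1, f2, rect, i))          # step S709
    second_polygon = np.float32(vertices)                        # step S712
    return rectify(image, second_polygon, rect)                  # step S713
```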
Fig. 8 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 8, the apparatus 800 includes: an image acquisition module 801, a pre-processing module 802, a second external polygon generation module 803, and a target object image acquisition module 804.
The image acquisition module 801 may be configured to acquire an image to be processed including a target object. The preprocessing module 802 may be configured to determine a contour line of the target object and a first circumscribed polygon of the target object in the image to be processed. The second circumscribing polygon generating module 803 may be configured to determine a second circumscribing polygon of the target object from the contour line and the first circumscribing polygon. The target object image acquisition module 804 may be configured to rectify the target object according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image.
In some embodiments, as shown in fig. 9, the second circumscribed polygon generation module 803 includes: a first fixed point determining unit 8031, a second fixed point determining unit 8032, a target straight line determining unit 8033, and a second circumscribed polygon determining unit 8034.
The first fixed point determining unit 8031 may be configured to determine, on the contour line, a closest point for each edge of the first circumscribed polygon as the first fixed point.
The second fixed point determining unit 8032 may be configured to determine second fixed points corresponding to the respective first fixed points on the contour line, respectively.
The target straight line determination unit 8033 may be configured to determine each target straight line from each first fixed point and its corresponding second fixed point.
The second circumscribed polygon determining unit 8034 may be configured to determine each target vertex from the each target straight line, and determine a second circumscribed polygon of the target object from the each target vertex.
In some embodiments, the first circumscribed polygon includes a first edge including a first vertex and a second vertex, and as shown in fig. 10, the second fixed point determining unit 8032 may include: a distance determining subunit 80321, a point set determining subunit 80322, and a second fixed point determining subunit 80323.
The distance determining subunit 80321 may be configured to obtain a first distance and a second distance from the first fixed point corresponding to the first edge to the first vertex and the second vertex, respectively.
The point set determination subunit 80322 may be configured to determine, if the first distance is greater than the second distance, a point set formed by points from the contour line whose distance to the first fixed point is greater than a distance threshold and whose distance to the first vertex is less than the first distance.
The second fixed point determining subunit 80323 may be configured to determine the point in the point set with the smallest distance to the first edge as the second fixed point.
In some embodiments, the first circumscribed polygon includes a first edge, and as shown in fig. 11, the second circumscribed polygon determining unit 8034 may include: a target straight line determining subunit 80341, a first point determining subunit 80342, and a target vertex determining subunit 80343.
The target straight line determining subunit 80341 may be configured to determine the target straight line corresponding to the first edge according to the first fixed point and the second fixed point corresponding to the first edge.
The first point determining subunit 80342 may be configured to extend the target straight line corresponding to the first edge so that it intersects the first circumscribed polygon at a first point and a second point, where the first point lies on the first edge.
The target vertex determining subunit 80343 may be configured to determine that the second point is the target vertex corresponding to the first edge.
In some embodiments, the first circumscribing polygon is a rectangle and the second circumscribing polygon is a quadrilateral.
In some embodiments, the target object image acquisition module 804 may be further configured to rectify the target object using a perspective transformation function based on the first circumscribing polygon and the second circumscribing polygon to obtain a standard target object image.
In some embodiments, the pre-processing module 802 may be further configured to: and determining the contour line of the target object and a first circumscribed polygon of the target object in the image to be processed based on a document layout analysis algorithm.
Since each functional block of the image processing apparatus 800 according to the exemplary embodiment of the present disclosure corresponds to the step of the exemplary embodiment of the image processing method described above, it is not described herein again.
Referring now to FIG. 12, shown is a block diagram of a computer system 1200 suitable for use in implementing a terminal device of an embodiment of the present application. The terminal device shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 12, the computer system 1200 includes a Central Processing Unit (CPU)1201, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)1202 or a program loaded from a storage section 1208 into a Random Access Memory (RAM) 1203. In the RAM1203, various programs and data necessary for the operation of the system 1200 are also stored. The CPU 1201, ROM 1202, and RAM1203 are connected to each other by a bus 1204. An input/output (I/O) interface 1205 is also connected to bus 1204.
The following components are connected to the I/O interface 1205: an input section 1206 including a keyboard, a mouse, and the like; an output portion 1207 including a display device such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 1208 including a hard disk and the like; and a communication section 1209 including a network interface card such as a LAN card, a modem, or the like. The communication section 1209 performs communication processing via a network such as the internet. A driver 1210 is also connected to the I/O interface 1205 as needed. A removable medium 1211, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 1210 as necessary, so that a computer program read out therefrom is mounted into the storage section 1208 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1209, and/or installed from the removable medium 1211. The computer program performs the above-described functions defined in the system of the present application when executed by the Central Processing Unit (CPU) 1201.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a transmitting unit, an obtaining unit, a determining unit, and a first processing unit. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform functions comprising: acquiring an image to be processed including a target object; determining a contour line of the target object and a first circumscribed polygon of the target object in the image to be processed; determining a second external polygon of the target object according to the contour line and the first external polygon; and correcting the target object according to the first external polygon and the second external polygon to obtain a standard target object image.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution of the embodiment of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computing device (which may be a personal computer, a server, a mobile terminal, or a smart device, etc.) to execute the method according to the embodiment of the present disclosure, such as one or more of the steps shown in fig. 2.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the disclosure is not limited to the details of construction, the arrangements of the drawings, or the manner of implementation that have been set forth herein, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (9)

1. An image processing method, comprising:
acquiring an image to be processed including a target object;
determining a contour line of the target object and a first circumscribed polygon of the target object in the image to be processed;
determining, for each side of the first circumscribed polygon, a closest point on the contour line to serve as a first fixed point;
respectively determining second fixed points corresponding to the first fixed points on the contour line;
determining each target straight line according to each first fixed point and the corresponding second fixed point;
determining each target vertex according to each target straight line, and determining a second circumscribed polygon of the target object according to each target vertex;
and correcting the target object according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image.
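For illustration only, the first-fixed-point step of claim 1 might be sketched in NumPy as below, under the assumption that the contour is an N×2 array of points and the first circumscribed polygon is given by its four corners; all function and variable names are introduced for this sketch.

import numpy as np

def distances_to_edge(points, a, b):
    # Clamped perpendicular distance from each contour point to the edge segment a-b.
    ab = b - a
    t = np.clip(((points - a) @ ab) / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(points - (a + t[:, None] * ab), axis=1)

def first_fixed_points(contour, box):
    # contour: (N, 2) float array; box: (4, 2) corners of the first circumscribed rectangle.
    fixed = []
    for i in range(4):
        a, b = box[i], box[(i + 1) % 4]
        d = distances_to_edge(contour, a, b)
        fixed.append(contour[np.argmin(d)])       # closest contour point to this side
    return np.array(fixed)                        # one first fixed point per side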
2. The method of claim 1, wherein the first circumscribed polygon comprises a first edge, the first edge comprising a first vertex and a second vertex; and wherein respectively determining the second fixed points corresponding to the first fixed points on the contour line comprises:
obtaining a first distance from a first fixed point corresponding to the first edge to the first vertex and a second distance from the first fixed point to the second vertex;
if the first distance is greater than the second distance, determining, from the contour line, a point set formed by points whose distance to the first fixed point is greater than a distance threshold and whose distance to the first vertex is less than the first distance; and
determining, from the point set, a point having the smallest distance to the first edge as the second fixed point.
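Read literally, claim 2 can be sketched as the filter-and-select below. The mirrored handling of the case where the first distance is not greater than the second is an assumption, as is treating the contour as an N×2 point array; dist_to_edge can be the distances_to_edge helper from the sketch after claim 1.

import numpy as np

def second_fixed_point(contour, p1, v1, v2, threshold, dist_to_edge):
    # contour: (N, 2) points; p1: first fixed point of this edge; v1, v2: the edge's
    # vertices; dist_to_edge: callable returning each point's distance to edge v1-v2.
    d1 = np.linalg.norm(p1 - v1)                  # first distance
    d2 = np.linalg.norm(p1 - v2)                  # second distance
    if d1 <= d2:                                  # assumed mirror of the claimed branch
        v1, v2, d1, d2 = v2, v1, d2, d1
    keep = (np.linalg.norm(contour - p1, axis=1) > threshold) & \
           (np.linalg.norm(contour - v1, axis=1) < d1)
    candidates = contour[keep]                    # the claimed point set
    if candidates.size == 0:
        return None                               # degenerate contour: no candidate survives
    return candidates[np.argmin(dist_to_edge(candidates, v1, v2))]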
3. The method of claim 1, wherein the first circumscribed polygon comprises a first edge; and wherein determining each target vertex according to each target straight line comprises:
determining a target straight line corresponding to the first edge according to a first fixed point and a second fixed point corresponding to the first edge;
extending the target straight line corresponding to the first edge to intersect the first circumscribed polygon at a first point and a second point, wherein the first point lies on the first edge; and
determining the second point as the target vertex corresponding to the first edge.
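One possible reading of claim 3 in code: build the infinite line through the two fixed points, intersect it with each side of the rectangle, skip the hit on the first edge, and take the remaining hit as the target vertex. The 2-D cross-product helper and all names below are assumptions introduced for this sketch.

import numpy as np

def cross2(u, v):
    # z-component of the 2-D cross product
    return u[0] * v[1] - u[1] * v[0]

def line_hits_segment(p, q, a, b, eps=1e-9):
    # Intersection of the infinite line through p and q with segment a-b, or None.
    r, s = q - p, b - a
    denom = cross2(s, r)
    if abs(denom) < eps:
        return None                               # parallel: no single intersection
    u = cross2(p - a, r) / denom                  # position of the hit along a-b
    return a + u * s if -eps <= u <= 1 + eps else None

def target_vertex(p1, p2, box, first_edge_index):
    # p1, p2: first and second fixed points; box: (4, 2) rectangle corners;
    # first_edge_index: index i of the first edge box[i]-box[(i + 1) % 4].
    for i in range(4):
        if i == first_edge_index:
            continue                              # the other intersection lies on the first edge
        hit = line_hits_segment(p1, p2, box[i], box[(i + 1) % 4])
        if hit is not None:
            return hit                            # second intersection point = target vertex
    return None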
4. The method of claim 1, wherein the first circumscribed polygon is a rectangle and the second circumscribed polygon is a quadrilateral.
5. The method of claim 1, wherein correcting the target object according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image comprises:
correcting the target object by using a perspective transformation function, based on the first circumscribed polygon and the second circumscribed polygon, to obtain the standard target object image.
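With OpenCV, the perspective-transformation step of claim 5 might look like the sketch below: the four vertices of the second circumscribed polygon are mapped onto a rectangle whose size is taken from the first circumscribed polygon. The vertex ordering and variable names are assumptions for this sketch.

import cv2
import numpy as np

def rectify(image, quad, rect_width, rect_height):
    # quad: (4, 2) vertices of the second circumscribed polygon, ordered
    # top-left, top-right, bottom-right, bottom-left; size from the first polygon.
    w, h = int(rect_width), int(rect_height)
    src = np.float32(quad)
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    M = cv2.getPerspectiveTransform(src, dst)     # the perspective transformation function
    return cv2.warpPerspective(image, M, (w, h))  # standard target object image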
6. The method of claim 1, wherein determining the contour line of the target object and the first circumscribed polygon of the target object in the image to be processed comprises:
determining the contour line of the target object and the first circumscribed polygon of the target object in the image to be processed based on a document layout analysis algorithm.
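Claim 6 leaves the document layout analysis algorithm open (the cited dhSegment paper suggests a learned segmentation). As a plain classical substitute rather than the disclosed algorithm, a thresholding-plus-contour sketch in OpenCV could stand in for this step:

import cv2

def contour_and_first_polygon(image):
    # Classical stand-in (assumed, not the disclosed layout-analysis algorithm):
    # Otsu threshold, largest external contour, minimum-area bounding rectangle.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)        # contour line of the target object
    box = cv2.boxPoints(cv2.minAreaRect(contour))       # first circumscribed polygon corners
    return contour.reshape(-1, 2), box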
7. An image processing apparatus characterized by comprising:
an image acquisition module configured to acquire an image to be processed including a target object;
the preprocessing module is configured to determine a contour line of the target object and a first circumscribed polygon of the target object in the image to be processed;
a second circumscribed polygon generation module configured to determine, for each edge of the first circumscribed polygon, a closest point on the contour line as a first fixed point; respectively determine second fixed points corresponding to the first fixed points on the contour line; determine each target straight line according to each first fixed point and the corresponding second fixed point; and determine each target vertex according to each target straight line and determine a second circumscribed polygon of the target object according to each target vertex; and
a target object image acquisition module configured to correct the target object according to the first circumscribed polygon and the second circumscribed polygon to obtain a standard target object image.
8. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
9. A computer-readable medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN201910711755.5A 2019-08-02 2019-08-02 Image processing method and device, electronic equipment and computer readable medium Active CN110414502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910711755.5A CN110414502B (en) 2019-08-02 2019-08-02 Image processing method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN110414502A CN110414502A (en) 2019-11-05
CN110414502B (en) 2022-04-01

Family

ID=68365525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910711755.5A Active CN110414502B (en) 2019-08-02 2019-08-02 Image processing method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN110414502B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827301B (en) * 2019-11-11 2023-09-26 京东科技控股股份有限公司 Method and apparatus for processing image
CN110909816B (en) * 2019-11-29 2022-11-08 泰康保险集团股份有限公司 Picture identification method and device
CN111464716B (en) * 2020-04-09 2022-08-19 腾讯科技(深圳)有限公司 Certificate scanning method, device, equipment and storage medium
CN112949589A (en) * 2021-03-31 2021-06-11 深圳市商汤科技有限公司 Target detection method, device, equipment and computer readable storage medium
CN114638818B (en) * 2022-03-29 2023-11-03 广东利元亨智能装备股份有限公司 Image processing method, device, electronic equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9122921B2 (en) * 2013-06-12 2015-09-01 Kodak Alaris Inc. Method for detecting a document boundary

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN107103587A * 2017-06-05 2017-08-29 新疆大学 Biochip image tilt correction method and device
CN108446698A * 2018-03-15 2018-08-24 腾讯大地通途(北京)科技有限公司 Method, apparatus, medium, and electronic device for detecting text in an image

Non-Patent Citations (2)

Title
dhSegment: A Generic Deep-Learning Approach for Document Segmentation; Sofia Ares Oliveira et al.; 2018 16th International Conference on Frontiers in Handwriting Recognition; 2018-12-10; p. 8, right column, paragraphs 3-7, Fig. 3 *
Research on automatic extraction and correction algorithms for target regions in distorted images (畸变图像的目标区域自动提取及校正算法研究); Jiang Lei et al.; 《软件导刊》 (Software Guide); 2019-01-31; Vol. 18, No. 1; p. 100, right column, paragraph 4, to p. 102, right column, paragraph 2, Fig. 7 *

Also Published As

Publication number Publication date
CN110414502A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110414502B (en) Image processing method and device, electronic equipment and computer readable medium
CN108509915B (en) Method and device for generating face recognition model
CN108898186B (en) Method and device for extracting image
CN108229419B (en) Method and apparatus for clustering images
US20200111203A1 (en) Method and apparatus for generating vehicle damage information
US20120263352A1 (en) Methods and systems for verifying automatic license plate recognition results
CN107729935B Similar picture recognition method and device, server, and storage medium
CN114550177B (en) Image processing method, text recognition method and device
CN110163205B (en) Image processing method, device, medium and computing equipment
CN110490959B (en) Three-dimensional image processing method and device, virtual image generating method and electronic equipment
CN112016638B (en) Method, device and equipment for identifying steel bar cluster and storage medium
CN108491812B (en) Method and device for generating face recognition model
CN113657274B (en) Table generation method and device, electronic equipment and storage medium
CN114429637B (en) Document classification method, device, equipment and storage medium
CN109345460B (en) Method and apparatus for rectifying image
CN112580666A (en) Image feature extraction method, training method, device, electronic equipment and medium
CN114743062A (en) Building feature identification method and device
CN114821255A (en) Method, apparatus, device, medium and product for fusion of multimodal features
CN111815748B (en) Animation processing method and device, storage medium and electronic equipment
CN113205090A (en) Picture rectification method and device, electronic equipment and computer readable storage medium
CN113793370A (en) Three-dimensional point cloud registration method and device, electronic equipment and readable medium
CN115620321B (en) Table identification method and device, electronic equipment and storage medium
EP4083938A2 (en) Method and apparatus for image annotation, electronic device and storage medium
CN110782390A (en) Image correction processing method and device and electronic equipment
CN113658195B (en) Image segmentation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Floor 36, Zheshang Building, No. 718 Jianshe Avenue, Jiang'an District, Wuhan, Hubei 430019

Patentee after: TK.CN INSURANCE Co.,Ltd.

Patentee after: TAIKANG INSURANCE GROUP Co.,Ltd.

Address before: 156 fuxingmennei street, Xicheng District, Beijing 100031

Patentee before: TAIKANG INSURANCE GROUP Co.,Ltd.

Patentee before: TK.CN INSURANCE Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230828

Address after: Floor 36, Zheshang Building, No. 718 Jianshe Avenue, Jiang'an District, Wuhan, Hubei 430019

Patentee after: TK.CN INSURANCE Co.,Ltd.

Address before: Floor 36, Zheshang Building, No. 718 Jianshe Avenue, Jiang'an District, Wuhan, Hubei 430019

Patentee before: TK.CN INSURANCE Co.,Ltd.

Patentee before: TAIKANG INSURANCE GROUP Co.,Ltd.

Effective date of registration: 20230828

Address after: Building A3 (formerly Building B2), Phase 1.1, Wuhan Software New City, No. 9 Huacheng Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430074, 104-14

Patentee after: Taikang Online Health Technology (Wuhan) Co.,Ltd.

Address before: Floor 36, Zheshang Building, No. 718 Jianshe Avenue, Jiang'an District, Wuhan, Hubei 430019

Patentee before: TK.CN INSURANCE Co.,Ltd.