CN113901914A - Image recognition method and device, electronic equipment and readable storage medium - Google Patents

Image recognition method and device, electronic equipment and readable storage medium

Info

Publication number
CN113901914A
CN113901914A
Authority
CN
China
Prior art keywords
identified
area
region
updated
side information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111168238.1A
Other languages
Chinese (zh)
Inventor
隋远
王新宇
李瑞远
鲍捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong City Beijing Digital Technology Co Ltd
Original Assignee
Jingdong City Beijing Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong City Beijing Digital Technology Co Ltd
Priority to CN202111168238.1A
Publication of CN113901914A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image recognition method, including: respectively determining a minimum bounding box and a side information set for each of two regions to be identified in an image, wherein each side information set comprises position information of a plurality of sides of the corresponding region to be identified; respectively deleting the position information of target sides from the two side information sets, wherein the target sides of one region to be identified are those of its sides located outside the minimum bounding box of the other region to be identified, to obtain two updated side information sets; and, depending on whether one or both of the updated side information sets are non-empty, identifying an intersection region between the two regions to be identified either according to the positional relationship between any point in the region corresponding to the non-empty set and the other region to be identified, or according to the positional relationship between the two updated side information sets. The present disclosure further provides an image recognition apparatus, an electronic device, and a computer-readable storage medium.

Description

Image recognition method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image recognition method and apparatus, an electronic device, and a readable storage medium.
Background
With the continuous development of spatial measurement technology, spatio-temporal data is applied more and more widely. For example, spatial measurement technology can monitor changes in the state of the earth using remote-sensing data and surface coverage data, and can mine efficient parking and passenger pick-up locations from taxi trajectory data and urban heat maps.
In implementing the disclosed concept, the inventors found at least the following problem in the related art: when faced with complex spatio-temporal data, judging all spatial topological relationships with a global optimization algorithm generates an enormous amount of computation, and the judgment process may also include many repeated calculation steps.
Disclosure of Invention
In view of the above, the present disclosure provides an image recognition method and apparatus.
One aspect of the present disclosure provides an image recognition method, including: respectively determining a first minimum bounding box and a first side information set of a first region to be identified and a second minimum bounding box and a second side information set of a second region to be identified in an image, wherein the first and second side information sets respectively comprise position information of a plurality of sides of the first and second regions to be identified; respectively deleting the position information of target sides from the first side information set and the second side information set, wherein the target sides comprise the sides of the first region to be identified located outside the second minimum bounding box and the sides of the second region to be identified located outside the first minimum bounding box, to obtain an updated first side information set and an updated second side information set; in a case where it is determined that only the updated first side information set or only the updated second side information set is a non-empty set, identifying an intersection region between the first and second regions to be identified according to the positional relationship between any point in the region to be identified corresponding to the non-empty set and the other region to be identified; and in a case where it is determined that the updated first side information set and the updated second side information set are both non-empty sets, identifying an intersection region between the first and second regions to be identified according to the positional relationship between the updated first side information set and the updated second side information set.
According to an embodiment of the present disclosure, in a case where the updated first side information set is a non-empty set, identifying an intersection region between the first and second regions to be identified according to the positional relationship between any point in the region to be identified corresponding to the non-empty set and the other region to be identified includes: determining the center point of the second minimum bounding box; determining a ray starting from any point in the first region to be identified and passing through the center point; and in a case where it is determined that the number of intersections between the ray and the plurality of sides of the second region to be identified is odd, identifying that an intersection region exists between the first and second regions to be identified.
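The odd-intersection rule above is the classical ray-casting (even-odd) point-in-polygon test. As a minimal sketch, the following Python function casts a horizontal ray to the right of the query point rather than the patent's ray through the center of the second minimum bounding box; the direction of the ray does not affect the parity argument. The function name and the vertex-list representation are illustrative, not taken from the disclosure, and the half-open comparison `(y1 > y) != (y2 > y)` implicitly handles most vertex-crossing cases.

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: count crossings of a rightward horizontal ray.

    `polygon` is a list of (x, y) vertices in order; an odd crossing
    count means `point` lies inside the polygon.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # The half-open test counts each vertex crossing exactly once.
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's height.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside
```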
According to an embodiment of the present disclosure, in a case where the ray passes through a vertex of the second region to be identified, identifying an intersection region between the first and second regions to be identified according to the positional relationship between any point in the region to be identified corresponding to the non-empty set and the other region to be identified further includes: in a case where the two sides of the second region to be identified adjacent to the vertex are located on the same side of the ray, adding one to the number of intersections between the ray and the sides of the second region to be identified; and in a case where it is determined that the number of intersections between the ray and the plurality of sides of the second region to be identified is odd, identifying that an intersection region exists between the first and second regions to be identified.
According to an embodiment of the present disclosure, identifying an intersection region between the first and second regions to be identified according to the positional relationship between the updated first side information set and the updated second side information set includes: respectively determining the intersection points of the straight lines on which the plurality of sides in the updated first side information set lie and the straight lines on which the plurality of sides in the updated second side information set lie; and in a case where it is determined that an intersection point is located on both of the corresponding sides, identifying that an intersection region exists between the first and second regions to be identified.
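The check above — intersecting the supporting lines and then verifying that the intersection point lies on the sides themselves — is equivalent to a standard segment-intersection test. The following sketch uses orientation (cross-product sign) tests, which reach the same answer without computing the intersection point explicitly; the helper names are illustrative, not from the disclosure.

```python
def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p): 1 left turn, -1 right turn, 0 collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(a, b, c, d):
    """True if the closed segments ab and cd share at least one point."""
    o1, o2 = orientation(a, b, c), orientation(a, b, d)
    o3, o4 = orientation(c, d, a), orientation(c, d, b)
    if o1 != o2 and o3 != o4:
        return True  # proper crossing: the line intersection lies on both sides
    # Degenerate (collinear) cases: an endpoint lies on the other segment.
    def on_seg(p, q, r):
        return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
                min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))
    return ((o1 == 0 and on_seg(a, b, c)) or (o2 == 0 and on_seg(a, b, d)) or
            (o3 == 0 and on_seg(c, d, a)) or (o4 == 0 and on_seg(c, d, b)))
```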
According to an embodiment of the present disclosure, determining a set of side information of a region to be identified in an image includes: constructing a rectangular coordinate system by taking any point in the image as an origin; acquiring coordinate information of a plurality of edges of the area to be identified; and determining the side information set of the area to be identified according to the coordinate information of the sides.
According to an embodiment of the present disclosure, the deleting the position information of the target edge from the first and second sets of side information, respectively, includes: determining the position information of the target edge of the first edge information set according to the coordinate information of a plurality of edges in the first edge information set and the coordinate information of a second minimum external frame, and determining the position information of the target edge of the second edge information set according to the coordinate information of a plurality of edges in the second edge information set and the coordinate information of the first minimum external frame; and deleting the position information of the target edge from the first and second sets of side information.
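As an illustrative sketch of this deletion step, the following Python function prunes from one side information set every side whose coordinate intervals are separated from the other region's minimum bounding box. Note that this is a conservative approximation: a side rejected here provably cannot intersect anything inside the box, while a retained side may still lie outside it. Representing a side as a pair of endpoint coordinates is an assumption of this sketch.

```python
def prune_sides(sides, box):
    """Delete sides that certainly lie outside `box` = (xmin, ymin, xmax, ymax).

    `sides` is a list of ((x1, y1), (x2, y2)) endpoint pairs. A side is
    dropped when its x- or y-interval is strictly separated from the box.
    """
    xmin, ymin, xmax, ymax = box
    kept = []
    for (x1, y1), (x2, y2) in sides:
        if (max(x1, x2) < xmin or min(x1, x2) > xmax or
                max(y1, y2) < ymin or min(y1, y2) > ymax):
            continue  # target side: provably outside the other region's box
        kept.append(((x1, y1), (x2, y2)))
    return kept
```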
According to an embodiment of the disclosure, a plurality of edges of the area to be identified constitute a geometric figure.
Another aspect of the present disclosure provides an image recognition apparatus including: a first determining module, configured to respectively determine a first minimum bounding box and a first side information set of a first region to be identified and a second minimum bounding box and a second side information set of a second region to be identified in an image, wherein the first and second side information sets respectively comprise position information of a plurality of sides of the first and second regions to be identified; a deleting module, configured to respectively delete the position information of target sides from the first and second side information sets, wherein the target sides comprise the sides of the first region to be identified located outside the second minimum bounding box and the sides of the second region to be identified located outside the first minimum bounding box, to obtain an updated first side information set and an updated second side information set; a second determining module, configured to identify, in a case where it is determined that only the updated first side information set or only the updated second side information set is a non-empty set, an intersection region between the first and second regions to be identified according to the positional relationship between any point in the region to be identified corresponding to the non-empty set and the other region to be identified; and a third determining module, configured to identify, in a case where it is determined that the updated first side information set and the updated second side information set are both non-empty sets, an intersection region between the first and second regions to be identified according to the positional relationship between the updated first side information set and the updated second side information set.
Another aspect of the present disclosure provides an electronic device comprising one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, because the technical means of classifying the side sets of the regions to be identified is adopted, the technical problem of excessive computation during identification of the intersection region is at least partially overcome, thereby achieving the technical effect of improving identification efficiency.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which the image recognition method and apparatus of the present disclosure may be applied;
fig. 2 schematically illustrates an application scenario of the image recognition method and apparatus according to an embodiment of the present disclosure;
FIG. 3 schematically shows a flow chart of an image recognition method according to an embodiment of the present disclosure;
FIGS. 4A-4D schematically illustrate the sides and minimum bounding boxes of regions to be identified according to an embodiment of the present disclosure;
FIG. 5A schematically illustrates a flow diagram for determining a set of side information for a region to be identified according to an embodiment of the present disclosure;
FIG. 5B schematically shows a flowchart for deleting location information of a target edge from a set of edge information, according to an embodiment of the present disclosure;
FIG. 5C schematically illustrates a flow chart for identifying an intersection between first and second regions to be identified, according to an embodiment of the present disclosure;
fig. 5D schematically shows a schematic diagram of a positional relationship of an arbitrary point and an area to be recognized according to an embodiment of the present disclosure;
FIG. 5E schematically illustrates a flow chart for identifying an intersection region between first and second regions to be identified according to another embodiment of the present disclosure;
fig. 5F schematically illustrates a schematic view of a positional relationship of an arbitrary point and a region to be recognized according to another embodiment of the present disclosure;
FIG. 5G schematically illustrates a flow chart for identifying an intersection region between first and second regions to be identified according to another embodiment of the present disclosure;
fig. 5H schematically illustrates a schematic diagram of a position relationship between edges of an area to be identified according to an embodiment of the present disclosure;
FIG. 5I schematically shows a graph comparing the recognition rates of the image recognition method according to the present disclosure and the JTS library;
fig. 6 schematically shows a block diagram of an image recognition apparatus according to an embodiment of the present disclosure; and
fig. 7 schematically shows a block diagram of an electronic device suitable for implementing an image recognition apparatus according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
The embodiments of the present disclosure provide an image recognition method for identifying an intersection region of a plurality of regions to be identified in an image, and an apparatus to which the method can be applied. The method includes respectively determining a first minimum bounding box and a first side information set of a first region to be identified and a second minimum bounding box and a second side information set of a second region to be identified in the image, wherein the first and second side information sets respectively comprise position information of a plurality of sides of the first and second regions to be identified; respectively deleting the position information of target sides from the first side information set and the second side information set, wherein the target sides comprise the sides of the first region to be identified located outside the second minimum bounding box and the sides of the second region to be identified located outside the first minimum bounding box, to obtain an updated first side information set and an updated second side information set; in a case where it is determined that only the updated first side information set or only the updated second side information set is a non-empty set, identifying an intersection region between the first and second regions to be identified according to the positional relationship between any point in the region to be identified corresponding to the non-empty set and the other region to be identified; and in a case where it is determined that the updated first side information set and the updated second side information set are both non-empty sets, identifying an intersection region between the first and second regions to be identified according to the positional relationship between the updated first side information set and the updated second side information set.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which the methods and apparatus of image recognition may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various messaging client applications installed thereon, such as a map-like application, an album-like application, a shopping-like application, a web browser application, a search-like application, an instant messaging tool, a mailbox client, and/or social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the image recognition method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the image recognition apparatus provided by the embodiment of the present disclosure may be generally disposed in the server 105. The image recognition method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the image recognition apparatus provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the image recognition apparatus provided in the embodiment of the present disclosure may also be executed by the terminal device 101, 102, or 103, or may also be executed by another terminal device different from the terminal device 101, 102, or 103. Accordingly, the image recognition apparatus provided by the embodiment of the present disclosure may also be disposed in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
For example, the image to be processed may be originally stored in any one of the terminal apparatuses 101, 102, or 103 (for example, the terminal apparatus 101, but not limited thereto), or may be stored on an external storage apparatus and may be imported into the terminal apparatus 101. Then, the terminal device 101 may locally execute the image recognition method provided by the embodiment of the present disclosure, or send the image to be processed to another terminal device, server or server cluster, and execute the image recognition method provided by the embodiment of the present disclosure by another terminal device, server or server cluster receiving the image to be processed.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically illustrates an application scenario of the image recognition method and apparatus according to an embodiment of the present disclosure.
As shown in fig. 2, the same geographic image includes a first region to be identified 201 and a second region to be identified 202. The region boundary of the region to be identified may be constructed as a complex polygon including tens of thousands of vertices. The first region to be identified 201 further includes a large number of hole regions and self-intersection regions. For example, the first region to be recognized 201 is a target development region, and the second region to be recognized 202 is a developed region. Therefore, by identifying the intersection area of the first area to be identified 201 and the second area to be identified 202, it can be determined whether a developed area exists in the target development area, thereby implementing accurate development planning on the target development area.
Fig. 3 schematically shows a flow chart of an image recognition method according to an embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S301 to S304.
In operation S301, a first minimum bounding box and a first set of side information of a first region to be recognized and a second minimum bounding box and a second set of side information of a second region to be recognized in an image are determined, respectively, where the first and second sets of side information include position information of a plurality of sides of the first and second regions to be recognized, respectively.
For example, the image may be a two-dimensional plane image or a three-dimensional stereo image, and the region to be identified may accordingly be a two-dimensional planar region or a three-dimensional volumetric region. The plurality of sides of the region to be identified form a geometric figure, which may be a plane geometric figure or a solid geometric figure.
According to the boundary distribution of the region to be identified, the region to be identified can be converted into a geometric figure. The side information set comprises the position information of all sides of the geometric figure corresponding to the region to be identified. In a case where the image is a two-dimensional plane image, the minimum bounding box is the minimum bounding rectangle of the geometric figure corresponding to the region to be identified.
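For a two-dimensional image, the minimum bounding rectangle and the side information set described above can be sketched in a few lines of Python. Representing each side as its ordered pair of endpoint coordinates is an assumption of this sketch, not a requirement of the disclosure.

```python
def min_bounding_rect(vertices):
    """Axis-aligned minimum bounding rectangle of a polygon, as (xmin, ymin, xmax, ymax)."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return (min(xs), min(ys), max(xs), max(ys))

def side_info_set(vertices):
    """Side information set: each side stored as its ordered endpoint pair."""
    n = len(vertices)
    return [(vertices[i], vertices[(i + 1) % n]) for i in range(n)]
```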
In operation S302, position information of a target edge is deleted from the first and second edge information sets, respectively, where the target edge includes an edge located outside the second minimum bounding box among the plurality of edges of the first region to be recognized, and an edge located outside the first minimum bounding box among the plurality of edges of the second region to be recognized, resulting in an updated first edge information set and an updated second edge information set.
In the embodiments of the present disclosure, according to the positional relationship between the sides of one region to be identified and the minimum bounding box of the other region to be identified, the position information of sides that cannot intersect the other region is deleted from each side information set, thereby reducing the data volume in the subsequent judgment process.
In addition, the updated side information set is classified according to the characteristics of the updated side information set. And aiming at the characteristics of different side information sets, different judgment methods are selected for the area to be identified, so that a simple judgment method is selected for a simple position relation, and the calculation amount in the judgment process is reduced.
In operation S303, in a case that it is determined that the updated first side information set or the updated second side information set is a non-empty set, an intersection area between the first to-be-identified area and the second to-be-identified area is identified according to a position relationship between any point in the to-be-identified area corresponding to the non-empty set and another to-be-identified area.
In operation S304, in a case that it is determined that the updated first side information set and the updated second side information set are both non-empty sets, an intersection area between the first and second areas to be identified is identified according to a position relationship between the updated first side information set and the updated second side information set.
In the embodiment of the disclosure, the intersection relationship between two areas to be identified is preliminarily determined by judging whether the updated side information set is a non-empty set.
The specific contents of the side information set and the updated side information set will be described below with reference to fig. 4A to 4D. Fig. 4A to 4D schematically illustrate a diagram of an edge and a minimum bounding box of an area to be identified according to an embodiment of the present disclosure.
As shown in fig. 4A, all the sides of the region B to be identified are located outside the minimum bounding box 401 of the region A to be identified, and all the sides of the region A to be identified are located outside the minimum bounding box 402 of the region B to be identified. Therefore, after the position information of the sides located outside the other region to be identified is deleted from each side information set, the updated side information set S_A of the region A to be identified and the updated side information set S_B of the region B to be identified are both empty sets.
As shown in fig. 4B, all the sides of the region B to be identified are located inside the minimum bounding box 403 of the region A to be identified, while all the sides of the region A to be identified are located outside the minimum bounding box 404 of the region B to be identified. Therefore, after the position information of the sides located outside the other region to be identified is deleted from each side information set, the updated side information set S_A of the region A to be identified is an empty set, and the updated side information set S_B of the region B to be identified is a non-empty set.
As shown in fig. 4C, all the sides of the region B to be identified are located inside the minimum bounding box 405 of the region A to be identified, while all the sides of the region A to be identified are located outside the minimum bounding box 406 of the region B to be identified. Therefore, after the position information of the sides located outside the other region to be identified is deleted from each side information set, the updated side information set S_A of the region A to be identified is an empty set, and the updated side information set S_B of the region B to be identified is a non-empty set.
As shown in fig. 4D, some edges of the region B to be identified are located outside the minimum bounding box 407 of the region A to be identified, and some edges of the region A to be identified are located outside the minimum bounding box 408 of the region B to be identified. Therefore, after the deletion step, the updated side information set S_A of the region A to be identified and the updated side information set S_B of the region B to be identified are both non-empty sets.
In fig. 4A, since the updated side information set S_A of the region A to be identified and the updated side information set S_B of the region B to be identified are both empty sets, it can be determined that no intersection region exists between the region A to be identified and the region B to be identified. Therefore, when both updated side information sets are empty sets, the subsequent judgment process can be terminated and the two regions to be identified determined to have no intersection region, thereby reducing the amount of calculation of the judgment process.
In fig. 4B and 4C, since the updated side information set S_A of the region A to be identified is an empty set and the updated side information set S_B of the region B to be identified is a non-empty set, the positional relationship between the two regions may be either that no intersection region exists between them, or that the region B to be identified is located inside the region A to be identified. In this case, the method of determining the positional relationship between the two regions to be identified by judging the positional relationship between a point and a geometric figure is selected. Therefore, when only one of the two updated side information sets is a non-empty set, whether an intersection region exists between the two regions can be further identified according to the positional relationship between any point in the region corresponding to the non-empty set and the other region.
In fig. 4D, since the updated side information set S_A of the region A to be identified and the updated side information set S_B of the region B to be identified are both non-empty sets, it cannot be directly determined whether an intersection region exists between the two regions. In this case, the method of determining the positional relationship of the two regions to be identified by judging the positional relationship between the edges of the two geometric figures is selected. Therefore, when both updated side information sets are non-empty, whether an intersection region exists between the two regions to be identified can be determined according to the positional relationship between the first updated side information set and the second updated side information set.
According to the embodiment of the present disclosure, a simpler and more efficient judgment method is selected according to the types of the two updated side information sets, which eliminates redundant calculation steps, thereby reducing the amount of calculation and saving computing time.
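The selection logic described above can be sketched as follows (a minimal illustration; the function name and the string labels are hypothetical and not part of the disclosed method):

```python
def choose_strategy(updated_set_a, updated_set_b):
    """Pick a judgment strategy from the two updated side information sets."""
    if not updated_set_a and not updated_set_b:
        # Both empty: no intersection region can exist, stop early.
        return "no-intersection"
    if not updated_set_a or not updated_set_b:
        # Exactly one non-empty: a point-in-region test suffices.
        return "point-test"
    # Both non-empty: fall back to edge-to-edge intersection tests.
    return "edge-test"
```

The empty-set check is the cheap filter that lets the simple cases skip the more expensive geometric tests entirely.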
The method shown in fig. 3 is further described with reference to fig. 5A-5I in conjunction with specific embodiments.
Fig. 5A schematically shows a flow chart for determining a set of side information for a region to be identified according to an embodiment of the present disclosure.
As shown in fig. 5A, an example method of determining a set of side information of a region to be recognized in an image may include operations S511 to S513.
In operation S511, a rectangular coordinate system is constructed with an arbitrary point in the image as an origin;
in operation S512, coordinate information of a plurality of edges of the area to be recognized is acquired; and
in operation S513, a set of side information of the region to be recognized is determined according to the coordinate information of the plurality of sides.
In the embodiment of the disclosure, after a rectangular coordinate system is constructed with any point in an image as an origin, the position information of the edge of each region to be identified can be obtained through a unified standard. In the case where the image is a two-dimensional image, a planar rectangular coordinate system is constructed. In the case where the image is a three-dimensional image, a three-dimensional rectangular spatial coordinate system is constructed.
The coordinate information of an edge includes the coordinate information of its two end points, as well as the value range of the edge along each coordinate axis. In a planar rectangular coordinate system, the domain of each edge in the X-axis direction and its range in the Y-axis direction are obtained. In a spatial rectangular coordinate system, the domain of each edge in the X-axis and Y-axis directions and its range in the Z-axis direction are obtained.
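For a two-dimensional region given as an ordered list of vertices, the edge information set described above might be built as follows (a sketch under the assumption that edges are stored as endpoint pairs with per-axis ranges; the dictionary layout is illustrative, not the patent's data structure):

```python
def edge_info_set(vertices):
    """Build the edge information set of a polygonal region: each edge
    carries its endpoint coordinates and its value ranges on the X and
    Y axes of the rectangular coordinate system."""
    edges = []
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        edges.append({
            "endpoints": ((x1, y1), (x2, y2)),
            "x_range": (min(x1, x2), max(x1, x2)),  # domain on the X axis
            "y_range": (min(y1, y2), max(y1, y2)),  # range on the Y axis
        })
    return edges
```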
Fig. 5B schematically shows a flowchart for deleting location information of a target edge from a set of edge information according to an embodiment of the present disclosure.
As shown in fig. 5B, an example method of deleting the location information of the target side from the first and second side information sets, respectively, may include operations S521 to S523, on the basis of operations S511 to S513.
In operation S521, coordinate information of the first and second minimum bounding boxes is acquired;
in operation S522, position information of the target edges of the first edge information set is determined according to the coordinate information of the plurality of edges in the first edge information set and the coordinate information of the second minimum bounding box, and position information of the target edges of the second edge information set is determined according to the coordinate information of the plurality of edges in the second edge information set and the coordinate information of the first minimum bounding box; and
in operation S523, the position information of the target edge is deleted from the first and second sets of side information.
In the embodiment of the present disclosure, the coordinate information of each minimum bounding box is acquired in the rectangular coordinate system. The coordinate information of a minimum bounding box includes its maximum and minimum coordinate values along the different coordinate axis directions of the coordinate system. All edges of one region to be identified that are located outside the minimum bounding box of the other region are determined by comparing the coordinate values of the two end points of each edge against the maximum and minimum coordinate values of the other region's minimum bounding box.
After the target edges in each side information set are deleted, the updated side information sets are obtained. From the updated side information, the positional relationship of the two regions to be identified can be preliminarily judged, and the data volume of the side information sets is reduced, thereby reducing the amount of calculation generated in the subsequent judgment process.
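The deletion step can be sketched as below (function names are illustrative). The test used here is the conservative one implied by the endpoint comparison: an edge whose two endpoints both lie beyond the same face of the other region's minimum bounding box is certainly outside it:

```python
def min_bounding_box(vertices):
    """Axis-aligned minimum bounding box as (xmin, ymin, xmax, ymax)."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return min(xs), min(ys), max(xs), max(ys)

def delete_target_edges(edges, other_box):
    """Drop edges lying entirely outside the other region's bounding box,
    returning the updated edge information set."""
    xmin, ymin, xmax, ymax = other_box
    kept = []
    for (x1, y1), (x2, y2) in edges:
        if ((x1 < xmin and x2 < xmin) or (x1 > xmax and x2 > xmax) or
                (y1 < ymin and y2 < ymin) or (y1 > ymax and y2 > ymax)):
            continue  # target edge: both endpoints beyond one face of the box
        kept.append(((x1, y1), (x2, y2)))
    return kept
```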
Fig. 5C schematically shows a flow chart for identifying an intersection area between the first and second areas to be identified according to an embodiment of the present disclosure.
In the case where only one of the updated side information sets, for example the updated first side information set, is a non-empty set, the intersection region between the first region to be identified and the second region to be identified is identified according to the positional relationship between any point in the region corresponding to the non-empty set and the other region.
As shown in fig. 5C, an example method of identifying an intersection area between first and second areas to be identified may include operations S531 to S533.
In operation S531, the center point of the second minimum bounding box is determined;
in operation S532, a ray passing through a center point is determined, using any point in the first region to be identified as a starting point; and
in operation S533, in a case where it is determined that the number of intersections of the ray with the plurality of edges of the second region to be identified is an odd number, an intersection region existing between the first and second regions to be identified is identified.
In this embodiment of the present disclosure, when the updated first side information set is determined to be a non-empty set (the updated second side information set being empty), the positional relationship between the first region to be identified and the second region to be identified falls into two cases: 1) the first region to be identified is located inside the second region to be identified; 2) no intersection region exists between the first region to be identified and the second region to be identified.
In this case, the method of determining the positional relationship between the two regions to be identified by judging the positional relationship between a point and a geometric figure is selected. For example, whether the region to be identified corresponding to the non-empty set lies inside the other region may be determined by judging whether any point of that region lies inside the other region.
For example, a ray passing through the center point of the second minimum bounding box is determined starting from any point in the first region to be identified. In the case that the number of times of intersection of the ray with the plurality of edges of the second region to be identified is determined to be odd, the intersection region existing between the first region to be identified and the second region to be identified can be determined, and the intersection region existing between the first region to be identified and the second region to be identified can be identified.
In the case where the geometry corresponding to the region to be recognized is a planar geometry, the center point of the minimum bounding box may be considered as the intersection point of the diagonals. In the case that the geometric figure corresponding to the region to be recognized is a solid geometric figure, the center point of the minimum bounding box can be regarded as the intersection point of the body diagonal lines. Therefore, the center point of the minimum bounding box is located inside the minimum bounding box. Therefore, with any point in the first region to be identified as a starting point, the ray passing through the center point of the second minimum bounding box passes through the inside of the second region to be identified.
Fig. 5D schematically shows a schematic diagram of a positional relationship between an arbitrary point and a region to be recognized according to an embodiment of the present disclosure.
As shown in fig. 5D, when the number of intersections of the ray with the edges of the region to be identified is odd, the starting point of the ray is inside the region. It can thus be determined that the first region to be identified is located inside the second region to be identified, and that an intersection region exists between the first region to be identified and the second region to be identified.
When the number of intersections of the ray with the sides of the region to be recognized is even, the starting point of the ray is outside the region to be recognized. It can thus be determined that there is no region of intersection between the first region to be identified and the second region to be identified.
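The even-odd rule can be illustrated with the common horizontal-ray variant of the test (the patent aims the ray through the center of the other region's minimum bounding box; a horizontal ray is used here only to keep the sketch short). The half-open comparison on each edge's Y-values also sidesteps most vertex-crossing ambiguities:

```python
def point_in_region(point, vertices):
    """Even-odd ray casting: cast a ray to the right of `point` and
    count its crossings with the region's edges; odd means inside."""
    x, y = point
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Count the edge only when it straddles the ray's Y level
        # (half-open rule, so a shared vertex is counted once).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside
```

An odd crossing count places the starting point inside the region, matching the determination in operation S533.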
Fig. 5E schematically shows a flow chart for identifying an intersection area between the first and second areas to be identified according to another embodiment of the present disclosure.
As shown in fig. 5E, in the case where the ray passes through the vertex of the second region to be recognized, recognizing the intersection region between the first and second regions to be recognized further includes operations S534 to S535 on the basis of operations S531 to S533.
For example, an example method of identifying an intersection region between first and second regions to be identified may include operations S534 to S535.
In operation S534, in a case where it is determined that the two adjacent edges of the vertex in the second region are located on the same side of the ray, the number of intersections of the ray with the edges of the second region to be identified is incremented by one; and
in operation S535, in case that it is determined that the number of intersections of the ray with the plurality of edges of the second region to be recognized is an odd number, an intersection region existing between the first and second regions to be recognized is recognized.
In the embodiment of the present disclosure, when a ray passes through a vertex of the second area to be identified, it is necessary to further determine a position relationship between the vertex and the ray, and further determine whether a starting point of the ray is inside the second area to be identified.
Fig. 5F schematically shows a schematic diagram of a positional relationship between an arbitrary point and a region to be recognized according to another embodiment of the present disclosure.
As shown in fig. 5F, when a ray passes through a vertex of the second region to be identified and the two adjacent edges of that vertex are determined to lie on the same side of the ray, the count of intersections of the ray with the edges of the second region to be identified is incremented by one. When the two adjacent edges of the vertex lie on opposite sides of the ray, the intersection count remains unchanged. If the updated intersection count is odd, it is determined that an intersection region exists between the first region to be identified and the second region to be identified, and that intersection region is then identified.
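The same-side check for a vertex hit can be expressed with 2D cross products (a sketch only; degenerate cases such as a neighbor vertex lying exactly on the ray's line are not handled here):

```python
def neighbors_on_same_side(origin, direction, neighbor_a, neighbor_b):
    """True if the two edges adjacent to a vertex hit by the ray leave
    on the same side of the ray, i.e. the case where the crossing count
    should be incremented by one."""
    def side(p):
        # Sign of the 2D cross product of the ray direction with (p - origin):
        # positive means p is to the left of the ray, negative to the right.
        return (direction[0] * (p[1] - origin[1])
                - direction[1] * (p[0] - origin[0])) > 0
    return side(neighbor_a) == side(neighbor_b)
```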
Through this embodiment, when only one of the two updated side information sets is a non-empty set, whether an intersection region exists between the two regions to be identified is determined solely by judging the positional relationship between a point and a geometric figure. Determining the intersection region through edge-to-edge positional relationships between the two side information sets is thereby avoided, reducing the amount of calculation in the judgment process.
Fig. 5G schematically shows a flow chart for identifying an intersection area between the first and second areas to be identified according to another embodiment of the present disclosure.
And under the condition that the updated first side information set and the updated second side information set are both non-empty sets, identifying an intersection area between the first to-be-identified area and the second to-be-identified area according to the position relation of the updated first side information set and the updated second side information set.
As shown in fig. 5G, for example, an example method of identifying an intersection area between the first and second areas to be identified may include operations S541 to S542.
In operation S541, intersection points between the straight lines where the plurality of sides in the updated first side information set are located and the straight lines where the plurality of sides in the updated second side information set are located are respectively determined; and
in operation S542, in a case where it is determined that the intersection points are all located on the edge, an intersection region existing between the first and second regions to be recognized is recognized.
In the embodiment of the present disclosure, since both the updated two side information sets are non-empty sets, in this case, the method of determining the position relationship between the two regions to be identified by determining the position relationship between the sides of the two geometric figures is selected. For example, traversing the updated first and second side information sets, and determining whether an edge in the updated first side information set intersects with an edge in the updated second side information set, thereby determining whether an intersection region exists between the first to-be-identified region and the second to-be-identified region, and further identifying the intersection region existing between the first to-be-identified region and the second to-be-identified region.
For example, the equation of the line containing each edge in the two updated edge information sets may be determined. Whether edges from the two updated edge information sets intersect is then determined by comparing the slopes of the line equations and calculating the coordinates of the intersection point of the two lines.
Fig. 5H schematically illustrates a schematic diagram of the edge-to-edge positional relationship of the region to be identified according to an embodiment of the present disclosure.
As shown in fig. 5H(a), when the slope of the line containing an edge of the first region to be identified is the same as the slope of the line containing an edge of the second region to be identified, the two edges are parallel and do not intersect. As shown in fig. 5H(b), when the slopes of the two lines differ and their intersection point is located on both edges, it is determined that the two edges intersect. As shown in fig. 5H(c), when the slopes of the two lines differ but their intersection point is not located on both edges, it is determined that the two edges do not intersect.
Whether the intersection point of the lines is located on the two edges may be determined by comparing the coordinates of the intersection point with the coordinates of the end points of the two edges.
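The slope-and-intersection-point test can be sketched in parametric form, which also avoids dividing by a slope for vertical edges (a minimal illustration, not the patent's exact computation):

```python
def edges_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 intersects segment q1-q2.
    Solves p1 + t*(p2 - p1) = q1 + u*(q2 - q1) and checks that the
    line intersection point lies on both segments (0 <= t, u <= 1)."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    sx, sy = q2[0] - q1[0], q2[1] - q1[1]
    denom = rx * sy - ry * sx
    if denom == 0:
        return False  # equal slopes: parallel (or collinear) lines
    t = ((q1[0] - p1[0]) * sy - (q1[1] - p1[1]) * sx) / denom
    u = ((q1[0] - p1[0]) * ry - (q1[1] - p1[1]) * rx) / denom
    return 0 <= t <= 1 and 0 <= u <= 1
```

The three branches correspond to fig. 5H: zero denominator for case (a), `t` and `u` both in [0, 1] for case (b), and an off-segment intersection point otherwise, as in case (c).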
After it is determined that an edge in the updated first edge information set and an edge in the updated second edge information set are in an intersecting relationship, the intersection region existing between the first region to be identified and the second region to be identified is identified.
According to the embodiment of the disclosure, under the condition that the two updated side information sets are determined to be non-empty sets, whether the two areas to be identified have the intersection areas is determined by a method for judging the position relation between the sides of the side information sets. The image identification method classifies the updated side information set and preliminarily classifies the position relation of the two areas to be identified. And selecting a corresponding judgment method for the two areas to be identified with each position relationship, so that the two areas to be identified with simple position relationship are judged by a simple identification method, and the two areas to be identified with relatively complex position relationship are judged by a relatively complex identification method. The image identification method provided by the disclosure eliminates redundant calculation steps in the judgment process, reduces the overall calculation amount in the judgment process, saves the identification time and improves the identification efficiency.
To more fully demonstrate the effectiveness and advantages of the image recognition method provided by the embodiment of the present disclosure, recognition experiments were performed with the method on three kinds of surface data of different complexity. The intersection judgment experiment on two surfaces was repeated 30 times, and the time required for the judgment was measured. The three kinds of surface data contain 25,000, 90,000, and 190,000 vertices, respectively.
FIG. 5I schematically shows a comparison of the recognition speed of the image recognition method according to the present disclosure and that of the JTS library. The JTS library is a package of geometry algorithms contained in the open-source component library GeoTools provided by the Open Source Geospatial Foundation (OSGeo). The JTS library uses the nine-intersection model as the basis for judging spatial position relationships.
As shown in fig. 5I, for the intersection judgment experiment on a complex surface with 25,000 vertices, the JTS library took about 255 milliseconds on average, while the image recognition method provided by the present disclosure took about 54 milliseconds on average. For a complex surface with 90,000 vertices, the JTS library took 1.6 seconds on average, versus about 270 milliseconds for the disclosed method. For a complex surface with 190,000 vertices, the JTS library took about 16.4 seconds on average, versus about 423 milliseconds for the disclosed method.
As is apparent from the result data provided in fig. 5I, the image recognition method provided by the embodiment of the present disclosure performs intersection judgment considerably faster than the prior-art approach using the JTS library.
Fig. 6 schematically shows a block diagram of an image recognition apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the image recognition apparatus 600 includes a first determination module 610, a deletion module 620, a second determination module 630, and a third determination module 640.
The first determining module 610 is configured to determine a first minimum bounding box and a first edge information set of a first region to be identified and a second minimum bounding box and a second edge information set of a second region to be identified in the image, respectively, where the first and second edge information sets respectively include position information of a plurality of edges of the first and second regions to be identified.
A deleting module 620, configured to delete the position information of the target edge from the first and second edge information sets, respectively, where the target edge includes an edge located outside the second minimum bounding box among the multiple edges of the first to-be-identified region, and an edge located outside the first minimum bounding box among the multiple edges of the second to-be-identified region, so as to obtain the updated first edge information set and the updated second edge information set.
The second determining module 630 is configured to, when it is determined that the updated first side information set or the updated second side information set is a non-empty set, identify an intersection area between the first to-be-identified area and the second to-be-identified area according to a position relationship between any point in the to-be-identified area corresponding to the non-empty set and another to-be-identified area.
The third determining module 640 is configured to, when it is determined that the updated first side information set and the updated second side information set are both non-empty sets, identify an intersection area between the first and second areas to be identified according to a position relationship between the updated first side information set and the updated second side information set.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the first determining module 610, the deleting module 620, the second determining module 630 and the third determining module 640 may be combined and implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the first determining module 610, the deleting module 620, the second determining module 630, and the third determining module 640 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or may be implemented in any one of three implementations of software, hardware, and firmware, or in a suitable combination of any of them. Alternatively, at least one of the first determining module 610, the deleting module 620, the second determining module 630 and the third determining module 640 may be at least partially implemented as a computer program module, which when executed, may perform a corresponding function.
According to an embodiment of the present disclosure, the first determining module 610 includes: the construction unit is used for constructing a rectangular coordinate system by taking any point in the image as an origin; the device comprises a first acquisition unit, a second acquisition unit and a recognition unit, wherein the first acquisition unit is used for acquiring coordinate information of a plurality of edges of an area to be recognized; and a first determining unit, configured to determine, according to the coordinate information of the multiple edges, an edge information set of the to-be-identified region.
According to an embodiment of the present disclosure, the deleting module 620 includes: a second obtaining unit configured to obtain coordinate information of the first and second minimum bounding boxes; a second determining unit, configured to determine position information of the target edges of the first edge information set according to the coordinate information of the plurality of edges in the first edge information set and the coordinate information of the second minimum bounding box, and determine position information of the target edges of the second edge information set according to the coordinate information of the plurality of edges in the second edge information set and the coordinate information of the first minimum bounding box; and a deleting unit configured to delete the position information of the target edges from the first and second edge information sets.
According to an embodiment of the present disclosure, the second determining module 630 includes: a third determining unit for determining the center point of the second minimum bounding box; a fourth determining unit for determining a ray passing through the center point, with any point in the first region to be identified as the starting point; and a fifth determining unit configured to identify the intersection area existing between the first and second areas to be identified in a case where it is determined that the number of intersections of the ray with the plurality of edges of the second area to be identified is odd.
The second determining module 630 further includes a sixth determining unit, configured to add one to the number of intersections of the ray with the plurality of edges of the second region to be identified when it is determined that the two adjacent edges of the vertex in the second region are located on the same side of the ray; and a seventh determining unit configured to identify the intersection area existing between the first and second areas to be identified in a case where it is determined that the number of intersections of the ray with the plurality of edges of the second area to be identified is odd.
According to an embodiment of the present disclosure, the third determining module 640 includes: an eighth determining unit for respectively determining the intersection points of the lines containing the plurality of edges in the updated first side information set and the lines containing the plurality of edges in the updated second side information set; and a ninth determining unit for identifying the intersection area existing between the first and second areas to be identified in a case where it is determined that the intersection points are all located on the edges.
It should be noted that the image recognition device portion in the embodiment of the present disclosure corresponds to the image recognition method portion in the embodiment of the present disclosure, and the description of the image recognition device portion specifically refers to the image recognition method portion, which is not described herein again.
Fig. 7 schematically shows a block diagram of an electronic device adapted to implement the above described method according to an embodiment of the present disclosure. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 703, various programs and data necessary for the operation of the system 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. The processor 701 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. It is noted that the programs may also be stored in one or more memories other than the ROM 702 and RAM 703. The processor 701 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 700 may also include an input/output (I/O) interface 705, the input/output (I/O) interface 705 also being connected to the bus 704. The system 700 may also include one or more of the following components connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by the processor 701, performs the above-described functions defined in the system of the embodiment of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 702 and/or the RAM 703 and/or one or more memories other than the ROM 702 and the RAM 703 described above.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. An image recognition method, comprising:
respectively determining a first minimum bounding box and a first side information set of a first region to be identified and a second minimum bounding box and a second side information set of a second region to be identified in an image, wherein the first and second side information sets respectively comprise position information of a plurality of sides of the first and second regions to be identified;
deleting position information of target sides from the first side information set and the second side information set respectively to obtain an updated first side information set and an updated second side information set, wherein the target sides comprise sides, among the plurality of sides of the first region to be identified, located outside the second minimum bounding box, and sides, among the plurality of sides of the second region to be identified, located outside the first minimum bounding box;
in a case where it is determined that the updated first side information set or the updated second side information set is a non-empty set, identifying an intersection area between the first region to be identified and the second region to be identified according to a positional relationship between any point in the region to be identified corresponding to the non-empty set and the other region to be identified; and
in a case where it is determined that the updated first side information set and the updated second side information set are both non-empty sets, identifying an intersection area between the first region to be identified and the second region to be identified according to a positional relationship between the updated first side information set and the updated second side information set.
2. The method according to claim 1, wherein, in a case where the updated first side information set is a non-empty set, the identifying of an intersection area between the first region to be identified and the second region to be identified according to a positional relationship between any point in the region to be identified corresponding to the non-empty set and the other region to be identified comprises:
determining a center point of the second minimum bounding box;
determining a ray passing through the center point, taking any point in the first region to be identified as a starting point; and
in a case where it is determined that the number of intersections between the ray and the plurality of sides of the second region to be identified is odd, identifying an intersection area existing between the first region to be identified and the second region to be identified.
3. The method according to claim 2, wherein, in a case where the ray passes through a vertex of the second region to be identified, the identifying of an intersection area between the first region to be identified and the second region to be identified according to the positional relationship between any point in the region to be identified corresponding to the non-empty set and the other region to be identified further comprises:
in a case where the two sides adjacent to the vertex in the second region to be identified are located on the same side of the ray, incrementing the number of intersections between the ray and the sides of the second region to be identified by one; and
in a case where it is determined that the number of intersections between the ray and the plurality of sides of the second region to be identified is odd, identifying an intersection area existing between the first region to be identified and the second region to be identified.
4. The method according to claim 1, wherein the identifying of an intersection area between the first region to be identified and the second region to be identified according to the positional relationship between the updated first side information set and the updated second side information set comprises:
respectively determining the intersection points of the straight lines on which the plurality of sides in the updated first side information set are located with the straight lines on which the plurality of sides in the updated second side information set are located; and
in a case where it is determined that the intersection points are all located on the sides, identifying an intersection area existing between the first region to be identified and the second region to be identified.
5. The method of claim 1, wherein determining a set of side information for a region to be identified in an image comprises:
constructing a rectangular coordinate system by taking any point in the image as an origin;
acquiring coordinate information of the plurality of sides of the region to be identified; and
determining the side information set of the region to be identified according to the coordinate information of the plurality of sides.
6. The method of claim 5, wherein the deleting of position information of target sides from the first side information set and the second side information set respectively comprises:
acquiring coordinate information of the first minimum bounding box and the second minimum bounding box;
determining the position information of the target sides of the first side information set according to the coordinate information of the plurality of sides in the first side information set and the coordinate information of the second minimum bounding box, and determining the position information of the target sides of the second side information set according to the coordinate information of the plurality of sides in the second side information set and the coordinate information of the first minimum bounding box; and
deleting the position information of the target sides from the first and second side information sets.
7. The method according to any one of claims 1-6, wherein the plurality of sides of each region to be identified form a geometric figure.
8. An image recognition apparatus comprising:
a first determining module for respectively determining a first minimum bounding box and a first side information set of a first region to be identified and a second minimum bounding box and a second side information set of a second region to be identified in an image, wherein the first and second side information sets respectively comprise position information of a plurality of sides of the first and second regions to be identified;
a deleting module for deleting position information of target sides from the first and second side information sets respectively to obtain an updated first side information set and an updated second side information set, wherein the target sides comprise sides, among the plurality of sides of the first region to be identified, located outside the second minimum bounding box, and sides, among the plurality of sides of the second region to be identified, located outside the first minimum bounding box;
a second determining module for identifying, in a case where it is determined that the updated first side information set or the updated second side information set is a non-empty set, an intersection area between the first region to be identified and the second region to be identified according to a positional relationship between any point in the region to be identified corresponding to the non-empty set and the other region to be identified; and
a third determining module for identifying, in a case where the updated first side information set and the updated second side information set are both non-empty sets, an intersection area between the first region to be identified and the second region to be identified according to a positional relationship between the updated first side information set and the updated second side information set.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 7.
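As a concrete illustration of the parity rule underlying claims 2 and 3, the ray-casting test decides containment by counting how many of a polygon's sides a ray from the query point crosses: an odd count means the point lies inside. The sketch below is a minimal illustration under simplifying assumptions, not the patented implementation; all identifiers are hypothetical, and the vertex case of claim 3 is handled here by the common half-open rule rather than the explicit adjacent-side check of the claim.

```python
# Ray-casting point-in-polygon test: cast a horizontal ray from pt to the
# right and count how many polygon sides it crosses; an odd count means pt
# lies inside (the parity rule of claim 2). The half-open comparison on
# each side's y-range avoids double-counting when the ray passes exactly
# through a vertex -- the situation claim 3 addresses explicitly.

def point_in_polygon(pt, poly):
    inside = False
    n = len(poly)
    for i in range(n):
        (ax, ay) = poly[i]
        (bx, by) = poly[(i + 1) % n]
        if (ay > pt[1]) != (by > pt[1]):  # side straddles the ray's y-level
            # x-coordinate where the side meets the horizontal line through pt
            x_cross = ax + (pt[1] - ay) * (bx - ax) / (by - ay)
            if x_cross > pt[0]:
                inside = not inside  # flip parity on each crossing
    return inside

square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(point_in_polygon((2.0, 2.0), square))  # inside: one crossing (odd)
print(point_in_polygon((5.0, 2.0), square))  # outside: no crossings (even)
```

A point such as (-1, 0), whose ray passes through two vertices of the square, still tests as outside under the half-open rule, since the two crossings cancel.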
CN202111168238.1A 2021-09-30 2021-09-30 Image recognition method and device, electronic equipment and readable storage medium Pending CN113901914A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111168238.1A CN113901914A (en) 2021-09-30 2021-09-30 Image recognition method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN113901914A true CN113901914A (en) 2022-01-07

Family

ID=79190132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111168238.1A Pending CN113901914A (en) 2021-09-30 2021-09-30 Image recognition method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113901914A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116206215A (en) * 2023-03-17 2023-06-02 银河航天(北京)网络技术有限公司 Forest land state monitoring method, forest land state monitoring device and storage medium
CN116206215B (en) * 2023-03-17 2023-09-29 银河航天(北京)网络技术有限公司 Forest land state monitoring method, forest land state monitoring device and storage medium

Similar Documents

Publication Publication Date Title
US11461999B2 (en) Image object detection method, device, electronic device and computer readable medium
CN111582263A (en) License plate recognition method and device, electronic equipment and storage medium
EP3794312B1 (en) Indoor location-based service
US20090144028A1 (en) Method and apparatus of combining mixed resolution databases and mixed radio frequency propagation techniques
US9824494B2 (en) Hybrid surfaces for mesh repair
CN112287430B (en) Building wall generation method and device, computer equipment and storage medium
CN113901914A (en) Image recognition method and device, electronic equipment and readable storage medium
EP3711027B1 (en) System and method for drawing beautification
CN114445825A (en) Character detection method and device, electronic equipment and storage medium
CN113867371B (en) Path planning method and electronic equipment
CN111765892B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
US20230004751A1 (en) Clustering Method and Apparatus for Spatial Points, and Electronic Device
CN112800873A (en) Method, device and system for determining target direction angle and storage medium
US11461971B1 (en) Systems and methods for interactively extrapolating breaklines over surfaces
US11232630B2 (en) Systems and methods for terrain modification at runtime using displacement mapping
CN117649530B (en) Point cloud feature extraction method, system and equipment based on semantic level topological structure
CN110019627B (en) Method, system and computer system for identifying traffic diversion line
US20230045344A1 (en) Generation of synthetic images of abnormalities for training a machine learning algorithm
US20220050210A1 (en) Method, apparatus for superimposing laser point clouds and high-precision map and electronic device
CN114937126A (en) Flattening editing method, device and equipment for quantized grid terrain and storage medium
CN114547366A (en) Area search processing method and device, electronic equipment and storage medium
CN112132885A (en) Image processing method, image processing apparatus, computer device, and medium
CN114549752A (en) Three-dimensional graphic data processing method, device, equipment, storage medium and product
JP2023122614A (en) Vehicle attitude estimating method, apparatus, electronic device, storage medium, and program
CN118070409A (en) Method, electronic device, program product and medium for generating a road rights structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination