CN108564571B - Image area selection method and terminal equipment - Google Patents



Publication number
CN108564571B
Authority
CN
China
Prior art keywords
image
detected
position information
standard
region
Prior art date
Legal status
Active
Application number
CN201810276670.4A
Other languages
Chinese (zh)
Other versions
CN108564571A (en)
Inventor
孔庆杰
Current Assignee
Riseye Intelligent Technology Shenzhen Co ltd
Original Assignee
Riseye Intelligent Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Riseye Intelligent Technology Shenzhen Co ltd filed Critical Riseye Intelligent Technology Shenzhen Co ltd
Priority to CN201810276670.4A priority Critical patent/CN108564571B/en
Publication of CN108564571A publication Critical patent/CN108564571A/en
Application granted granted Critical
Publication of CN108564571B publication Critical patent/CN108564571B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, and provides an image area selection method and terminal equipment. The method comprises the following steps: acquiring a standard image and selecting a preselected region from the standard image according to the pixel characteristics of a target to be detected; extracting and storing the position information of the preselected region in the standard image; acquiring an image to be detected, comparing it with the standard image, and adjusting it so that its direction is consistent with that of the standard image; and selecting an image of a region to be detected from the adjusted image according to the position information. Because the position of the region to be detected in the image to be detected is determined from the position information of the preselected region in the standard image, the region to be detected can still be selected accurately even when the image features of the image to be detected are not obvious, greatly improving selection accuracy.

Description

Image area selection method and terminal equipment
Technical Field
The invention relates to the technical field of image processing, in particular to an image area selection method and terminal equipment.
Background
In image inspection of industrial products, selecting the region to be detected in an image is a crucial step, and the efficiency and accuracy of this selection strongly influence the whole production flow. In the prior art, the region to be detected is usually selected according to image features, for example by framing the corresponding positions in the image according to the pixel distribution or by a neural network technique. However, when the image features are not obvious (e.g., the pixel distribution is uniform and the pixel value differences are small), it is difficult to select the region to be detected accurately from the image features, which easily leads to selection errors.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image region selection method and a terminal device, so as to solve the problem that selection errors are easily caused when a region to be detected is selected according to image features under the condition that the image features are not obvious at present.
A first aspect of an embodiment of the present invention provides an image region selection method, including:
acquiring a standard image, and selecting a preselected area from the standard image according to the pixel characteristics of a target to be detected;
extracting and storing the position information of the preselected area in the standard image;
acquiring an image to be detected, comparing the image to be detected with the standard image, and adjusting the image to be detected into an image with the direction consistent with that of the standard image;
and selecting an image of a region to be detected from the adjusted image to be detected according to the position information.
A second aspect of an embodiment of the present invention provides an image area selection apparatus, including:
the acquisition module is used for acquiring a standard image and selecting a preselected area from the standard image according to the pixel characteristics of a target to be detected;
the extraction module is used for extracting and storing the position information of the preselected area in the standard image;
the adjustment module is used for acquiring an image to be detected, comparing the image to be detected with the standard image and adjusting the image to be detected into an image with the direction consistent with that of the standard image;
and the processing module is used for selecting the image of the area to be detected from the adjusted image to be detected according to the position information.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the image area selection method in the first aspect when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the image region selection method in the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects: the position information of the preselected region in the standard image is extracted and stored, the image to be detected is compared with the standard image, the image to be detected is adjusted to be an image with the same direction as the standard image, the image of the region to be detected is selected from the adjusted image to be detected according to the stored position information, and the position of the region to be detected in the image to be detected can be determined by utilizing the position information of the preselected region in the standard image. The accuracy of selecting the to-be-detected area depends on the accuracy of the position information, the image characteristics of the standard image are more obvious, and the accuracy of the position information of the pre-selected area can be ensured, so that the to-be-detected area can be accurately selected from the to-be-detected image according to the position information under the condition that the image characteristics of the to-be-detected image are not obvious, and the accuracy of selecting the to-be-detected area is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating an implementation of an image region selection method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating an implementation of extracting location information in an image region selection method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an implementation of adjusting an image to be detected in an image region selection method according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating an implementation of selecting a to-be-detected region in the image region selection method according to the embodiment of the present invention;
fig. 5 is a flowchart illustrating an implementation of detecting an image of a to-be-detected region in an image region selection method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an image to be inspected before and after adjustment according to an embodiment of the present invention;
FIG. 7 is a diagram of an image area selection apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a flowchart of an implementation of the image region selection method according to the embodiment of the present invention, which is detailed as follows:
in S101, a standard image is acquired, and a preselected region is selected from the standard image according to the pixel characteristics of the object to be inspected.
In this embodiment, the target to be detected is a target product to be detected. The standard image is an image containing an object to be inspected and is used as a reference template, and one standard image can contain one or more objects to be inspected. The pixel characteristics of the object to be inspected are the characteristics of the object to be inspected appearing in the image, and the pixel characteristics of the object to be inspected may include, but are not limited to, one or more of shape characteristics, size characteristics and edge profile characteristics of the object to be inspected. The preselected regions are regions of the object to be inspected in the standard image, and one standard image may include one or more preselected regions, with one-to-one correspondence between the object to be inspected and the preselected regions. The image characteristics of the standard image are obvious, so that the pre-selection area can be accurately selected from the standard image according to the pixel characteristics of the target to be detected.
Optionally, after the standard image is acquired, before the preselected region is selected from the standard image according to the pixel characteristics of the target to be detected, the acquired standard image may be subjected to gray scale processing, and the standard image is converted into a gray scale image, so that the complexity of the standard image pixel is reduced, and the accuracy of the preselected region selection is improved.
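The optional graying step above can be sketched as follows. This is a minimal illustration, not the patent's prescribed method: the patent does not specify a conversion formula, so the common ITU-R BT.601 luma weights are assumed here.

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (a row-major nested list of (r, g, b) tuples)
    to a grayscale image of integer intensities, using the widely used
    ITU-R BT.601 luma weights (an assumption; the patent leaves this open)."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]

# A 1x2 image: one pure-red pixel and one pure-white pixel.
image = [[(255, 0, 0), (255, 255, 255)]]
gray = to_grayscale(image)  # red maps to 76, white to 255
```

Working on the single-channel result reduces the pixel complexity of the standard image, as the paragraph above describes.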
In S102, the position information of the preselected area in the standard image is extracted and saved.
In this embodiment, the position information of the preselected region may be extracted from the standard image; the position information includes the position coordinates of the preselected region in the standard image. The specific parameters of the position coordinates may be set according to the shape of the preselected region. For example, if the preselected region is a rectangle, the parameters may be the vertex coordinates at the two ends of either diagonal of the rectangle, such as the upper-left and lower-right vertex coordinates; if the preselected region is a circle, the parameters may be the coordinates of the circle's center and the radius of the circle.
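The two coordinate conventions just described (diagonal vertices for a rectangle, center plus radius for a circle) can be captured with simple records. The record names and field layout below are hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass
class RectRegion:
    """Rectangular preselected region stored as two diagonal vertices."""
    top_left: tuple      # (x, y) of the upper-left vertex
    bottom_right: tuple  # (x, y) of the lower-right vertex

@dataclass
class CircleRegion:
    """Circular preselected region stored as center plus radius."""
    center: tuple  # (x, y) of the circle's center
    radius: float

region = RectRegion(top_left=(10, 20), bottom_right=(110, 80))
width = region.bottom_right[0] - region.top_left[0]   # 100
height = region.bottom_right[1] - region.top_left[1]  # 60
```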
As an embodiment of the present invention, as shown in fig. 2, S102 may include:
in S201, the position information of the preselected area in the standard image is extracted.
In S202, classification information of the standard image is acquired.
In this embodiment, the standard image may have a plurality of types, and the classification information of the standard image may include a classification number. Optionally, the standard image may be classified according to the viewing angle of the object to be inspected in the standard image, for example, the standard image may be divided into a front view image, a top view image and a side view image; the standard images may be classified according to the type or number of the to-be-detected objects contained in the standard images, for example, the standard images containing the to-be-detected object a may be classified into a-type images, and the images containing the to-be-detected object B may be classified into B-type images.
In S203, the location information is added to the information corresponding to the classification information in the search database.
In this embodiment, the search database is a database containing position information of a preselected area corresponding to each type of standard image. The search database may be a search data table or a search data file. The extracted location information may be saved to a corresponding location in a search database according to classification information of the standard image.
The embodiment stores the position information of the preselected area according to the classification information of the standard images, can make the position information stored more orderly, and stores the position information corresponding to various standard images in the retrieval database, thereby being beneficial to the centralized management and update of the position information.
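Steps S202–S203 (and the later lookup in S402) can be sketched as an in-memory mapping from classification information to stored position information. The key scheme used here ("front-view/A") is illustrative only; the patent only requires that images of the same class share classification information.

```python
# Hypothetical retrieval database: classification info -> list of position records.
search_db = {}

def save_position_info(classification, position_info):
    """S203: add position information under the entry for this classification."""
    search_db.setdefault(classification, []).append(position_info)

def lookup_position_info(classification):
    """S402: find the position information matching an image's classification."""
    return search_db.get(classification, [])

save_position_info("front-view/A", {"top_left": (10, 20), "bottom_right": (110, 80)})
regions = lookup_position_info("front-view/A")
```

In practice the mapping could equally be a data table or a data file, as the text notes; the dictionary stands in for either.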
In S103, an image to be detected is obtained, the image to be detected is compared with the standard image, and the image to be detected is adjusted to be an image with the same direction as the standard image.
In this embodiment, the image to be detected is an image that contains the target to be detected and on which target detection needs to be performed. The acquired image to be detected may be oblique and therefore needs adjustment: a transformation matrix can be calculated from the pixel characteristics of the image to be detected and those of the standard image, and the image to be detected is then transformed by this matrix to complete the adjustment.
Fig. 6 is a schematic diagram of an image to be inspected before and after adjustment according to an embodiment of the present invention, where fig. 6(a) is the image to be inspected before the adjustment, and fig. 6(b) is the image to be inspected after the adjustment.
Optionally, after the image to be detected is obtained and before it is compared with the standard image, gray processing can be performed on the image to be detected, converting it into a grayscale image. This reduces the pixel complexity of the image to be detected and the difficulty of its subsequent processing, thereby improving the efficiency and accuracy of selecting the region to be detected.
As an embodiment of the present invention, as shown in fig. 3, S103 may include:
in S301, a first straight line point set in the to-be-detected image and a second straight line point set in the standard image are respectively selected; the first set of straight line points and the second set of straight line points correspond.
In this embodiment, the linear point set is a set of pixels on a certain line in the image. The point sets corresponding to the straight lines can be selected from the image to be detected and the standard image respectively. For example, if the to-be-detected image includes a rectangle, a pixel point set on a long side of the rectangle may be selected as a first straight line point set, a rectangular side corresponding to the long side is searched for in the standard image, and a pixel point set corresponding to the rectangular side is selected as a second straight line point set.
Optionally, a pixel point set of a longest straight line in the to-be-detected image may be selected as the first straight line point set, and a straight line pixel point set corresponding to the straight line may be selected as the second straight line point set in the standard image. The calculation of the transformation matrix is carried out by selecting the point set of the longest straight line, so that the transformation matrix is more accurate, and the alignment effect of the image to be detected is improved.
In S302, a first angle and a second angle are calculated, respectively; the first included angle is an included angle between the first straight line point set and a first preset coordinate axis in the to-be-detected image, and the second included angle is an included angle between the second straight line point set and a second preset coordinate axis in the standard image; the first preset coordinate axis corresponds to the second preset coordinate axis.
In this embodiment, the preset coordinate axis may be a coordinate axis of any angle, such as a horizontal coordinate axis or a vertical coordinate axis. Coordinate axes can be established in the image to be detected, the first straight line point set is fitted to obtain an equation of straight lines corresponding to the first straight line point set, and a first included angle is calculated according to the equation and the first preset coordinate axes. Correspondingly, coordinate axes can be established in the standard image, the second straight line point set is fitted to obtain an equation of a straight line corresponding to the second straight line point set, and a second included angle is calculated according to the equation and the second preset coordinate axis.
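The angle computation in S302 can be sketched as a least-squares line fit followed by an arctangent. The fitting method is an assumption; the patent only says the point set is fitted to obtain a line equation from which the included angle is calculated.

```python
import math

def line_angle(points, axis="x"):
    """Fit a straight line to a point set by least squares and return its
    angle (radians) to the horizontal ("x") or vertical ("y") axis.
    Sketch of S302; the exact fitting method is not prescribed by the patent."""
    n = len(points)
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    # Numerator and denominator of the least-squares slope of y = a*x + b.
    sxy = sum((p[0] - mean_x) * (p[1] - mean_y) for p in points)
    sxx = sum((p[0] - mean_x) ** 2 for p in points)
    angle_to_x = math.atan2(sxy, sxx)   # angle of the fitted line to the x-axis
    if axis == "x":
        return angle_to_x
    return math.pi / 2 - angle_to_x     # complement: angle to the vertical axis

pts = [(0, 0), (1, 1), (2, 2)]          # points on a 45-degree line
angle = line_angle(pts)                 # approximately pi / 4
```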
In S303, a rotation matrix is determined according to the first included angle and the second included angle, and affine transformation is performed on the image to be detected according to the rotation matrix.
In this embodiment, the angle at which the to-be-detected image needs to be rotated can be calculated by subtracting the first included angle from the second included angle, so as to determine the rotation matrix. The rotation matrix can be expressed as:
\[ M = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix} \]
wherein M represents a rotation matrix, and alpha is an angle at which the image to be detected needs to be rotated.
In this embodiment, the rotation matrix is obtained from the angle between the straight line point set in the image to be detected and its preset coordinate axis, together with the angle between the corresponding straight line point set in the standard image and its preset coordinate axis. The rotation matrix is then used to adjust the image to be detected, ensuring that the direction of the adjusted image is consistent with that of the standard image and improving the accuracy of selecting the region to be detected.
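The construction of the rotation matrix in S303 can be sketched directly from the two included angles. This sketch rotates points about the origin; a real image library's affine transform would add translation terms so the rotation happens about the image center, which is omitted here for brevity.

```python
import math

def rotation_matrix(angle_first, angle_second):
    """S303: build the rotation matrix M from the two included angles.
    The required rotation is alpha = second angle minus first angle."""
    alpha = angle_second - angle_first
    return [[math.cos(alpha), -math.sin(alpha)],
            [math.sin(alpha),  math.cos(alpha)]]

def rotate_point(m, point):
    """Apply the 2x2 rotation matrix to a single (x, y) point."""
    x, y = point
    return (m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y)

# Line in the image to be detected lies at 0 rad; the corresponding line in
# the standard image lies at pi/2: the image must be rotated by 90 degrees.
m = rotation_matrix(0.0, math.pi / 2)
px, py = rotate_point(m, (1.0, 0.0))  # maps (1, 0) to approximately (0, 1)
```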
In S104, an image of the region to be detected is selected from the adjusted image to be detected according to the position information.
In this embodiment, the position of the region to be detected can be determined in the adjusted image to be detected according to the stored position information, and then the image of the region to be detected is selected.
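For a rectangular region, S104 amounts to cropping the adjusted image with the saved coordinates. A minimal sketch on a row-major nested list follows; the exclusive upper-bound convention is an assumption, not stated in the patent.

```python
def crop_region(image, top_left, bottom_right):
    """S104: cut the region to be detected out of the adjusted image using
    the saved rectangle coordinates. `image` is a row-major nested list;
    the bounds are treated as [inclusive, exclusive) here (an assumption)."""
    (x0, y0), (x1, y1) = top_left, bottom_right
    return [row[x0:x1] for row in image[y0:y1]]

# A 10x10 test image whose pixel value encodes its (row, column) position.
image = [[y * 10 + x for x in range(10)] for y in range(10)]
region = crop_region(image, (2, 3), (5, 6))  # a 3x3 sub-image
```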
According to the embodiment of the invention, the position information of the preselected area in the standard image is extracted and stored, the image to be detected is compared with the standard image, the image to be detected is adjusted to be an image with the same direction as the standard image, the image of the area to be detected is selected from the adjusted image to be detected according to the stored position information, and the position of the area to be detected in the image to be detected can be determined by utilizing the position information of the preselected area in the standard image. The accuracy of selecting the to-be-detected area depends on the accuracy of the position information, the image characteristics of the standard image are more obvious, and the accuracy of the position information of the pre-selected area can be ensured, so that the to-be-detected area can be accurately selected from the to-be-detected image according to the position information under the condition that the image characteristics of the to-be-detected image are not obvious, and the accuracy of selecting the to-be-detected area is greatly improved.
As an embodiment of the present invention, as shown in fig. 4, S104 may include:
in S401, the classification information of the image to be examined is acquired.
In S402, the search database is searched for the position information corresponding to the classification information of the image to be detected.
In S403, the to-be-detected region is located in the to-be-detected image according to the found position information.
In S404, an image of the region to be detected is acquired.
In this embodiment, the classification mode of the to-be-detected image is the same as that of the standard image, and the classification information of the to-be-detected image and the standard image of the same class is the same. Therefore, the corresponding position information can be found in the retrieval database according to the classification information of the image to be detected. The position information is the position information of a preselected area in a standard image corresponding to the image to be detected.
In this embodiment, the position information corresponding to the image to be detected can be found quickly and accurately in the retrieval database according to the classification information of the image to be detected, which speeds up selection of the region to be detected and improves the efficiency of image region selection.
Optionally, when the retrieval database is a data file, the position information corresponding to the classification information of the image to be detected can be obtained quickly through file read and write operations.
As an embodiment of the present invention, as shown in fig. 5, after S104, the method may further include:
in S501, a corresponding relationship between the image of the region to be detected and the image to be detected is established.
In this embodiment, the image to be detected contains at least one region to be detected, and after the regions to be detected are determined, the image of each region to be detected can be cut out as a separate image. The correspondence between the image of each region to be detected and the image to be detected includes the position of that region in the image to be detected. Each cut-out image is obtained by delineating the region boundary using the pixel differences in the image after graying.
In S502, combining the images of the to-be-detected region according to the established corresponding relation to generate a first image; the relative position of each region to be detected in the first image is consistent with the relative position of each region to be detected in the image to be detected.
In this embodiment, according to the established correspondence, the images of the regions to be detected may be combined to generate a first image, and the other regions except for the region image to be detected in the first image may be filled with a preset image. Wherein the preset image may be a solid color image.
Optionally, adjusting the size of the preset image to make the size of the preset image the same as that of the image to be detected; and adding each area image to be detected to the corresponding position of the preset image according to the established corresponding relation to form a first image. The position of each image of the area to be detected in the first image is the same as the position of each area to be detected in the image to be detected.
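The composition of the first image in S501–S502 can be sketched as pasting each region image back at its recorded position on a solid-color canvas the size of the image to be detected. The fill value standing in for the preset solid-color image is an assumption.

```python
def compose_first_image(shape, regions, fill=0):
    """S501-S502 sketch: paste each region image at its recorded position
    on a canvas of the given (height, width), so relative positions match
    those in the image to be detected. `fill` models the preset solid-color
    background (an assumed convention)."""
    height, width = shape
    canvas = [[fill] * width for _ in range(height)]
    for (x0, y0), region_image in regions:
        for dy, row in enumerate(region_image):
            for dx, value in enumerate(row):
                canvas[y0 + dy][x0 + dx] = value
    return canvas

# One 2x2 region image pasted at position (1, 1) on a 4x4 canvas.
first = compose_first_image((4, 4), [((1, 1), [[7, 7], [7, 7]])])
```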
In S503, the first image is detected.
By combining the images of the regions to be detected into the first image and performing the detection on the first image instead of on the image to be detected, this embodiment eliminates the influence that image content outside the regions to be detected would otherwise have on the detection result, improving detection accuracy. Moreover, because content outside the regions to be detected need not be processed, the amount of data to process is reduced and detection efficiency is improved.
Optionally, the images of the regions to be detected can be detected separately, so that a large image (to-be-detected image) is reduced into an image (image of each region to be detected) of a small pixel set of each region to be detected, and the detection speed and the production efficiency can be greatly improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 7 is a schematic diagram of an image area selection apparatus according to an embodiment of the present invention, which corresponds to the image area selection method described in the foregoing embodiments. For convenience of explanation, only the portions related to the present embodiment are shown.
Referring to fig. 7, the apparatus includes an acquisition module 71, an extraction module 72, an adjustment module 73, and a processing module 74.
The acquiring module 71 is configured to acquire a standard image and select a preselected region from the standard image according to a pixel characteristic of the target to be detected.
And the extraction module 72 is used for extracting and saving the position information of the preselected area in the standard image.
And the adjusting module 73 is used for acquiring an image to be detected, comparing the image to be detected with the standard image, and adjusting the image to be detected into an image with the same direction as the standard image.
And the processing module 74 is configured to select an image of the to-be-detected region from the adjusted to-be-detected image according to the position information.
Optionally, the extracting module 72 is configured to:
extracting the position information of the preselected area in the standard image;
acquiring classification information of the standard image;
and adding the position information to information corresponding to the standard image classification information in a retrieval database.
Optionally, the processing module 74 is configured to:
acquiring the classification information of the image to be detected;
searching the retrieval database for the position information corresponding to the classification information of the image to be detected;
positioning the area to be detected in the image to be detected according to the searched position information;
and acquiring an image of the region to be detected.
Optionally, the adjusting module 73 is configured to:
respectively selecting a first straight line point set in the image to be detected and a second straight line point set in the standard image; the first set of straight line points corresponds to the second set of straight line points;
respectively calculating a first included angle and a second included angle; the first included angle is an included angle between the first straight line point set and a first preset coordinate axis in the to-be-detected image, and the second included angle is an included angle between the second straight line point set and a second preset coordinate axis in the standard image; the first preset coordinate axis corresponds to the second preset coordinate axis;
and determining a rotation matrix according to the first included angle and the second included angle, and carrying out affine transformation on the image to be detected according to the rotation matrix.
Optionally, the apparatus may further include a detection module, the detection module being configured to:
establishing a corresponding relation between an image of a region to be detected and the image to be detected;
combining the images of the to-be-detected region according to the established corresponding relation to generate a first image; the relative position of each region to be detected in the first image is consistent with the relative position of each region to be detected in the image to be detected;
and detecting the first image.
According to the embodiment of the invention, the position information of the preselected area in the standard image is extracted and stored, the image to be detected is compared with the standard image, the image to be detected is adjusted to be an image with the same direction as the standard image, the image of the area to be detected is selected from the adjusted image to be detected according to the stored position information, and the position of the area to be detected in the image to be detected can be determined by utilizing the position information of the preselected area in the standard image. The accuracy of selecting the to-be-detected area depends on the accuracy of the position information, the image characteristics of the standard image are more obvious, and the accuracy of the position information of the pre-selected area can be ensured, so that the to-be-detected area can be accurately selected from the to-be-detected image according to the position information under the condition that the image characteristics of the to-be-detected image are not obvious, and the accuracy of selecting the to-be-detected area is greatly improved.
Fig. 8 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 8, the terminal device 8 of this embodiment includes a processor 80, a memory 81, and a computer program 82 stored in the memory 81 and executable on the processor 80. When executing the computer program 82, the processor 80 implements the steps in the method embodiments described above, such as steps 101 to 104 shown in fig. 1. Alternatively, when executing the computer program 82, the processor 80 implements the functions of the modules/units in the device embodiments described above, such as the functions of modules 71 to 74 shown in fig. 7.
Illustratively, the computer program 82 may be divided into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution of the computer program 82 in the terminal device 8. For example, the computer program 82 may be divided into an acquisition module, an extraction module, an adjustment module, and a processing module, whose specific functions are as follows:
the acquisition module is used for acquiring a standard image and selecting a preselected area from the standard image according to the pixel characteristics of a target to be detected;
the extraction module is used for extracting and storing the position information of the preselected area in the standard image;
the adjustment module is used for acquiring an image to be detected, comparing the image to be detected with the standard image, and adjusting the image to be detected into an image whose direction is consistent with that of the standard image;
and the processing module is used for selecting the image of the area to be detected from the adjusted image to be detected according to the position information.
The terminal device 8 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of the terminal device 8 and does not limit it; the terminal device may include more or fewer components than shown, may combine certain components, or may have different components. For example, the terminal device may also include input and output devices, a network access device, a bus, a display, and the like.
The processor 80 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or memory of the terminal device 8. The memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the terminal device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used to store the computer program and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the above embodiments may also be implemented by a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, it implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not cause the corresponding technical solutions to depart in essence from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An image region selection method, comprising:
acquiring a standard image, and selecting a preselected area from the standard image according to the pixel characteristics of a target to be detected, wherein the standard image is an image containing the target to be detected, and the target to be detected is a target product to be detected;
extracting and storing the position information of the preselected area in the standard image;
acquiring an image to be detected, comparing the image to be detected with the standard image, and adjusting the image to be detected into an image with the direction consistent with that of the standard image;
and selecting an image of a region to be detected from the adjusted image to be detected according to the position information.
2. The image region selection method according to claim 1, wherein the extracting and storing the position information of the preselected region in the standard image comprises:
extracting the position information of the preselected area in the standard image;
acquiring classification information of the standard image;
and adding the position information to information corresponding to the classification information in a retrieval database.
3. The image region selection method according to claim 2, wherein the selecting the image of the region to be detected from the adjusted image to be detected according to the position information comprises:
acquiring the classification information of the image to be detected;
searching the retrieval database for the position information corresponding to the classification information of the image to be detected;
positioning the area to be detected in the image to be detected according to the searched position information;
and acquiring an image of the region to be detected.
4. The image region selection method according to claim 1, wherein comparing the image to be detected with the standard image and adjusting the image to be detected into an image whose direction is consistent with that of the standard image comprises:
respectively selecting a first straight-line point set in the image to be detected and a second straight-line point set in the standard image, the first straight-line point set corresponding to the second straight-line point set;
respectively calculating a first included angle and a second included angle, wherein the first included angle is the angle between the first straight-line point set and a first preset coordinate axis in the image to be detected, the second included angle is the angle between the second straight-line point set and a second preset coordinate axis in the standard image, and the first preset coordinate axis corresponds to the second preset coordinate axis; and
determining a rotation matrix according to the first included angle and the second included angle, and performing an affine transformation on the image to be detected according to the rotation matrix.
5. The image region selection method according to any one of claims 1 to 4, further comprising, after selecting the image of the region to be detected from the adjusted image to be detected according to the position information:
establishing a correspondence between each image of a region to be detected and the image to be detected;
combining the images of the regions to be detected according to the established correspondence to generate a first image, wherein the relative position of each region to be detected in the first image is consistent with its relative position in the image to be detected;
and detecting the first image.
6. An image region selection apparatus, comprising:
an acquisition module, used for acquiring a standard image and selecting a preselected region from the standard image according to the pixel characteristics of a target to be detected, wherein the standard image is an image containing the target to be detected, and the target to be detected is a target product to be detected;
the extraction module is used for extracting and storing the position information of the preselected area in the standard image;
the adjustment module is used for acquiring an image to be detected, comparing the image to be detected with the standard image, and adjusting the image to be detected into an image whose direction is consistent with that of the standard image;
and the processing module is used for selecting the image of the area to be detected from the adjusted image to be detected according to the position information.
7. The image region selection apparatus according to claim 6, wherein the extraction module is configured to:
extracting the position information of the preselected area in the standard image;
acquiring classification information of the standard image;
and adding the position information to the information corresponding to the classification information of the standard image in a retrieval database.
8. The image region selection apparatus of claim 7, wherein the processing module is configured to:
acquiring the classification information of the image to be detected;
searching the retrieval database for the position information corresponding to the classification information of the image to be detected;
positioning the area to be detected in the image to be detected according to the searched position information;
and acquiring an image of the region to be detected.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201810276670.4A 2018-03-30 2018-03-30 Image area selection method and terminal equipment Active CN108564571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810276670.4A CN108564571B (en) 2018-03-30 2018-03-30 Image area selection method and terminal equipment


Publications (2)

Publication Number Publication Date
CN108564571A CN108564571A (en) 2018-09-21
CN108564571B true CN108564571B (en) 2020-10-16

Family

ID=63533641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810276670.4A Active CN108564571B (en) 2018-03-30 2018-03-30 Image area selection method and terminal equipment

Country Status (1)

Country Link
CN (1) CN108564571B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113039577A (en) * 2020-08-14 2021-06-25 深圳欣锐科技股份有限公司 Product testing method and device, computer readable storage medium and electronic equipment
CN116740385B (en) * 2023-08-08 2023-10-13 深圳探谱特科技有限公司 Equipment quality inspection method, device and system

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102930555A (en) * 2011-08-11 2013-02-13 深圳迈瑞生物医疗电子股份有限公司 Method and device for tracking interested areas in ultrasonic pictures
CN106803244A (en) * 2016-11-24 2017-06-06 深圳市华汉伟业科技有限公司 Defect identification method and system

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JPH11339053A (en) * 1998-05-26 1999-12-10 Matsushita Electric Works Ltd Position detecting device
CN105184781B (en) * 2015-08-26 2018-02-09 清华大学 Method for registering images and device
CN107256262B (en) * 2017-06-13 2020-04-14 西安电子科技大学 Image retrieval method based on object detection
CN107622252B (en) * 2017-09-29 2022-02-22 百度在线网络技术(北京)有限公司 Information generation method and device
CN107766582A (en) * 2017-11-27 2018-03-06 深圳市唯特视科技有限公司 A kind of image search method based on target regional area


Non-Patent Citations (1)

Title
FRIP: a region-based image retrieval tool using automatic image segmentation and stepwise Boolean AND matching; ByoungChul Ko et al.; IEEE Transactions on Multimedia; 24 Jan. 2005; vol. 7, no. 1; pp. 105-113 *

Also Published As

Publication number Publication date
CN108564571A (en) 2018-09-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PP01 Preservation of patent right

Effective date of registration: 20201019

Granted publication date: 20201016

PD01 Discharge of preservation of patent

Date of cancellation: 20201210

Granted publication date: 20201016