CN110827289B - Method and device for extracting target image in projector definition test - Google Patents

Method and device for extracting target image in projector definition test

Info

Publication number
CN110827289B
Authority
CN
China
Prior art keywords
corner
acquiring
region
target image
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910950484.9A
Other languages
Chinese (zh)
Other versions
CN110827289A (en)
Inventor
赵团伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Optical Technology Co Ltd
Original Assignee
Goertek Optical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Optical Technology Co Ltd filed Critical Goertek Optical Technology Co Ltd
Priority to CN201910950484.9A priority Critical patent/CN110827289B/en
Publication of CN110827289A publication Critical patent/CN110827289A/en
Application granted granted Critical
Publication of CN110827289B publication Critical patent/CN110827289B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T5/70
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179 Video signal processing therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3191 Testing thereof

Abstract

The invention relates to a method and a device for extracting a target image in a projector definition test. The method comprises the following steps: acquiring a test image projected by a projector, wherein the test image is a checkerboard image with inclined squares, and the corner points in the test image have preset corner point identifiers; acquiring the corner points according to the corner point identifiers; equally dividing the test image into a plurality of regions, and acquiring, according to the corner points in each region, positioning points corresponding to each region and used for determining the position of the target image; and acquiring a target image corresponding to each region according to the positioning points corresponding to each region and a preset contour size of the target image.

Description

Method and device for extracting target image in projector definition test
Technical Field
The invention relates to the technical field of projector testing, and in particular to a method for extracting a target image in a projector definition test, a method for acquiring a test pattern in a projector definition test, a device for extracting a target image in a projector definition test, and an electronic device.
Background
The projector, as a screen-less television, has entered ordinary households, and with the continuous improvement of network speeds, people place ever higher demands on the definition of television pictures; 720p, 1080p and 4K have all become standards for choosing a television.
For such DLP (Digital Light Processing) projectors, the definition needs to be tested before shipment. At present, DLP definition is typically judged manually by visual inspection, which is highly subjective, low in accuracy, tiring for the operator, and inefficient.
In order to automate the definition test of a DLP projector, an industrial camera can be used to photograph a test image projected by the projector, a target image usable for definition analysis is extracted from the test image, and the definition of the projector is obtained based on the target image. One difficulty of this automatic test is extracting, from the image captured by the industrial camera, the target image used for definition analysis.
Disclosure of Invention
The invention aims to provide a new technical scheme for extracting a target image in a projector definition test.
According to a first aspect of the present invention, there is provided a method for extracting a target image in a projector definition test, including:
acquiring a test image projected by a projector, wherein the test image is a checkerboard image with inclined squares, and the corner points in the test image have preset corner point identifiers;
acquiring the corner points according to the corner point identifiers;
equally dividing the test image into a plurality of regions, and acquiring, according to the corner points in each region, positioning points corresponding to each region and used for determining the position of a target image;
and acquiring a target image corresponding to each region according to the positioning point corresponding to each region and the preset contour size of the target image.
Optionally, the corner points in the test image having preset corner point identifiers includes:
the corner points in the test image have corner point identifiers whose color differs from the colors of the squares in the checkerboard image;
the acquiring the corner points according to the corner point identifiers includes:
acquiring the corner points according to the colors of the corner point identifiers.
Optionally, the acquiring the corner points according to the colors of the corner point identifiers includes:
filtering the test image according to the color of the corner point identifiers to obtain an identifier region corresponding to each corner point identifier;
and obtaining the corner points according to the center of gravity of each identifier region.
Optionally, the obtaining, according to the corner points in each of the regions, a locating point corresponding to each of the regions and used for determining a position of the target image includes:
selecting three corner points closest to the center of each region;
and acquiring positioning points for determining the position of the target image according to the three corner points selected in each region.
Optionally, the corner points in the test image having preset corner point identifiers includes:
three corner points in each region of the test image, which are closest to the center position of the region, are provided with preset corner point identifications;
the selecting three corner points closest to the center position of the region in each region comprises:
and acquiring the three corner points closest to the center of the region according to the corner point identification in each region.
Optionally, the obtaining, according to the three corner points selected in each of the regions, a positioning point for determining a position of the target image includes:
acquiring a positioning connecting line for determining the position of a target image according to the inclination angle of a connecting line formed by three corner points selected in each region;
and acquiring the midpoint of the positioning connecting line as the positioning point.
Optionally, the obtaining a positioning connection line for determining the position of the target image according to an inclination angle of a connection line formed by three corner points selected in each of the regions includes:
acquiring three connecting lines formed by connecting the three corner points to one another;
acquiring an inclination angle of each connecting line relative to the horizontal direction and an inclination angle of each connecting line relative to the vertical direction;
and under the condition that the inclination angle of the connecting line relative to the horizontal direction or the inclination angle of the connecting line relative to the vertical direction is smaller than a preset threshold value, judging that the connecting line is the positioning connecting line.
According to the second aspect of the present invention, there is also provided a method for obtaining a test pattern in a projector definition test, including:
and setting corner point marks for positioning the corner points at the corner points of the checkerboard pattern to obtain the test pattern.
According to a third aspect of the present invention, there is also provided an apparatus for extracting a target image in a projector resolution test, comprising:
the system comprises a test image acquisition module, a projection module and a display module, wherein the test image acquisition module is used for acquiring a test image projected by a projector, the test image is a checkerboard image with squares inclined, and angular points in the test image are provided with preset angular point identifications;
the angular point acquisition module is used for acquiring the angular point according to the angular point identifier;
the positioning point acquisition module is used for equally dividing the test image into a plurality of areas and acquiring a positioning point which is corresponding to each area and is used for determining the position of the target image according to the angular point in each area;
and the target image acquisition module is used for acquiring a target image corresponding to each region according to the positioning point corresponding to each region and the preset contour size of the target image.
According to a fourth aspect of the present invention, there is also provided an electronic device, comprising the apparatus described in the apparatus embodiments of the present invention; alternatively, the electronic device includes:
a memory for storing executable commands;
and the processor is used for executing the method described by the embodiment of the method under the control of the executable command.
The method for extracting the target image in the projector definition test can quickly and accurately extract the target image, and is favorable for realizing the automation of the projector definition test.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a block diagram of a hardware configuration that can be used to implement the extraction method of a target image in a projector definition test according to an embodiment of the present invention.
Fig. 2 is a process flow diagram of a target image extraction method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a projector projected image.
FIG. 4 is a schematic diagram of a test image according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of a test image detail according to an embodiment of the invention.
Fig. 6 is a diagram illustrating the filtering result according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating test image segmentation results according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of a corner point selection result according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of determining a positioning connection according to an embodiment of the present invention.
Fig. 10 is a schematic diagram of a target image extraction result of a certain area according to an embodiment of the present invention.
Fig. 11 is a schematic diagram of an overall target image extraction result according to an embodiment of the present invention.
Fig. 12 is a schematic diagram of a target image extraction device according to an embodiment of the present invention.
FIG. 13 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
Fig. 1 is a block diagram of a hardware configuration of an electronic device that can be used to implement the method for extracting a target image in a projector definition test according to any of the embodiments of the present invention.
The electronic device 1000 may be a mobile phone, a laptop, a tablet computer, a palmtop computer, etc.
The electronic device 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, a speaker 1700, a microphone 1800, and so forth. The processor 1100 may be a central processing unit CPU, a microprocessor MCU, or the like. The memory 1200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 1300 includes, for example, a USB interface, a headphone interface, and the like. Communication device 1400 is capable of wired or wireless communication, for example. The display device 1500 is, for example, a liquid crystal display panel, a touch panel, or the like. The input device 1600 may include, for example, a touch screen, a keyboard, and the like. A user can input/output voice information through the speaker 1700 and the microphone 1800.
Although fig. 1 shows a plurality of devices for the electronic device 1000, the present invention may involve only some of them; for example, the electronic device 1000 may involve only the memory 1200 and the processor 1100.
In an embodiment of the present invention, the memory 1200 of the electronic device 1000 is used for storing instructions, and the instructions are used for controlling the processor 1100 to execute the method for extracting the target image in the projector definition test provided by the embodiment of the present invention.
In the above description, the skilled person will be able to design instructions in accordance with the disclosed solution. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
< method example >
Fig. 2 is a process flow diagram of a method for extracting a target image in a projector sharpness test according to an embodiment of the present invention. The method of extracting the target image is implemented by the electronic apparatus 1000 shown in fig. 1, for example.
As shown in fig. 2, the method of extracting the target image may include the following steps S2100 to S2400.
Step S2100 obtains a test image projected by the projector, where the test image is a checkerboard image with squares inclined, and corners in the test image have preset corner identifiers.
In one embodiment of the present invention, acquiring a test image projected by a projector includes: shooting a test image projected by a projector through an industrial camera; and denoising the shot test image.
Fig. 3 shows a schematic diagram of a projector projecting an image.
As shown in fig. 3, the projector projects an image onto the projection curtain ABCD, the projected image area is A1B1C1D1, and the plane of the projector lens is parallel to the plane of the projection curtain.
The industrial camera is installed in the tool fixture and used for shooting a projection image projected by the projector onto the projection curtain. The optical axis of the optical component of the industrial camera is perpendicular to the plane of the projection curtain.
The electronic equipment is used for controlling the industrial camera to take a picture and acquiring a projection image collected by the industrial camera for processing.
The industrial camera is used for shooting the image projected by the projector, and the shot picture is transmitted to the electronic equipment, so that the electronic equipment obtains the image projected by the projector.
In this embodiment, the test image is a checkerboard image, and the squares in the checkerboard image have a certain inclination angle with respect to the horizontal direction or the vertical direction. Such test images are useful for obtaining accurate sharpness test results.
The test image in this embodiment is a checkerboard image as shown in fig. 4. The checkerboard image shown in fig. 4 is a black-and-white checkerboard image, in which the black squares have a certain inclination angle with respect to the horizontal direction or the vertical direction, and the inclination directions of two adjacent rows or two adjacent columns of black squares are opposite, that is, the squares in the checkerboard image are alternately inclined.
Fig. 5 shows details of the checkerboard image in this embodiment. A1-A7 each represent a black square. The squares represented by A1, A3, A5 and A7 are rotated counterclockwise by a set angle with respect to the non-tilted position, and the squares represented by A2, A4 and A6 are rotated clockwise by the set angle. Two adjacent squares (e.g., the squares represented by A1 and A2) share a common vertex.
In one example, the set angle is in the range of 4 ° to 10 °, preferably 7 °.
In one example, the test image is a centrosymmetric pattern.
In one example, the pixel size of the test image is 4096 × 2160, so that the definition test requirement of the 4K projector can be satisfied.
In one example, the test image is in the form of a bitmap, which ensures that scaling of the test image does not change the related information, thereby adapting to different resolutions.
During denoising, each color channel of the test image can be filtered according to a preset threshold value so as to reduce interference, and candidate regions can be screened according to the number of pixels in each connected region of the test image, so that the influence of noise points is avoided.
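As an illustration only, the following is a minimal sketch of such denoising under the assumption of a Python/OpenCV environment; the function names, the per-channel threshold and the minimum region size are illustrative assumptions rather than values fixed by this embodiment.

```python
import cv2
import numpy as np

def suppress_channel_noise(img_bgr, threshold=10):
    """Zero out per-channel values below a preset threshold to reduce interference.

    The threshold value of 10 is an illustrative assumption.
    """
    out = img_bgr.copy()
    out[out < threshold] = 0
    return out

def screen_small_regions(mask, min_area=50):
    """Keep only connected regions whose pixel count is at least min_area,
    so that isolated noise points do not disturb later processing."""
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for label in range(1, num_labels):  # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == label] = 255
    return cleaned
```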
A corner point usually refers to an extreme point, i.e. a point where a property is particularly prominent in some respect, e.g. the intersection of two lines. The corner points in this embodiment are the vertices of squares in the checkerboard image.
In this embodiment, the corner points in the test image have preset corner point identifiers. For example, in the test image shown in fig. 4, square markers are provided at the corner points of the checkerboard. Setting such corner point identifiers allows the corner points to be acquired quickly and accurately.
Step S2200, acquiring the corner points according to the corner point marks.
In this embodiment, the corner point identifier may be a color identifier, that is, the color of the corner point identifier is different from the color of the checkerboard image, and the corner point is obtained according to the color information of the test image.
In one example, the background of the test image is white with an RGB value of (255, 255, 255), the squares in the test image are black with an RGB value of (0, 0, 0), and the corner point identifiers are small squares colored green with an RGB value of (0, 255, 0).
In this embodiment, the corner point identifier may also be a shape identifier, that is, the shape of the corner point identifier is different from the shape of the checkerboard image, and the corner point is obtained according to the shape information of the test image.
In one embodiment of the invention, the corner point identification is a color identification. Correspondingly, acquiring the corner according to the corner mark, comprising: and acquiring the corner points according to the colors of the corner point identifications.
In an embodiment of the present invention, obtaining a corner point according to the color of a corner point identifier includes: filtering the test image according to the color of the corner point identifiers to obtain an identifier region corresponding to each corner point identifier; and acquiring the corner points according to the center of gravity of each identifier region.
In one example, an auxiliary image is created whose pixels correspond one-to-one to the pixels in the test image captured by the camera; in the initial state all pixels of the auxiliary image are white. Each pixel in the captured test image is traversed, and its RGB values are checked against a predetermined condition, such as R < 10 and G > 170 and B < 10. For each pixel meeting the condition, the corresponding pixel in the auxiliary image is set to black. The result of this processing is shown in fig. 6. Each black area in fig. 6 is the identifier region corresponding to one corner point identifier. The center of gravity of each black area is computed, and its position is the position of the corresponding corner point.
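A minimal sketch of this corner extraction step is given below, assuming a Python/OpenCV environment and a BGR image as returned by cv2.imread; the colour condition follows the example above (R < 10, G > 170, B < 10), while the function name and the use of connected-component centroids for the centre of gravity are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_corner_points(test_img_bgr):
    """Locate corner points from the green corner point identifiers.

    Builds a mask of pixels satisfying the colour condition of the example
    and returns the centre of gravity of each identifier region as the
    corresponding corner point.
    """
    b, g, r = cv2.split(test_img_bgr)
    mask = ((r < 10) & (g > 170) & (b < 10)).astype(np.uint8) * 255

    # Each connected region of the mask is one identifier region; its centroid
    # (centre of gravity) is taken as the position of the corner point.
    num_labels, _, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    corners = []
    for label in range(1, num_labels):           # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] > 5:   # ignore stray noise pixels
            corners.append(tuple(centroids[label]))  # (x, y) in image coordinates
    return corners
```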
Step S2300, equally dividing the test image into a plurality of regions, and obtaining positioning points corresponding to each region for determining the position of the target image according to the corner points in each region.
In this embodiment, the test image is equally divided into a plurality of regions, and each region corresponds to one target image.
In one example, as shown in fig. 7, the test image is uniformly divided into 16 regions by three horizontal dashed lines and three vertical dashed lines. The 16 regions correspond to 16 target images.
In an embodiment of the present invention, acquiring, according to the corner points in each region, a positioning point corresponding to each region and used for determining the position of the target image includes: selecting three corner points closest to the center of each region; and acquiring positioning points for determining the position of the target image according to the three corner points selected in each region.
In one example, selecting three corner points in each region closest to the center of the region includes: acquiring a central point of each area; and obtaining three corner points closest to the center position of the region in each region according to the distance between the corner point in each region and the center point.
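To make this concrete, here is a short sketch under the same Python/NumPy assumptions; the 4 × 4 grid follows the 16-region example above, and the helper names are illustrative.

```python
import numpy as np

def region_centers(img_width, img_height, rows=4, cols=4):
    """Centers of the equally divided regions (4 x 4 = 16 regions in the example)."""
    cell_w, cell_h = img_width / cols, img_height / rows
    return [((j + 0.5) * cell_w, (i + 0.5) * cell_h)
            for i in range(rows) for j in range(cols)]

def three_nearest_corners(corners, center):
    """Pick the three detected corner points closest to a given region center."""
    pts = np.asarray(corners, dtype=float)
    dists = np.linalg.norm(pts - np.asarray(center, dtype=float), axis=1)
    return pts[np.argsort(dists)[:3]]
```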
In one embodiment of the invention, three corner points in each region of the test image, which are closest to the center position of the region, have preset corner point identifications. Correspondingly, selecting three corner points closest to the center position of each region, including: and acquiring three corner points closest to the center of the region according to the corner point identification in each region.
In this embodiment, corner point identifiers are only set at three corner points closest to the center of each region. It is easily understood that in this case, the three corner points closest to the center position of the region in each region are directly obtained from the corner point identification. The three corner points corresponding to a certain region are shown as P1, P2 and P3 in fig. 8.
In an embodiment of the present invention, acquiring a positioning point for determining a position of a target image according to three corner points selected in each region includes: acquiring a positioning connecting line for determining the position of a target image according to the inclination angle of a connecting line formed by three corner points selected in each region; and acquiring the midpoint of the positioning connecting line as a positioning point.
In an embodiment of the present invention, acquiring a positioning connection line for determining a position of a target image according to an inclination angle of a connection line formed by three corner points selected in each region includes: acquiring three connecting lines formed by mutually connecting three angular points; acquiring an inclination angle of each connecting line relative to the horizontal direction and an inclination angle of each connecting line relative to the vertical direction; and under the condition that the inclination angle of the connecting line relative to the horizontal direction or the inclination angle of the connecting line relative to the vertical direction is smaller than a preset threshold value, judging that the connecting line is a positioning connecting line.
As an example, referring to fig. 9, the points P1, P2 and P3 shown in fig. 9 correspond to the three corner points P1, P2 and P3 in fig. 8. In fig. 9, the three corner points P1, P2 and P3 are connected to form three connecting lines P1P2, P1P3 and P2P3. The dotted lines in fig. 9 indicate the horizontal and vertical directions. The inclination angle of each connecting line with respect to the horizontal direction and its inclination angle with respect to the vertical direction are calculated; for example, the inclination angle of the connecting line P1P2 with respect to the vertical direction is α1, and the inclination angle of the connecting line P2P3 with respect to the horizontal direction is α2. The inclination angle of each connecting line is compared with a preset threshold, for example 10°. It is easy to see that the inclination angle α1 of P1P2 with respect to the vertical direction is smaller than the preset threshold, the inclination angle α2 of P2P3 with respect to the horizontal direction is smaller than the preset threshold, and both inclination angles of the connecting line P1P3 are larger than the preset threshold, so that the connecting lines P1P2 and P2P3 are determined to be the positioning lines.
The midpoints of the connecting lines P1P2 and P2P3 are then obtained; these two midpoints are the two positioning points corresponding to the region shown in fig. 8.
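The angle test and midpoint computation of this example can be sketched as follows, under the same Python/NumPy assumptions; the 10° threshold is the one quoted above, and the function name is illustrative.

```python
import numpy as np

def positioning_points(p1, p2, p3, angle_threshold_deg=10.0):
    """Return the midpoints of the positioning lines among P1P2, P1P3 and P2P3.

    A connecting line counts as a positioning line when its inclination angle with
    respect to the horizontal or the vertical direction is below the threshold.
    """
    points = [np.asarray(p, dtype=float) for p in (p1, p2, p3)]
    midpoints = []
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        dx, dy = points[j] - points[i]
        angle_h = np.degrees(np.arctan2(abs(dy), abs(dx)))  # angle to the horizontal, 0..90
        angle_v = 90.0 - angle_h                             # angle to the vertical
        if min(angle_h, angle_v) < angle_threshold_deg:
            midpoints.append((points[i] + points[j]) / 2.0)  # midpoint = positioning point
    return midpoints
```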
Step S2400, obtaining a target image corresponding to each region according to the positioning point corresponding to each region and a preset target image contour size.
In this embodiment, each region in the test image corresponds to two positioning points, and the target image corresponding to each region includes two square regions centered on the two positioning points, respectively. The outline size (e.g., the side length of the square) of the square region is the outline size of the target image.
In this embodiment, the target image contour size may be represented as a proportion of the checkerboard square size, as a number of pixels, or as a geometric length of the target image contour.
Fig. 10 shows the target image corresponding to the region shown in fig. 8. O1 and O2 in fig. 10 are the two positioning points corresponding to the region, where O1 is the midpoint of the connecting line P1P2 and O2 is the midpoint of the connecting line P2P3. Assume that the contour size of the target image is expressed as w = 0.6 × d, where w is the side length of the square regions included in the target image and d is the side length of a checkerboard square.
According to the positioning points O1 and O2 and the contour size w of the target image, the two square regions shown by dashed boxes in fig. 10 can be obtained, and the union of the two square regions is the target image corresponding to the region shown in fig. 8.
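A sketch of this final cropping step, under the same Python/NumPy assumptions; the ratio w = 0.6 × d follows the example above, and the clamping to the image border is an added safeguard not spelled out in the embodiment.

```python
import numpy as np

def crop_target_squares(test_img, anchors, square_side_d, ratio=0.6):
    """Cut out a w x w square around each positioning point, with w = ratio * d."""
    w = int(round(ratio * square_side_d))
    half = w // 2
    img_h, img_w = test_img.shape[:2]
    crops = []
    for (x, y) in anchors:
        x0 = int(np.clip(round(x) - half, 0, img_w - w))
        y0 = int(np.clip(round(y) - half, 0, img_h - w))
        crops.append(test_img[y0:y0 + w, x0:x0 + w].copy())
    # The union of the cropped squares forms the target image for this region.
    return crops
```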
As shown in fig. 11, the union of the target images corresponding to all the regions constitutes the target image corresponding to the whole test image.
The definition of the projector can then be analyzed and calculated from the target image corresponding to the whole test image.
The method for extracting the target image in the projector definition test can quickly and accurately extract the target image, and is favorable for realizing the automation of the projector definition test.
A specific example of the target image extraction method provided by this embodiment is given below. The checkerboard image with inclined squares shown in fig. 4 is projected by the projector under test, the inclination angle of the squares being 7°, and in each of the sixteen equal regions the three corner points closest to the region center carry green corner point identifiers. The industrial camera photographs the image projected by the projector and sends the acquired image to the electronic device. The electronic device denoises the received image and filters it based on the RGB channels to obtain the corner points in the test image; the filtering result is shown in fig. 6. The electronic device then uniformly divides the test image into the 16 regions shown in fig. 7 and performs the following processing for each region:
Taking the region shown in fig. 8 as an example, the three corner points P1, P2 and P3 of the region and the connecting lines P1P2, P2P3 and P1P3 formed by them are obtained. The angles of the connecting lines P1P2, P2P3 and P1P3 with respect to the horizontal and vertical directions are calculated and compared with the preset threshold of 10°, so that the positioning lines are determined to be P1P2 and P2P3. The midpoints of P1P2 and P2P3 are taken as positioning points, and, combined with the preset contour size of the target image, namely the side length w of the square regions, the two square regions shown by dashed boxes in fig. 10 are obtained; the union of the two square regions serves as the target image corresponding to the region.
As shown in fig. 11, the union of the target images corresponding to all the regions is the target image corresponding to the whole test image.
The embodiment also provides a method for acquiring a test pattern in a projector definition test, which includes: and setting corner point identifiers for positioning corner points at the corner points of the checkerboard pattern to obtain the test pattern.
In one embodiment, the squares in the checkerboard pattern are alternately inclined, such as in the checkerboard pattern shown in fig. 4.
In one embodiment, the corner point identifier is a color identifier, for example a region whose color differs from the black-and-white colors of the checkerboard in fig. 4. The color identifier can have a set shape such as a square or a circle. In one example, the corner point identifier is green, with an RGB value of (0, 255, 0).
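The following sketch illustrates one way such a test pattern could be rendered, again assuming a Python/OpenCV environment. The canvas size, square size, 7° angle and marker size are taken from the examples above, but the layout is simplified and does not reproduce the exact geometry of fig. 5, where adjacent tilted squares share vertices and identifiers are placed only at the three corner points nearest each region center; all names are illustrative.

```python
import cv2
import numpy as np

def make_test_pattern(width=4096, height=2160, cell=200, tilt_deg=7, marker=12):
    """Checkerboard with alternately tilted black squares and green corner identifiers."""
    canvas = np.full((height, width, 3), 255, dtype=np.uint8)   # white background
    half = cell / 2.0
    rot = np.deg2rad(tilt_deg)
    base = np.array([[-half, -half], [half, -half], [half, half], [-half, half]])
    for row, cy in enumerate(range(cell, height, 2 * cell)):
        for col, cx in enumerate(range(cell, width, 2 * cell)):
            sign = 1 if (row + col) % 2 == 0 else -1            # alternate the tilt direction
            c, s = np.cos(sign * rot), np.sin(sign * rot)
            square = base @ np.array([[c, -s], [s, c]]).T + np.array([cx, cy])
            cv2.fillPoly(canvas, [square.astype(np.int32)], color=(0, 0, 0))
            # Green square identifiers at the vertices of each tilted black square (BGR green).
            for vx, vy in square:
                cv2.rectangle(canvas,
                              (int(vx) - marker // 2, int(vy) - marker // 2),
                              (int(vx) + marker // 2, int(vy) + marker // 2),
                              color=(0, 255, 0), thickness=-1)
    return canvas
```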
After the test pattern is obtained by the above method, the test pattern may be applied to a projector definition test, that is, the above steps S2100 to S2400 are performed, where the test image is an image of the projected test pattern.
< apparatus embodiment >
The present embodiment provides an apparatus for extracting a target image in a projector resolution test, which is, for example, the target image extracting apparatus 120 shown in fig. 12. Referring to fig. 12, the target image extracting apparatus 120 includes a test image acquiring module 121, a corner acquiring module 122, an anchor point acquiring module 123, and a target image acquiring module 124.
The test image obtaining module 121 is configured to obtain a test image projected by the projector, where the test image is a checkerboard image with squares inclined, and corners in the test image have preset corner identifiers.
The corner point obtaining module 122 is configured to obtain a corner point according to the corner point identifier.
The positioning point obtaining module 123 is configured to equally divide the test image into a plurality of regions, and obtain a positioning point corresponding to each region and used for determining the position of the target image according to the corner point in each region.
The target image obtaining module 124 is configured to obtain a target image corresponding to each region according to the positioning point corresponding to each region and a preset target image contour size.
In an embodiment of the present invention, the corner in the test image has a preset corner identifier, including: the corner points in the test image have corner point identifications colored differently from the colors of the squares in the checkerboard image. The corner point obtaining module 122, when obtaining a corner point according to the corner point identifier, is further configured to: and acquiring the corner points according to the colors of the corner point marks.
In an embodiment of the present invention, when the corner point obtaining module 122 obtains a corner point according to a color identified by the corner point, the corner point obtaining module is further configured to: filtering the test image according to the color of the corner mark to obtain a mark area corresponding to each corner mark; and acquiring angular points according to the gravity center of each identification area.
In an embodiment of the present invention, when the anchor point obtaining module 123 obtains an anchor point corresponding to each region and used for determining the position of the target image according to the corner point in each region, the anchor point obtaining module is further configured to: selecting three corner points closest to the center of each region; and acquiring positioning points for determining the position of the target image according to the three corner points selected in each region.
In an embodiment of the present invention, the corner in the test image has a preset corner identifier, including: and three corner points closest to the center of the region in each region of the test image have preset corner point identification. The positioning point obtaining module 123, when selecting three corner points in each region that are closest to the center of the region, is further configured to: and acquiring three corner points closest to the center of the region according to the corner point identification in each region.
In an embodiment of the present invention, when the anchor point obtaining module 123 obtains an anchor point for determining the position of the target image according to three corner points selected in each region, the anchor point obtaining module is further configured to: acquiring a positioning connecting line for determining the position of a target image according to the inclination angle of a connecting line formed by three corner points selected in each region; and acquiring the midpoint of the positioning connecting line as a positioning point.
In an embodiment of the present invention, the positioning point obtaining module 123, when obtaining a positioning connection line for determining the position of the target image according to an inclination angle of a connection line formed by three corner points selected in each region, is further configured to: acquiring three connecting lines formed by mutually connecting three angular points; acquiring an inclination angle of each connecting line relative to the horizontal direction and an inclination angle of each connecting line relative to the vertical direction; and under the condition that the inclination angle of the connecting line relative to the horizontal direction or the inclination angle of the connecting line relative to the vertical direction is smaller than a preset threshold value, judging that the connecting line is a positioning connecting line.
< electronic device embodiment >
This embodiment provides an electronic device comprising the apparatus described in the apparatus embodiment of the present invention. Alternatively, the electronic device is, for example, the electronic device 130 shown in fig. 13.
Referring to fig. 13, the electronic device 130 includes a memory 131 and a processor 132.
The memory 131 is used to store executable commands.
Processor 132 is configured to execute the methods described in the method embodiments of the present invention under the control of executable commands.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be interpreted as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or an electrical signal transmitted through an electrical wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C + + or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (6)

1. A method for extracting a target image in a projector definition test comprises the following steps:
acquiring a test image projected by a projector, wherein the test image is a checkerboard image with inclined checks, and the corner in the test image has a preset corner mark;
acquiring the corner according to the corner mark;
equally dividing the test image into a plurality of regions, and acquiring positioning points corresponding to each region and used for determining the position of a target image according to the angular points in each region;
acquiring a target image corresponding to each region according to the positioning point corresponding to each region and the preset contour size of the target image,
wherein, the obtaining of the positioning point corresponding to each region and used for determining the position of the target image according to the corner points in each region comprises:
selecting three corner points closest to the center of each region;
acquiring positioning points for determining the position of the target image according to the three corner points selected in each region,
wherein, the acquiring a positioning point for determining the position of the target image according to the three corner points selected in each region comprises:
acquiring a positioning connecting line for determining the position of a target image according to the inclination angle of a connecting line formed by three corner points selected in each region;
acquiring the midpoint of the positioning connecting line as the positioning point,
wherein, the obtaining of the positioning connecting line for determining the position of the target image according to the inclination angle of the connecting line formed by the three corner points selected in each region comprises:
acquiring three connecting lines formed by mutually connecting the three angular points;
acquiring an inclination angle of each connecting line relative to the horizontal direction and an inclination angle of each connecting line relative to the vertical direction;
and under the condition that the inclination angle of the connecting line relative to the horizontal direction or the inclination angle of the connecting line relative to the vertical direction is smaller than a preset threshold value, judging that the connecting line is the positioning connecting line.
2. The method of claim 1, wherein the corner points in the test image have a preset corner point identification, comprising:
the corner points in the test image have corner point identifications with colors different from the colors of the squares in the checkerboard image;
the acquiring the corner according to the corner identifier includes:
and acquiring the corner points according to the colors of the corner point marks.
3. The method according to claim 2, wherein said obtaining the corner points according to the color of the corner point identifier comprises:
filtering the test image according to the color of the corner mark to obtain a mark area corresponding to each corner mark;
and obtaining the corner points according to the gravity center of each identification area.
4. The method of claim 1, wherein the corner points in the test image have a preset corner point identification, comprising:
three corner points in each region of the test image, which are closest to the center position of the region, are provided with preset corner point identifications;
the selecting three corner points closest to the center of each region includes:
and acquiring the three corner points closest to the center of the region according to the corner point identification in each region.
5. An extraction apparatus of a target image in a projector resolution test, comprising:
the system comprises a test image acquisition module, a projection module and a display module, wherein the test image acquisition module is used for acquiring a test image projected by a projector, the test image is a checkerboard image with squares inclined, and angular points in the test image are provided with preset angular point identifications;
the angular point acquisition module is used for acquiring the angular point according to the angular point identifier;
the positioning point acquisition module is used for equally dividing the test image into a plurality of areas and acquiring a positioning point which is corresponding to each area and is used for determining the position of the target image according to the angular point in each area;
a target image obtaining module for obtaining a target image corresponding to each of the regions according to a positioning point corresponding to each of the regions and a preset contour size of the target image,
the anchor point obtaining module is further configured to, when obtaining an anchor point corresponding to each of the regions and used for determining a position of a target image according to the corner point in each of the regions:
selecting three corner points closest to the center of each region;
acquiring positioning points for determining the position of the target image according to the three corner points selected in each region,
the positioning point obtaining module is further configured to, when obtaining a positioning point for determining a position of a target image according to the three corner points selected in each of the regions:
acquiring a positioning connecting line for determining the position of a target image according to the inclination angle of a connecting line formed by three corner points selected in each region;
acquiring the midpoint of the positioning connecting line as the positioning point,
the positioning point obtaining module is further configured to, when obtaining a positioning connection line for determining a position of the target image according to an inclination angle of a connection line formed by three corner points selected in each of the regions:
acquiring three connecting lines formed by mutually connecting the three angular points;
acquiring an inclination angle of each connecting line relative to the horizontal direction and an inclination angle of each connecting line relative to the vertical direction;
and under the condition that the inclination angle of the connecting line relative to the horizontal direction or the inclination angle of the connecting line relative to the vertical direction is smaller than a preset threshold value, judging that the connecting line is the positioning connecting line.
6. An electronic device comprising the apparatus of claim 5; alternatively, the electronic device includes:
a memory for storing executable commands;
a processor for performing the method of any one of claims 1-4 under the control of executable commands.
CN201910950484.9A 2019-10-08 2019-10-08 Method and device for extracting target image in projector definition test Active CN110827289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910950484.9A CN110827289B (en) 2019-10-08 2019-10-08 Method and device for extracting target image in projector definition test

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910950484.9A CN110827289B (en) 2019-10-08 2019-10-08 Method and device for extracting target image in projector definition test

Publications (2)

Publication Number Publication Date
CN110827289A CN110827289A (en) 2020-02-21
CN110827289B true CN110827289B (en) 2022-06-14

Family

ID=69548684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910950484.9A Active CN110827289B (en) 2019-10-08 2019-10-08 Method and device for extracting target image in projector definition test

Country Status (1)

Country Link
CN (1) CN110827289B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104202547A (en) * 2014-08-27 2014-12-10 广东威创视讯科技股份有限公司 Method for extracting target object in projection picture, projection interaction method and system thereof
CN105301884A (en) * 2015-11-13 2016-02-03 神画科技(深圳)有限公司 Method and system for automatic focusing on multi-point reference image recognition
WO2017122500A1 (en) * 2016-01-13 2017-07-20 株式会社リコー Projection system, image processing device, projection method, and program
CN108074237A (en) * 2017-12-28 2018-05-25 广东欧珀移动通信有限公司 Approach for detecting image sharpness, device, storage medium and electronic equipment
CN108734743A (en) * 2018-04-13 2018-11-02 深圳市商汤科技有限公司 Method, apparatus, medium and electronic equipment for demarcating photographic device
CN110087049A (en) * 2019-05-27 2019-08-02 广州市讯码通讯科技有限公司 Automatic focusing system, method and projector
CN110177264A (en) * 2019-06-03 2019-08-27 歌尔股份有限公司 Clarity detection method and detection device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on an intelligent projection system based on an adaptive projection-image correction method; Zhu Bo; China Doctoral Dissertations Full-text Database; 2014-06-15 (No. 06); full text *

Also Published As

Publication number Publication date
CN110827289A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN110827288B (en) Method and device for extracting target image in projector definition test
KR101121034B1 (en) System and method for obtaining camera parameters from multiple images and computer program products thereof
CN110717920B (en) Method and device for extracting target image of projector galvanometer test and electronic equipment
CN112272292B (en) Projection correction method, apparatus and storage medium
CN113365041A (en) Projection correction method, projection correction device, storage medium and electronic equipment
US10931933B2 (en) Calibration guidance system and operation method of a calibration guidance system
US10148944B2 (en) Calibration method of an image capture system
US9030553B2 (en) Projector image correction device and method
CN110351540B (en) Method and device for extracting image of vibrating mirror test unit of projector and electronic equipment
US10469812B2 (en) Projection display system, information processing apparatus, information processing method, and storage medium therefor
US20180124378A1 (en) Enhanced depth map images for mobile devices
US9196051B2 (en) Electronic equipment with image analysis function and related method
JP2014197243A (en) Pattern processor, pattern processing method and pattern processing program
US9336607B1 (en) Automatic identification of projection surfaces
CN111031311A (en) Imaging quality detection method and device, electronic equipment and readable storage medium
US20190394435A1 (en) Apparatus and method for image processing and storage medium storing program for the same
CN110809141A (en) Trapezoidal correction method and device, projector and storage medium
WO2017179111A1 (en) Display system and information processing method
CN110827289B (en) Method and device for extracting target image in projector definition test
CN110769225B (en) Projection area obtaining method based on curtain and projection device
WO2021145913A1 (en) Estimating depth based on iris size
US9581439B1 (en) Image capture device with a calibration function and calibration method of an image capture device
CN113810673B (en) Projector uniformity testing method and device and computer readable storage medium
CN112261394A (en) Method, device and system for measuring deflection rate of galvanometer and computer storage medium
KR101997743B1 (en) Apparatus and method for implementing an invisible effect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201012

Address after: 261031 north of Yuqing street, east of Dongming Road, high tech Zone, Weifang City, Shandong Province (Room 502, Geer electronic office building)

Applicant after: GoerTek Optical Technology Co.,Ltd.

Address before: 261031 Dongfang Road, Weifang high tech Development Zone, Shandong, China, No. 268

Applicant before: GOERTEK Inc.

GR01 Patent grant