CN107885180B - Test apparatus and test method - Google Patents
- Publication number
- Publication number: CN107885180B (application CN201610875978.1A)
- Authority
- CN
- China
- Prior art keywords
- test
- image
- human
- machine interface
- interest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B23/00—Testing or monitoring of control systems or parts thereof
- G05B23/02—Electric testing or monitoring
- G05B23/0205—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
- G05B23/0208—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the configuration of the monitoring system
- G05B23/0213—Modular or universal configuration of the monitoring system, e.g. monitoring system having modules that may be combined to build monitoring program; monitoring system that can be applied to legacy systems; adaptable monitoring system; using different communication protocols
- G05B23/0216—Human interface functionality, e.g. monitoring system providing help to the user in the selection of tests or in its configuration
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a test apparatus and a test method. The test apparatus includes: an image acquisition device configured to acquire an image of a human-machine interface as a test object; an image processing device configured to process the image acquired by the image acquisition device to obtain information related to the test; a result determination device configured to determine a result of the test based on the information related to the test obtained by the image processing device. Therefore, the test efficiency can be improved, and errors in manually determining the test result can be reduced or prevented.
Description
Technical Field
The invention relates to a test apparatus and a test method.
Background
When testing industrial equipment, for example a human-machine interface such as the dashboard of a numerical control machine tool, an operator needs to operate the human-machine interface as the test object according to a predetermined scheme and then determine whether the actual output of the test object is consistent with an expected output. When the actual output is consistent with the expected output, the test is determined to have passed; when the actual output is inconsistent with the expected output, the test is determined to have failed.
For example, when testing a human-machine interface of an industrial device such as a numerical control machine tool, a drive, etc., an operator needs to perform a plurality of tests and, after each test is completed, view the information or image displayed by the human-machine interface and compare it with the information or image expected to be displayed in order to determine whether the test is passed. Testing the human-machine interface in this way is inefficient, and such manual comparison may introduce errors into the process.
Disclosure of Invention
An object of the exemplary embodiments is to address at least the above and/or other technical problems and to provide a test apparatus and a test method that can improve test efficiency and reduce or prevent a test result determination error.
In one embodiment, a test apparatus may be provided, the test apparatus comprising: an image acquisition device configured to acquire an image of a human-machine interface; an image processing device configured to process the image acquired by the image acquisition device to obtain information related to the test; a result determination device configured to determine a result of the test based on the information related to the test obtained by the image processing device. Accordingly, the test result may be automatically determined, so that test efficiency may be improved and errors caused by manually determining the test result may be reduced or prevented.
The image acquisition device comprises a camera for taking pictures of the human-machine interface. Therefore, the test apparatus can have a simple structure. In addition, the camera may be fixed relative to the human-machine interface, thereby leaving the human-machine interface in a fixed position in the photograph to facilitate further processing of the photograph at a later time.
The image processing apparatus includes: a region-of-interest determination unit configured to determine at least one region of interest showing information related to the test in the image acquired by the image acquisition means; an information identification unit configured to identify information related to the test from the at least one region of interest, respectively. Accordingly, it is possible to selectively perform recognition in the image to obtain desired information.
The image processing apparatus further includes an image segmentation unit configured to divide the image acquired by the image acquisition device into at least one sub-image respectively corresponding to the at least one region of interest, wherein each sub-image includes the portion of the acquired image that is located in the corresponding region of interest, and wherein the information identification unit identifies the test-related information from the at least one sub-image, respectively. Accordingly, image recognition can be performed only on the sub-images, so that the efficiency and accuracy of image recognition can be improved.
The image acquisition device is configured to acquire a pre-test image of the human-machine interface before the test is performed, and the region-of-interest determination unit is configured to determine a region of interest in the pre-test image and obtain position information of the region of interest in the pre-test image. The image acquisition device is further configured to acquire a post-test image of the human-machine interface after the test is completed, and the image segmentation unit is configured to divide the post-test image into sub-images corresponding to the region of interest according to the position information of the region of interest in the pre-test image. Therefore, the position information of the region of interest can be obtained before the test and reused after the test ends, thereby shortening the time required to confirm the test result.
The test related information includes information displayed by the human machine interface when the human machine interface completes the test. The result determination means is configured to compare the test-related information with an expected output of the human-machine interface, the result determination means determining that the test is passed when the test-related information coincides with the expected output of the human-machine interface, and the result determination means determining that the test is not passed when the test-related information differs from the expected output of the human-machine interface. Thus, it is possible to automatically determine whether the test is passed.
The test apparatus further comprises: a test control device configured to provide a test protocol to the human-machine interface to cause the human-machine interface to operate in accordance with the test protocol to test the human-machine interface, wherein the test protocol includes an expected output of the human-machine interface.
In another embodiment, a test method may be provided, the method comprising: acquiring an image of a human-machine interface; processing the acquired image to obtain information related to the test; and determining the test result based on the obtained information related to the test. Accordingly, the test result may be automatically determined, so that test efficiency may be improved and errors caused by manually determining the test result may be reduced or prevented.
The step of acquiring the image of the human-machine interface includes taking a picture of the human-machine interface. Therefore, the test apparatus can be made to have a simple structure. In addition, the camera used for taking the picture may be fixed relative to the human-machine interface, so that the human-machine interface is in a fixed position in the picture, which facilitates further processing of the picture later.
The step of obtaining information related to the test includes: determining at least one region of interest showing test-related information in the acquired image; and identifying information related to the test from the at least one region of interest, respectively. Accordingly, it is possible to selectively perform recognition in the image to obtain the desired information.
The step of obtaining information related to the test further includes dividing the acquired image into at least one sub-image respectively corresponding to the at least one region of interest, wherein each sub-image includes the portion of the acquired image that is located in the corresponding region of interest, and the step of identifying information related to the test includes identifying information related to the test from the at least one sub-image, respectively. Accordingly, image recognition can be performed only on the sub-images, so that the efficiency and accuracy of image recognition can be improved.
The step of acquiring the image of the human-machine interface includes acquiring a pre-test image of the human-machine interface before the test is performed, and the step of determining at least one region of interest includes determining the region of interest in the pre-test image and obtaining position information of the region of interest in the pre-test image. The step of acquiring the image of the human-machine interface further includes acquiring a post-test image of the human-machine interface after the test is completed, and the step of dividing the image into at least one sub-image includes dividing the post-test image into sub-images corresponding to the region of interest according to the position information of the region of interest in the pre-test image. Therefore, the position information of the region of interest can be obtained before the test and reused after the test ends, thereby shortening the time required to confirm the test result.
The test related information includes information displayed by the human machine interface when the human machine interface completes the test. The step of determining the test result comprises: the test related information is compared with an expected output of the human-machine interface, and when the test related information is consistent with the expected output of the human-machine interface, the test is determined to be passed, and when the test related information is different from the expected output of the human-machine interface, the test is determined to be failed.
In one embodiment, the method further comprises providing a test protocol to the human-machine interface to cause the human-machine interface to operate according to the test protocol so as to test the human-machine interface, wherein the test protocol includes an expected output of the human-machine interface.
Drawings
The drawings are only for purposes of illustrating and explaining the present invention and are not to be construed as limiting the scope of the present invention, wherein,
FIG. 1 is a schematic diagram illustrating a test apparatus according to an exemplary embodiment;
FIG. 2 is a pre-test image, acquired by an image acquisition device according to an exemplary embodiment, showing a human-machine interface as a test object;
FIG. 3 is a post-test image, acquired by an image acquisition device according to an exemplary embodiment, showing a human-machine interface as a test object;
fig. 4 is a schematic block diagram showing an image processing apparatus according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating a testing method according to an example embodiment.
Description of reference numerals:
110: image acquisition device
130: image processing apparatus
150: result determination device
170: test control device
131: region of interest determination unit
133: information recognition unit
135: image segmentation unit
Detailed Description
In order to more clearly understand the technical features, objects, and effects of the present invention, embodiments of the present invention will now be described with reference to the accompanying drawings.
FIG. 1 is a schematic diagram illustrating a test apparatus according to an exemplary embodiment.
As shown in fig. 1, the test apparatus according to an exemplary embodiment may include an image acquisition device 110, an image processing device 130, and a result determination device 150.
The image acquisition device 110 may acquire an image of the test object. Here, the test object may be a human-machine interface of industrial equipment, such as an instrument panel.
In one embodiment, the image acquisition apparatus 110 may include a camera for taking a picture of a human-machine interface as a test object, and may be provided separately from the human-machine interface. For example, the camera 110 may be arranged to be located at a relatively fixed position to the human-machine interface, so that the position of the human-machine interface in a picture taken by the camera 110 may be ensured to be relatively fixed.
Fig. 2 and 3 are a pre-test image and a post-test image, respectively, showing a human-machine interface as a test object acquired by an image acquisition apparatus according to an exemplary embodiment. As shown in fig. 2 and 3, the position of the human-machine interface in the image before the test (fig. 2) and the position in the image after the test (fig. 3) may be the same.
After the image of the human-machine interface is acquired by the image acquisition device 110, the image processing device 130 may receive the acquired image and may process the acquired image to obtain information related to the test. For example, the image acquisition apparatus 110 may communicate with the image processing apparatus 130 in a wired or wireless manner to transmit the acquired image to the image processing apparatus 130. The image processing apparatus 130 may be a computing device having an image processing function, such as a desktop computer, a laptop computer, a tablet computer, a smart phone, a personal digital assistant, or the like; or the image processing apparatus 130 may be a processing unit included in the computing device, for example, a central processing unit, an image processor, or the like; or the image processing apparatus 130 may be implemented by software that implements an image processing function.
Fig. 4 is a schematic block diagram illustrating an image processing apparatus according to an exemplary embodiment.
As shown in fig. 4, the image processing apparatus 130 may include a region-of-interest determining unit 131 and an information identifying unit 133. The region of interest determination unit 131 may be configured to determine a region of interest in the image acquired by the image acquisition apparatus 110. Here, the region of interest may be a portion of the acquired image that displays test-related information. The test related information may include information displayed by the human machine interface when the human machine interface completes the test, as will be described in more detail below.
In an exemplary embodiment, the region of interest determination unit 131 may determine the region of interest in, for example, a pre-test image (fig. 2) or a post-test image (fig. 3). The region of interest may be determined manually or automatically. For example, the region of interest determination unit 131 may determine the region of interest according to an input of an operator. In such an example, the operator may view the image before the test (fig. 2) or the image after the test (fig. 3) to determine the portion of the image in which the information related to the test is displayed. The area of interest may then be marked manually in the pre-test image (fig. 2) or the post-test image (fig. 3) by another input device such as a mouse, keyboard, touch screen, stylus, etc. Alternatively, the region-of-interest determining unit 131 may automatically determine the region of interest according to a preset condition (for example, position information of test-related information input in advance in the image).
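As a minimal illustration (not the patent's implementation), the position information recorded for a region of interest marked by the operator could be a simple rectangle in pixel coordinates. All names below are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegionOfInterest:
    """Position information of a region of interest, in pixel coordinates."""
    x: int       # left edge
    y: int       # top edge
    width: int
    height: int

def mark_region(x, y, width, height):
    """Record a region marked by the operator (e.g. by a mouse drag),
    clamping negative sizes to zero."""
    return RegionOfInterest(x, y, max(0, width), max(0, height))

# The operator might mark the X-coordinate readout on the dashboard image:
roi = mark_region(40, 25, 120, 30)
```

The same record could equally be produced automatically from preset conditions, as the paragraph above describes.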
Three regions of interest are shown in each of fig. 2 and 3. That is, in fig. 2 the regions of interest are the portions showing 0.000, 2.000 and the blank area before "rpm", respectively, and in fig. 3 they are the portions showing 0.120, 0.040 and 410 before "rpm", respectively, where 0.000, 2.000, the blank area, 0.120, 0.040 and 410 displayed by the human-machine interface may be the test-related information. However, the embodiment is not limited thereto, and the at least one region of interest may be determined by the region of interest determination unit 131 either manually or automatically.
The information identifying unit 133 may identify information related to the test from the regions of interest, respectively. For example, the information recognizing unit 133 may perform recognition according to an image recognition algorithm such as a K-means algorithm to recognize information such as letters, numbers, and colors from an image. For example, the information identifying unit 133 identifies 0.000, 2.000, and "blank" as the test-related information from the three regions of interest in fig. 2, respectively. Here, 0.000 and 2.000 may be values of X and Z coordinates, respectively, and "blank" may mean that the rpm is 0. For example, the information identifying unit 133 identifies 0.120, 0.040 and 410 as test-related information from the three regions of interest in fig. 3, respectively. Here, 0.120 and 0.040 may be values of X coordinate and Z coordinate, respectively, and 410 may indicate that the rpm is 410 rpm.
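The paragraph above mentions a K-means-style algorithm. As a simplified sketch of that idea (full character recognition would additionally need an OCR step, which the patent leaves open), grayscale intensities of a region can be clustered into two groups to separate digit strokes from the dashboard background; all names and sample values are illustrative:

```python
def two_means_threshold(pixels, iterations=10):
    """K-means with K=2 on grayscale intensities: return the midpoint
    between the two cluster centers as a binarization threshold."""
    lo, hi = min(pixels), max(pixels)
    c0, c1 = float(lo), float(hi)          # initial centers at the extremes
    for _ in range(iterations):
        g0 = [p for p in pixels if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in pixels if abs(p - c0) > abs(p - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return (c0 + c1) / 2

def binarize(pixels, threshold):
    """Mark 1 where the pixel is darker than the threshold
    (a candidate digit stroke on a bright background), else 0."""
    return [1 if p < threshold else 0 for p in pixels]

# Dark digit pixels (~20) on a bright dashboard background (~230):
sample = [20, 25, 18, 230, 235, 228, 22, 231]
t = two_means_threshold(sample)
mask = binarize(sample, t)
```

The resulting mask is what a subsequent shape- or template-matching step would consume to read off letters and digits.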
In the above-described embodiment, the information identifying unit 133 may directly identify the region of interest determined by the region of interest determining unit 131 in the acquired image to obtain the test-related information. However, the exemplary embodiment is not limited thereto, and in another embodiment, the image processing apparatus 130 may further include an image segmentation unit 135, as shown in fig. 4. The image segmentation unit 135 may divide the image acquired by the image acquisition device 110 into at least one sub-image respectively corresponding to the regions of interest determined by the region of interest determination unit 131. In other words, the image segmentation unit 135 may divide the image acquired by the image acquisition device 110 into one or more sub-images according to the regions of interest determined by the region of interest determination unit 131. Each sub-image may comprise the portion of the image acquired by the image acquisition device 110 that is located in the corresponding region of interest. In this embodiment, when the image segmentation unit 135 divides the acquired image into one or more sub-images, the information identification unit 133 may identify information related to the test from the sub-images, respectively.
In one exemplary embodiment, the region-of-interest determining unit 131 of the image processing apparatus 130 may determine the region of interest from the pre-test image (fig. 2), and store information related to the determined region of interest (e.g., position information of the region of interest in the pre-test image) in a memory (not shown). Then, when the test is ended, the information identifying unit 133 of the image processing apparatus 130 may obtain information related to the test from the post-test image (fig. 3) based on such position information. For example, because the position of the human-machine interface in the picture taken by the camera 110 may be relatively fixed, the image segmentation unit 135 may read the position information of the region of interest stored in the memory in the image before the test (fig. 2), and may divide the image after the test (fig. 3) into sub-images corresponding to the region of interest according to the read position information, and then the information recognition unit 133 may recognize the sub-images to obtain the information related to the test.
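Because the camera position is fixed, the position information obtained from the pre-test image can be stored and later applied unchanged to the post-test image. A sketch under that assumption (the region positions and pixel values here are illustrative):

```python
def crop(image, roi):
    """Cut the sub-image covered by roi out of image, where image is a list
    of pixel rows and roi is an (x, y, width, height) tuple."""
    x, y, w, h = roi
    return [row[x:x + w] for row in image[y:y + h]]

# Phase 1: before the test, determine and store region positions.
stored_regions = {"X": (0, 0, 2, 1), "rpm": (2, 1, 2, 1)}

# Phase 2: after the test, segment the post-test image using the
# stored positions, producing one sub-image per region of interest.
post_test_image = [["0", ".", "1", "2"],
                   ["4", "1", "0", " "]]
sub_images = {name: crop(post_test_image, roi)
              for name, roi in stored_regions.items()}
```

Only these small sub-images then need to be passed to the recognition step, which is what shortens the time needed to confirm the test result.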
Referring back to fig. 1, when the test-related information is obtained, the result determination device 150 may determine the test result from the test-related information. The result determination device 150 may be integrated with the image processing apparatus 130, i.e. implemented by the same computing device or processing unit, or it may be a separate computing device or processing unit. In another embodiment, the result determination device 150 may be implemented by software performing the corresponding functions. For example, the result determination device 150 may compare the information related to the test with the expected output of the human-machine interface. The result determination device 150 may determine that the human-machine interface passes the test when the information related to the test is consistent with the expected output of the human-machine interface, and may determine that the human-machine interface fails the test when the information related to the test differs from the expected output of the human-machine interface.
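The comparison performed by the result determination device can be sketched as a field-by-field match between the recognized information and the expected output (illustrative names; the patent does not prescribe a data format):

```python
def determine_result(recognized, expected):
    """Return (passed, mismatches): passed is True only when every
    recognized field equals the corresponding expected field."""
    mismatches = {name: (value, expected.get(name))
                  for name, value in recognized.items()
                  if value != expected.get(name)}
    return (not mismatches, mismatches)

# Values recognized from the post-test image (cf. fig. 3) vs. the expected output:
passed, diffs = determine_result(
    {"X": "0.120", "Z": "0.040", "rpm": "410"},
    {"X": "0.120", "Z": "0.040", "rpm": "410"},
)
# A failing case, where the spindle speed differs from expectation:
failed, bad = determine_result(
    {"X": "0.120", "Z": "0.040", "rpm": "410"},
    {"X": "0.120", "Z": "0.040", "rpm": "400"},
)
```

Returning the mismatching fields, not just a boolean, makes it easy to report why a test failed.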
In general, the expected output of the human-machine interface is the output that the user, in testing the human-machine interface, expects it to provide. The expected output may be included in a test protocol preset by the user. The test protocol may also include various parameters, instructions, commands, etc. that cause the human-machine interface to perform specific operations for testing purposes. To operate the human-machine interface according to such a test protocol, the test apparatus according to an exemplary embodiment may further include a test control device 170, as shown in fig. 1. The test control device 170 may provide the test protocol preset by the user to the human-machine interface, so that the human-machine interface operates according to the test protocol and is thereby tested. Here, the test control device 170 may be implemented by a separate or integrated computing device or processing unit, or by software, similarly to the image processing apparatus 130 and the result determination device 150.
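One way such a test protocol could be structured (purely illustrative; the patent does not specify a format) is as a bundle of the commands that drive the interface plus the output the interface is expected to display afterwards:

```python
protocol = {
    "name": "jog-axes-and-run-spindle",
    "commands": ["jog X 0.120", "jog Z 0.040", "spindle 410"],
    "expected_output": {"X": "0.120", "Z": "0.040", "rpm": "410"},
}

def run_protocol(send_command, protocol):
    """Drive the human-machine interface by issuing each command in order.
    send_command is whatever transport reaches the device under test.
    Returns the expected output against which the result is later compared."""
    for command in protocol["commands"]:
        send_command(command)
    return protocol["expected_output"]

# With a stub transport that merely records the commands it is given:
sent = []
expected = run_protocol(sent.append, protocol)
```

Keeping the expected output inside the protocol is what lets the result determination step run without operator involvement.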
According to an exemplary embodiment, the human-machine interface can be controlled by the test apparatus to be tested according to a specific test protocol, and whether the human-machine interface passes the test can be determined automatically. Therefore, test efficiency can be greatly improved, and errors caused by manually determining test results can be prevented or reduced. Furthermore, the region of interest may be determined in advance; for example, the position information of the region of interest may be determined from the pre-test image of the human-machine interface and stored, and the stored position information may then be applied to the post-test images of one or more subsequent tests, so that image recognition can be performed more efficiently and the test result determined. The time taken for, for example, a plurality of tests can thus be shortened.
FIG. 5 is a flowchart illustrating a test method according to an exemplary embodiment. As shown in fig. 5, in step S510, an image of a human-machine interface may be acquired. For example, a picture of the human-machine interface as a test object may be taken, as shown in fig. 2 and 3. The test method shown in fig. 5 may be performed by the test apparatus described in the above embodiments; for example, the picture of the human-machine interface may be taken by an image acquisition device (e.g., a camera). The same or similar features are therefore not described again below.
The acquired image may then be processed to obtain test-related information in step S530. For example, at least one region of interest in the acquired image showing information related to the test may be determined, and information related to the test may be identified from the at least one region of interest, respectively. In one embodiment, the acquired image may be divided into at least one sub-image respectively corresponding to the at least one region of interest, and then test-related information may be identified from the sub-images respectively.
In another embodiment, a pre-test image of the human-machine interface may first be acquired before the test is performed, so that the region of interest may be determined in the pre-test image and the position information of the region of interest in the pre-test image may be obtained. Such position information may be stored.
Here, although not shown in fig. 5, the test method according to another embodiment may further include controlling the human-machine interface to perform the test. For example, a test protocol preset by the user may be provided to the human-machine interface, so that the human-machine interface operates according to the test protocol and is thereby tested. When the test is finished, a post-test image of the human-machine interface may be acquired, and the post-test image may be divided into sub-images corresponding to the region of interest according to the position information of the region of interest in the pre-test image. Information related to the test may then be identified from the sub-images, respectively.
When the test-related information has been obtained, a test result may be determined from it in step S550. For example, the test-related information may be compared with the expected output of the human-machine interface; the test is determined to have passed when the test-related information is consistent with the expected output, and to have failed when the test-related information differs from the expected output.
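Steps S510, S530 and S550 can be strung together as one pipeline. The sketch below uses stub functions for acquisition and recognition, since the patent leaves both open; every name is illustrative:

```python
def run_test_method(acquire_image, recognize, regions, expected_output):
    """S510: acquire an image; S530: recognize test-related information in
    each region of interest; S550: compare with the expected output."""
    image = acquire_image()                                    # S510
    info = {}
    for name, (x, y, w, h) in regions.items():                 # S530
        sub_image = [row[x:x + w] for row in image[y:y + h]]
        info[name] = recognize(sub_image)
    return info == expected_output, info                       # S550

def acquire():
    """Stub camera: a 1x1 'image' whose single cell already holds text."""
    return [["410"]]

def recognize(sub_image):
    """Stub recognizer: read the single cell directly."""
    return sub_image[0][0]

passed, info = run_test_method(acquire, recognize,
                               {"rpm": (0, 0, 1, 1)},
                               {"rpm": "410"})
```

In a real deployment the stubs would be replaced by the camera capture and the image recognition algorithm described above.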
According to an exemplary embodiment, the test method makes it possible to control the human-machine interface so that it is tested according to a specific test protocol, and to automatically determine whether the human-machine interface passes the test. Therefore, test efficiency can be greatly improved, and errors caused by manually determining test results can be prevented or reduced. Further, the region of interest may be determined in advance; for example, the position information of the region of interest may be determined from the pre-test image of the test object (the human-machine interface) and stored, and the stored position information may then be applied to the post-test images of one or more subsequent tests, so that image recognition can be performed more efficiently and the test result determined. The time taken for, for example, a plurality of tests can thus be shortened.
It should be understood that, although this description is organized into various embodiments, this is for clarity only; as those skilled in the art will recognize, features of the embodiments described herein may be combined as appropriate to form further embodiments.
The above description is only an exemplary embodiment of the present invention, and is not intended to limit the scope of the present invention. Any equivalent alterations, modifications and combinations can be made by those skilled in the art without departing from the spirit and principles of the invention.
Claims (16)
1. Test apparatus, characterized in that the test apparatus comprises:
an image acquisition device (110) configured to acquire an image of a human-machine interface;
an image processing device (130) configured to process the image acquired by the image acquisition device to obtain test-related information, the test-related information including information displayed by the human-machine interface when the human-machine interface completes the test;
a result determination means (150) configured to determine a result of the test based on the information related to the test obtained by the image processing means.
2. The test apparatus of claim 1, wherein the image acquisition device comprises a camera for taking a picture of the human-machine interface.
3. The test apparatus of claim 1, wherein the image processing device comprises:
a region-of-interest determination unit (131) configured to determine at least one region of interest showing test-related information in the image acquired by the image acquisition means;
an information identification unit (133) configured to identify test-related information from the at least one region of interest, respectively.
4. The test apparatus of claim 3, wherein the image processing device further comprises:
an image segmentation unit (135) configured to divide the image acquired by the image acquisition means into at least one sub-image corresponding to the at least one region of interest, respectively, wherein the sub-image comprises a portion of the image acquired by the image acquisition means which is located in the corresponding region of interest,
wherein the information identifying unit identifies information related to the test from the at least one sub-image, respectively.
5. The test apparatus according to claim 4, wherein the image acquisition device is configured to acquire a pre-test image of the human-machine interface before the test is performed, and the region-of-interest determination unit is configured to determine the region of interest in the pre-test image and to obtain position information of the region of interest in the pre-test image.
6. The test apparatus according to claim 5, wherein the image acquisition device is further configured to acquire a post-test image of the human-machine interface after the test is completed, and the image segmentation unit is configured to divide the post-test image into sub-images corresponding to the region of interest based on the position information of the region of interest in the pre-test image.
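Claims 4 through 6 describe locating regions of interest once on the pre-test image and reusing those positions to cut the post-test image into sub-images. A hedged sketch using NumPy slicing; the rectangle format `(top, left, height, width)` and all coordinates are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Regions of interest are located on the pre-test image; their pixel
# positions are then reused to segment the post-test image.

def crop_rois(image, rois):
    # rois: list of (top, left, height, width) rectangles in pixels.
    return [image[t:t + h, l:l + w] for (t, l, h, w) in rois]

pre_test = np.zeros((480, 640), dtype=np.uint8)   # simulated pre-test frame
rois = [(10, 20, 50, 100), (100, 300, 40, 120)]   # positions found on pre_test

post_test = np.ones((480, 640), dtype=np.uint8)   # frame after the test ran
sub_images = crop_rois(post_test, rois)           # same positions reused

print([s.shape for s in sub_images])              # -> [(50, 100), (40, 120)]
```

Determining the positions only once, before the test, lets the segmentation of every later frame be a cheap slicing operation rather than a fresh detection pass.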
7. The test apparatus of claim 1, wherein the result determination device is configured to compare the test-related information with an expected output of the human-machine interface, the result determination device determining that the test has passed when the test-related information matches the expected output of the human-machine interface, and determining that the test has failed when the test-related information differs from the expected output of the human-machine interface.
8. The test apparatus of claim 1, wherein the test apparatus further comprises:
a test control device (170) configured to provide a test protocol to the human-machine interface to cause the human-machine interface to operate according to the test protocol so that the human-machine interface is tested, wherein the test protocol includes an expected output of the human-machine interface.
9. A method of testing, the method comprising:
acquiring an image of a human-machine interface;
processing the acquired image to obtain test-related information, wherein the test-related information includes information displayed by the human-machine interface when the human-machine interface completes the test;
determining a result of the test based on the obtained test-related information.
10. The method of claim 9, wherein the step of acquiring an image of a human-machine interface comprises: taking a picture of the human-machine interface.
11. The method of claim 9, wherein the step of obtaining information related to the test comprises:
determining, in the acquired image, at least one region of interest showing test-related information;
identifying test-related information from the at least one region of interest, respectively.
12. The method of claim 11, wherein the step of obtaining test-related information further comprises: dividing the acquired image into at least one sub-image corresponding respectively to the at least one region of interest, wherein each sub-image comprises the portion of the acquired image that is located in the corresponding region of interest;
and the step of identifying test-related information comprises: identifying test-related information from the at least one sub-image, respectively.
13. The method of claim 12,
the step of acquiring an image of the human-machine interface comprises: acquiring a pre-test image of the human-machine interface before the test is performed,
the step of determining at least one region of interest comprises: determining the region of interest in the pre-test image and obtaining position information of the region of interest in the pre-test image.
14. The method of claim 13,
the step of acquiring an image of the human-machine interface further comprises: acquiring a post-test image of the human-machine interface after the test is completed,
the step of dividing into at least one sub-image comprises: dividing the post-test image into sub-images corresponding to the region of interest based on the position information of the region of interest in the pre-test image.
15. The method of claim 9, wherein the step of determining the test result comprises: comparing the test-related information with an expected output of the human-machine interface, determining that the test has passed when the test-related information matches the expected output of the human-machine interface, and determining that the test has failed when the test-related information differs from the expected output of the human-machine interface.
16. The method of claim 9, wherein the method further comprises:
providing a test protocol to the human-machine interface to cause the human-machine interface to operate according to the test protocol so that the human-machine interface is tested, wherein the test protocol includes an expected output of the human-machine interface.
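Claims 8 and 16 add a test control device/step that supplies a test protocol, including the expected output, to the human-machine interface. A sketch of how such a protocol might be represented and executed follows; the dictionary structure, the step strings, and the simulated interface are all assumptions made for illustration, since the claims leave the protocol format open:

```python
# Illustrative test protocol: a sequence of operations for the HMI plus
# the output the interface is expected to display once the test completes.

protocol = {
    "steps": ["open_menu", "select_speed", "set_value:42"],
    "expected_output": "42.0 rpm",
}

def run_protocol(hmi, protocol):
    # Drive the simulated HMI through each step; a real test control
    # device would send these commands to the interface under test.
    for step in protocol["steps"]:
        if step.startswith("set_value:"):
            hmi["display"] = step.split(":", 1)[1] + ".0 rpm"
    return hmi["display"]

hmi = {"display": ""}
observed = run_protocol(hmi, protocol)
result = "pass" if observed == protocol["expected_output"] else "fail"
print(result)  # -> pass
```

Bundling the expected output into the protocol is what lets the result determination step (claims 7 and 15) compare the image-derived information against a known reference without manual inspection.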
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610875978.1A CN107885180B (en) | 2016-09-30 | 2016-09-30 | Test apparatus and test method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610875978.1A CN107885180B (en) | 2016-09-30 | 2016-09-30 | Test apparatus and test method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107885180A CN107885180A (en) | 2018-04-06 |
CN107885180B true CN107885180B (en) | 2021-03-16 |
Family
ID=61770201
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610875978.1A Active CN107885180B (en) | 2016-09-30 | 2016-09-30 | Test apparatus and test method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107885180B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111782552B (en) * | 2020-08-07 | 2021-05-18 | 广州极点三维信息科技有限公司 | Automatic testing method and device based on region division |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103631246A (en) * | 2012-08-20 | 2014-03-12 | 珠海格力电器股份有限公司 | Test method, test device, test tool and test system of electronic equipment |
CN103955678A (en) * | 2014-05-13 | 2014-07-30 | 深圳市同洲电子股份有限公司 | Image recognition method and device |
CN104598702A (en) * | 2013-10-31 | 2015-05-06 | 鸿富锦精密工业(深圳)有限公司 | Method and system for generating test report |
CN104793068A (en) * | 2014-01-22 | 2015-07-22 | 佛山市顺德区顺达电脑厂有限公司 | Image acquisition-based automatic test method |
CN105389809A (en) * | 2015-10-26 | 2016-03-09 | 广州视源电子科技股份有限公司 | Display performance testing method, system and device |
- 2016-09-30: Application CN201610875978.1A filed in China; granted as patent CN107885180B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN107885180A (en) | 2018-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109961040B (en) | Identity card area positioning method and device, computer equipment and storage medium | |
US9600893B2 (en) | Image processing device, method, and medium for discriminating a type of input image using non-common regions | |
CN107845113A (en) | Object element localization method, device and ui testing method, apparatus | |
US11323577B2 (en) | Image processing device for creating an album | |
US9672589B2 (en) | Terminal device and drawing display program for terminal device | |
CN103377119A (en) | Automatic nonstandard control testing method and device | |
CN110136153A (en) | Image processing method, device and storage medium | |
WO2021159736A1 (en) | Application compatibility test method and apparatus, and computer device | |
CN113282488A (en) | Terminal test method and device, storage medium and terminal | |
US20150271396A1 (en) | Electronic device and method for image data processing | |
CN112381092A (en) | Tracking method, device and computer readable storage medium | |
CN107885180B (en) | Test apparatus and test method | |
US11551381B2 (en) | Information processing device, information processing system, and non-transitory computer-readable medium storing information processing program | |
CN114116514A (en) | User interface test acceptance method | |
CN113807204A (en) | Human body meridian recognition method and device, equipment and storage medium | |
CN108629219B (en) | Method and device for identifying one-dimensional code | |
CN108805931B (en) | Positioning detection method and device of AR product and computer readable storage medium | |
CN111199533A (en) | Image processing apparatus | |
CN110916609A (en) | Vision detection device | |
KR101966423B1 (en) | Method for image matching and apparatus for executing the method | |
KR20170067023A (en) | Apparatus and method for inspecting drawing | |
JP7505635B2 (en) | Segmentation of Continuous Dynamic Scans | |
CN113167568B (en) | Coordinate calculation device, coordinate calculation method, and computer-readable recording medium | |
JP6175904B2 (en) | Verification target extraction system, verification target extraction method, verification target extraction program | |
CN113377983A (en) | Method and device for extracting image features of three-dimensional image and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||