CN108495125B - Camera module testing method, device and medium - Google Patents

Publication number
CN108495125B
Authority
CN
China
Prior art keywords
images
groups
local
position information
corresponding relation
Prior art date
Legal status
Active
Application number
CN201810422215.0A
Other languages
Chinese (zh)
Other versions
CN108495125A (en)
Inventor
许克亮 (Xu Keliang)
Current Assignee
Kunshan Q Technology Co Ltd
Original Assignee
Kunshan Q Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Kunshan Q Technology Co Ltd filed Critical Kunshan Q Technology Co Ltd
Priority to CN201810422215.0A priority Critical patent/CN108495125B/en
Publication of CN108495125A publication Critical patent/CN108495125A/en
Application granted granted Critical
Publication of CN108495125B publication Critical patent/CN108495125B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002: Diagnosis, testing or measuring for television cameras


Abstract

The invention discloses a camera module testing method, device, and medium. The method comprises: receiving N initial images sent by N camera units, where the N initial images correspond one-to-one to the N camera units and N is a natural number; extracting a local image from a preset area of each of the N initial images to obtain N groups of local images, and storing correspondence information between the N groups of local images and the N camera units together with first position information of the preset area; and sending the correspondence information, the first position information, and the N groups of local images to a testing device. The method, device, and medium solve the prior-art problem that limited network transmission capacity restricts the number of cameras that can be tested simultaneously, making testing inefficient, and achieve the technical effect of improving testing efficiency.

Description

Camera module testing method, device and medium
Technical Field
The invention relates to the technical field of testing, and in particular to a camera module testing method, device, and medium.
Background
To meet user demand, cameras are built into products such as mobile phones, tablet computers, and smart watches. Before a camera is installed in a product, its parameters must be tested so that cameras that do not meet the requirements can be screened out or corrected.
At present, testing a camera's parameters usually requires sending images captured by the camera to a testing device for analysis. As the technology develops, however, camera resolution and frame rate keep increasing, the captured images grow larger, and transmitting them to the testing device consumes more network resources. During testing, network transmission capacity therefore becomes a bottleneck, making it difficult to test multiple cameras simultaneously.
In short, when camera units are tested in the prior art, the limited network transmission capacity restricts the number of cameras that can be tested at the same time, so testing efficiency is low.
Disclosure of Invention
In view of the above problems, the present invention provides a camera module testing method, device, and medium that overcome the above problems or at least partially solve them.
In a first aspect, a method for testing a camera module is provided, where the method is applied to a processing device, and the method includes:
receiving N initial images sent by N camera units, where the N initial images correspond one-to-one to the N camera units and N is a natural number;
extracting a local image from a preset area of each of the N initial images to obtain N groups of local images, and storing correspondence information between the N groups of local images and the N camera units together with first position information of the preset area;
sending the correspondence information, the first position information, and the N groups of local images to a testing device;
wherein the correspondence information represents the correspondence between the N groups of local images and the N camera units, and the first position information represents the position of the preset area on the N initial images.
Optionally, after extracting the local images of each of the N initial images to obtain N groups of local images, the method further includes: splicing the N groups of local images into a target image, and storing second position information of the N groups of local images on the target image. In that case, sending the correspondence information, the first position information, and the N groups of local images to the testing device includes: sending the correspondence information, the first position information, the second position information, and the target image to the testing device.
Optionally, before the extracting the local image of the preset region of each initial image in the N initial images, the method further includes: receiving a parameter to be tested; and determining the preset area according to a preset rule based on the parameters to be tested.
Optionally, before the extracting the local image of the preset region of each initial image in the N initial images, the method further includes: receiving an input operation for determining the preset area; and determining the preset area based on the input operation.
In a second aspect, a method for testing a camera module is provided, where the method is applied to a testing device, and the method includes:
receiving corresponding relation information, first position information and N groups of local images sent by processing equipment; the corresponding relation information represents the corresponding relation between the N groups of local images and N camera units, wherein N is a natural number;
restoring to obtain N restored images corresponding to the N groups of local images one by one according to the first position information, the corresponding relation information and the N groups of local images;
testing the N restored images to obtain N groups of test result parameters which are in one-to-one correspondence with the N restored images;
and determining the test result parameters of each camera unit in the N camera units according to the corresponding relation information and the N groups of test result parameters.
Optionally, the receiving the correspondence information, the first location information, and the N groups of local images sent by the processing device includes: receiving corresponding relation information, first position information, second position information and a target image which are sent by processing equipment, wherein the second position information represents the positions of N groups of local images on the target image; the restoring according to the first position information, the correspondence information, and the N groups of local images to obtain N restored images corresponding to the N groups of local images one to one, includes: splitting the target image into N groups of local images according to the second position information; and restoring to obtain the N restored images which are in one-to-one correspondence with the N groups of local images according to the first position information, the corresponding relation information and the N groups of local images.
In a third aspect, a processing device is provided, comprising:
the first receiving module is used for receiving N initial images sent by N camera units, the N initial images correspond to the N camera units one by one, and N is a natural number;
the extraction module is used for extracting local images of a preset area of each initial image in the N initial images, wherein N groups of local images are total, and the corresponding relation information of the N groups of local images and the N camera units and the first position information of the preset area are stored; the corresponding relation information represents the corresponding relation between the N groups of local images and the N shooting units, and the first position information represents the position information of the preset area on the N initial images;
and the sending module is used for sending the corresponding relation information, the first position information and the N groups of local images to a test device.
In a fourth aspect, there is provided a test apparatus comprising:
the second receiving module is used for receiving the corresponding relation information, the first position information and the N groups of local images which are sent by the processing equipment; the corresponding relation information represents the corresponding relation between the N groups of local images and N camera units, wherein N is a natural number;
the restoring module is used for restoring and obtaining N restored images which correspond to the N groups of local images one by one according to the first position information and the N groups of local images;
the test module is used for testing the N restored images to obtain N groups of test result parameters which are in one-to-one correspondence with the N restored images;
and the determining module is used for determining the test result parameters of each camera unit in the N camera units according to the corresponding relation information and the N groups of test result parameters.
In a fifth aspect, an electronic device is provided, which comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the method according to any one of the first to second aspects.
A sixth aspect provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any one of the first to second aspects.
The technical scheme provided by the embodiment of the invention at least has the following technical effects or advantages:
With the camera module testing method, device, and medium provided by the embodiments of the invention, a local image is extracted from each of the N initial images sent by the N camera units, and the N groups of extracted local images are sent to the testing device together with the first position information of the preset area and the correspondence information between the local images and the camera units. The testing device can then restore the N initial images for testing based on the first position information and the N groups of local images, and determine the test result parameters of each camera according to the correspondence information. Because the extracted local images are transmitted instead of the complete images captured by the camera units, the network transmission resources consumed when sending image data to the testing device are reduced, multiple camera units can be tested simultaneously, and testing efficiency is improved.
Furthermore, in the embodiments of the invention the N groups of extracted local images are spliced into one target image, which is sent to the testing device together with second position information giving the positions of the N groups of local images on the target image. Transmitting one continuous target image in place of the N groups of local images, while the second position information still allows the testing device to split them apart, avoids the computational overhead and processing time of inserting marker bits between the local images of each camera to separate the N groups, and effectively improves testing efficiency.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic diagram of the connection of an image pickup unit, processing equipment and testing equipment in the embodiment of the invention;
FIG. 2 is a flowchart of a method for testing a camera module on a processing device according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for testing a camera module according to an embodiment of the present invention on a testing device side;
FIG. 4 is a schematic diagram of an embodiment of a restored image;
FIG. 5 is a schematic structural diagram of a processing apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a test apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the invention;
FIG. 8 is a schematic structural diagram of a storage medium according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Before the embodiments are described, the connections among the camera unit, the processing device, and the testing device in the embodiments of the invention are explained. As shown in fig. 1, the camera unit 101 is connected to the processing device 102, and the processing device 102 to the testing device 103, by a wired or wireless network. There may be one or more camera units 101, which is not limited herein.
Example one
The embodiment provides a method for testing a camera module, which is applied to a processing device, and as shown in fig. 2, the method includes:
step S201, receiving N initial images sent by N camera units, wherein the N initial images correspond to the N camera units one by one, and N is a natural number;
step S202, extracting local images of a preset area of each initial image in the N initial images, wherein N groups of local images are total, and storing corresponding relation information of the N groups of local images and the N shooting units and first position information of the preset area; the corresponding relation information represents the corresponding relation between the N groups of local images and the N shooting units, and the first position information represents the position information of the preset area on the N initial images;
step S203, sending the corresponding relationship information, the first position information, and the N groups of local images to a testing device.
It should be noted that, in the embodiment of the present application, the camera unit may be an independent camera, or may also be a camera on an electronic product such as a mobile phone or a tablet computer, which is not limited herein; the processing device may be a circuit board, a computer, or a dedicated processing chip, and is not limited herein; the test device may be a circuit board, a computer, or a dedicated processing chip, again without limitation.
The implementation steps of the camera module testing method of this embodiment are described in detail below with reference to fig. 1:
First, step S201 is executed: receive N initial images sent by N camera units, where the N initial images correspond one-to-one to the N camera units and N is a natural number.
In the embodiments of the present application, the one-to-one correspondence between the N initial images and the N camera units means that each of the N camera units captures one initial image, so the N camera units capture N initial images in total, and each camera unit corresponds to the initial image it captured.
In a specific implementation, the N camera units may transmit the N initial images to the processing device in parallel, in series, or partly in series and partly in parallel, which is not limited herein.
Specifically, after receiving the N initial images, the processing device buffers them for the subsequent step S202.
Then, step S202 is executed to extract local images of a preset region of each of the N initial images, N sets of local images are total, and correspondence information between the N sets of local images and the N imaging units and first position information of the preset region are stored.
Before the local images are extracted, the preset area to be extracted must be determined. In a specific implementation there may be many methods for determining the preset area; two examples are listed below:
first, it is determined based on the parameter to be tested.
That is, the parameters to be tested are received first, and the preset area is then determined from the parameters to be tested according to a preset rule.
Specifically, the processing device may receive the parameters to be tested sent in advance by the testing device, or may receive parameters to be tested entered by a tester on the processing device, which is not limited herein.
The preset area may be determined from the parameters to be tested as follows: correspondence between test parameters and preset areas is configured in advance, and after receiving the parameters to be tested, the processing device determines the preset area from this correspondence.
For example, the preset area corresponding to a sharpness test may be configured as the middle area of the image, the preset area corresponding to an edge-imaging test as the edge area, and the preset area corresponding to a corner-exposure test as the four corner areas of the image. When the parameter to be tested received by the processing device is a corner exposure parameter, the preset areas are determined to be the four corners of the N initial images.
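The parameter-to-region lookup described above can be sketched as follows. This is an illustrative sketch only, not code from the patent; the function name, the 64-pixel box size, and the (x, y, w, h) coordinate convention are assumptions.

```python
# Illustrative sketch; names, sizes, and formats are assumptions, not from the patent.
def preset_regions(param, width, height, box=64):
    """Return the preset area(s) for a test parameter as (x, y, w, h) boxes.

    The mapping mirrors the example in the text: a sharpness test uses the
    middle of the image, a corner-exposure test the four corner areas.
    """
    regions = {
        "sharpness": [
            ((width - box) // 2, (height - box) // 2, box, box),  # centre
        ],
        "corner_exposure": [
            (0, 0, box, box),                       # top-left
            (width - box, 0, box, box),             # top-right
            (0, height - box, box, box),            # bottom-left
            (width - box, height - box, box, box),  # bottom-right
        ],
    }
    return regions[param]
```

A lookup like this makes the "preset rule" a plain data table, so new parameter/region pairs can be added without changing the extraction logic.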
Second, determination from an input operation by a tester.
That is, an input operation for determining the preset area is received first, and the preset area is then determined based on that input operation.
Specifically, the input operation may be the user entering coordinate data of the preset area, so that the processing device determines the preset area from the coordinates; it may be the user choosing among displayed options such as "middle", "edge", "four corners", "left side", and "right side", so that the processing device determines the preset area from the selected option; or it may be the user dragging a mouse or sliding a finger over a displayed preview image, so that the processing device takes the area circled by the user as the preset area. This is not limited herein.
Of course, in the specific implementation process, the methods for determining the preset region are not limited to the above two methods, and different methods for determining the preset region may be selected according to the test requirement, which are not limited herein and are not listed.
After the preset area is determined, the processing device extracts the image within the preset area of each of the N initial images as that image's local image, so N groups of local images are extracted from the N initial images. Depending on the test requirements, each group may contain one or more local images.
Further, the processing device stores the correspondence information between the N groups of local images and the N camera units, so that the testing device can later attribute the image test result parameters to each camera unit; it also stores the first position information representing the positions of the preset area on the N initial images, so that the testing device can later reconstruct restored images for testing from the N groups of local images.
For example, the correspondence information may be recorded in the form K-n, where n is the label of a local image and K is the label of the camera unit corresponding to it. The first position information may be recorded in the form n-x, y, z, o, where n is the label of a local image and x, y, z, o are the coordinates of its four vertices on the corresponding initial image. Alternatively, the first position information may be recorded in the form n-m, where n is the label of a local image and m is a position label preset on the corresponding initial image; the position label may be a number, a letter, or another symbol, which is not limited herein.
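Extraction and the two bookkeeping records can be sketched together. This is a hypothetical sketch: images are plain nested lists of pixel values, labels are integers, and the first position information is stored as (x, y, w, h) boxes rather than four vertices; none of these choices are prescribed by the patent.

```python
# Illustrative sketch; data formats are assumptions, not from the patent.
def extract_locals(initial_images, regions):
    """Crop the preset regions out of each camera's initial image.

    Returns:
      locals_        -- local-image label n -> cropped pixel rows
      correspondence -- camera label K -> labels n of its local images
      first_position -- label n -> (x, y, w, h) of the crop on the initial image
    """
    locals_, correspondence, first_position = {}, {}, {}
    label = 0
    for cam_id, img in initial_images.items():  # img: list of pixel rows
        correspondence[cam_id] = []
        for (x, y, w, h) in regions:
            label += 1
            locals_[label] = [row[x:x + w] for row in img[y:y + h]]
            correspondence[cam_id].append(label)
            first_position[label] = (x, y, w, h)
    return locals_, correspondence, first_position
```

Keeping the correspondence and position records separate from the pixel data is what later lets the testing device re-attribute results to cameras without re-transmitting full images.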
In the embodiments of the present application, it is considered that if the N groups of local images were sent directly to the testing device, marker bits would usually have to be added at the beginning and end of each image so that the local images could be separated; setting and adding these marker bits consumes computing resources and prolongs processing time, reducing testing efficiency. Moreover, image data may be interleaved with control data (such as I2C data) for transmission, and the control data also needs inserted marker signals to be distinguished; too many marker signals lower transmission efficiency and increase processing complexity.
To solve the above problem, this embodiment further includes, after the N groups of local images are extracted: splicing the N groups of local images into a target image. The processing device then stores second position information giving the positions of the N groups of local images on the target image, and sends the correspondence information, the first position information, the second position information, and the target image to the testing device, so that the testing device can later split the N groups of local images out of the target image according to the second position information.
In a specific implementation there are many ways to splice the N groups of local images into a target image; two are listed below:
first, random splicing.
That is, the local images in the N groups are spliced together in arbitrary order to form the target image, and the position of each local image on the target image is recorded and stored as the second position information.
For example, the second position information may be recorded as n-a, b, c, d, where n is the label of a local image and a, b, c, d are the coordinates of its four vertices on the target image. Alternatively, it may be recorded as n-k, where n is the label of a local image and k is a preset position label of that local image on the target image; the position label may be a number, a letter, or another symbol, which is not limited herein.
Second, splicing each group of local images first and then splicing the groups together.
That is, the local images of the same group are spliced together first, and the spliced per-group images are then spliced to form the target image. The position of each group of local images on the target image is recorded and stored as the second position information.
Of course, in the specific implementation process, other splicing manners may also be adopted, and are not limited herein.
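One concrete splicing scheme can be sketched as follows, assuming for simplicity that all local images have the same height and are laid side by side. The layout, function name, and (x, y, w, h) record format are illustrative assumptions, not the patent's prescribed scheme.

```python
# Illustrative sketch; the side-by-side layout is an assumption, not from the patent.
def splice(locals_):
    """Splice same-height local images side by side into one target image.

    Returns the target image and the second position information
    (label n -> (x, y, w, h) of that local image on the target image).
    """
    second_position, target, x = {}, None, 0
    for label, crop in sorted(locals_.items()):
        h, w = len(crop), len(crop[0])
        if target is None:
            target = [[] for _ in range(h)]  # one empty row per pixel row
        for r in range(h):
            target[r].extend(crop[r])        # append this crop's pixels
        second_position[label] = (x, 0, w, h)
        x += w
    return target, second_position
```

Because the offsets are recorded as data, the receiver needs no in-band marker bits to find the boundaries between local images, which is exactly the saving described above.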
A method by which the testing device restores and tests the target image or the N groups of local images after receiving them is described in detail in the second embodiment.
Specifically, because the extracted local images are transmitted instead of the complete images captured by the camera units, the network transmission resources consumed when sending image data to the testing device are reduced, so multiple camera units can be tested simultaneously and testing efficiency is improved. Furthermore, transmitting one continuous target image instead of the N groups of local images avoids the computational overhead and processing time that would be incurred by transmitting the N groups separately and segmenting them with marker bits inserted between the local images of each camera, which effectively improves testing efficiency.
Based on the same inventive concept, the application also provides a method of the testing device side in fig. 1 corresponding to the embodiment, which is described in detail in the second embodiment.
Example two
This embodiment provides a camera module testing method applied to a testing device. As shown in fig. 3, the method includes:
step S301, receiving corresponding relation information, first position information and N groups of local images sent by processing equipment; the corresponding relation information represents the corresponding relation between the N groups of local images and N camera units, wherein N is a natural number;
step S302, according to the first position information and the N groups of local images, restoring to obtain N restored images corresponding to the N groups of local images one by one;
step S303, testing the N restored images to obtain N groups of test result parameters which are in one-to-one correspondence with the N restored images;
step S304, determining the test result parameters of each camera unit in the N camera units according to the corresponding relation information and the N groups of test result parameters.
The detailed steps of the camera module testing method on the testing-device side are described below with reference to fig. 3:
First, step S301 is executed: receive the correspondence information, the first position information, and the N groups of local images sent by the processing device, where the correspondence information represents the correspondence between the N groups of local images and the N camera units, and N is a natural number.
The correspondence information, the first position information, and the N groups of local images have been described in detail in the first embodiment and, for brevity, are not described again here.
It should be noted that if the testing device receives a target image formed by splicing the N groups of local images, it also receives the second position information sent by the processing device, which represents the positions of the N groups of local images on the target image, so that the testing device can split the N groups of local images out of the target image according to the second position information.
Then, step S302 is executed to restore and obtain N restored images corresponding to the N groups of partial images one to one according to the first position information, the correspondence information, and the N groups of partial images.
For example, the correspondence information may be recorded in the form K-n, where n is the label of a local image and K is the label of the camera unit corresponding to it. Local images with the same K are identified as belonging to the same group, and the first position information of those images is then obtained. Assume the first position information has the form n-m, where n is the label of a local image and m is its position label on the corresponding initial image. According to the labels n and the corresponding position labels m, the local images of the same group are restored onto the same restored image according to the first position information. The position labels m may be, for example, the numbers 1 to 9 as shown in fig. 4, or letters or other symbols, which is not limited herein.
Specifically, the N restored images are not exactly the same as the N initial images of the first embodiment: a restored image contains only the local images of the preset area, and the remaining area may be set blank or transparent. For example, as shown in fig. 4, a is an initial image, b shows the extracted local images (the four corner regions of the initial image), and c is the restored image, whose area outside the local images is displayed in a solid color.
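Restoration can be sketched as pasting each camera's local images back at their recorded positions on a blank canvas, matching the solid-color filler of fig. 4c. The (x, y, w, h) position format and the blank fill value are assumptions of this sketch, not the patent's prescribed representation.

```python
# Illustrative sketch; position format and blank fill are assumptions.
def restore(locals_, correspondence, first_position, width, height, blank=0):
    """Build one restored image per camera: a blank canvas with each of the
    camera's local images pasted back at its first-position coordinates."""
    restored = {}
    for cam_id, labels in correspondence.items():
        canvas = [[blank] * width for _ in range(height)]  # solid-color filler
        for n in labels:
            x, y, w, h = first_position[n]
            for r in range(h):
                canvas[y + r][x:x + w] = locals_[n][r]     # paste one crop row
        restored[cam_id] = canvas
    return restored
```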
It should be noted that if a target image formed by splicing the N groups of local images was received in step S301, the N groups of local images must be split out of the target image according to the second position information before the N restored images are restored. For example, if the second position information has the form n-a, b, c, d, where n is the label of a local image and a, b, c, d are the coordinates of its four vertices on the target image, the local image is extracted according to a, b, c, d and labelled n.
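Splitting the target image by the second position information is simply the inverse of the splicing step. A minimal sketch, assuming an (x, y, w, h) record format, which is only an illustrative choice:

```python
# Illustrative sketch; the (x, y, w, h) record format is an assumption.
def split(target, second_position):
    """Recover each local image from the spliced target image using the
    second position information (label n -> (x, y, w, h) on the target)."""
    return {
        n: [row[x:x + w] for row in target[y:y + h]]
        for n, (x, y, w, h) in second_position.items()
    }
```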
After the N restored images are obtained, step S303 is executed to test the N restored images and obtain N sets of test result parameters in one-to-one correspondence with the N restored images.
In a specific implementation, the test result parameter may be hue, sharpness, brightness, or the like, and is not limited herein.
Then, step S304 is executed to determine the test result parameters of each of the N image capturing units according to the correspondence information and the N sets of test result parameters.
For example, the correspondence information takes the form K-n, where n is the label of a partial image and K is the label of the image capturing unit corresponding to that partial image. Suppose that testing restored image a yields test result parameter b. From the label n of a partial image on restored image a, the label K of the corresponding image capturing unit can be looked up in the correspondence information K-n, so test result parameter b is determined to be the test result parameter of the image capturing unit labeled K.
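The lookup in this example can be expressed as a one-line mapping; the dictionary form of the K-n records and of the test result parameters is an assumption made for illustration.

```python
def assign_results_to_units(test_results, correspondence):
    """Attribute each restored image's test result parameters to its camera unit.

    test_results:   {n: params}  results keyed by the label n of a partial
                                 image appearing on the tested restored image
    correspondence: {n: K}       the K-n records (partial label -> unit label)
    """
    return {correspondence[n]: params for n, params in test_results.items()}
```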
Specifically, in the embodiment of the present invention, partial images are extracted from each of the N initial images sent by the N image capturing units, and the N groups of extracted partial images, the first position information of the preset regions, and the correspondence information between the partial images and the image capturing units are sent to the test device together, so that the test device can restore the N initial images for testing based on the first position information and the N groups of partial images, and determine the test result parameters of each image capturing unit according to the correspondence information. Because the extracted partial images, rather than the complete images captured by the image capturing units, are transmitted, the network transmission resources occupied when sending image data to the test device are reduced, multiple image capturing units can be tested simultaneously, and test efficiency is improved.
Furthermore, the embodiment of the present invention stitches the N groups of extracted partial images into a target image and sends the target image, together with the second position information of the N groups of partial images on the target image, to the test device. On the basis of ensuring that the test device can split the target image using the second position information, transmitting a single contiguous target image in place of the N groups of partial images avoids the large computation overhead and time-consuming processing caused by inserting marker bits between the partial images of each camera to delimit the N groups, effectively improving test efficiency.
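The stitching described above might look as follows. Placing equally sized partial images on a fixed-column grid, and the returned vertex format, are simplifying assumptions; the patent does not prescribe a particular layout.

```python
import numpy as np

def stitch_partials(partials, patch_h, patch_w, cols=4):
    """Stitch equally sized partial images into one target image on a grid,
    recording each image's four (x, y) vertices as second position information."""
    labels = sorted(partials)
    rows = -(-len(labels) // cols)  # ceiling division
    target = np.zeros((rows * patch_h, cols * patch_w), dtype=float)
    second_position = {}
    for i, n in enumerate(labels):
        r, c = divmod(i, cols)
        y0, x0 = r * patch_h, c * patch_w
        target[y0:y0 + patch_h, x0:x0 + patch_w] = partials[n]
        # Four vertices of partial image n on the target image.
        second_position[n] = ((x0, y0), (x0 + patch_w, y0),
                              (x0, y0 + patch_h), (x0 + patch_w, y0 + patch_h))
    return target, second_position
```

Because the patches abut with no separators, no marker bits are needed; the second position information alone is enough for the test device to split the target image back apart.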
Based on the same inventive concept, the embodiment of the invention also provides a device corresponding to the method of the first embodiment, which is described in detail in the third embodiment.
EXAMPLE III
As shown in fig. 5, the present embodiment provides a processing apparatus including:
a first receiving module 501, configured to receive N initial images sent by N camera units, where the N initial images correspond to the N camera units one to one, and N is a natural number;
an extracting module 502, configured to extract local images of the preset region of each of the N initial images, obtaining N groups of local images in total, and to store the correspondence information between the N groups of local images and the N camera units and the first position information of the preset region; the correspondence information represents the correspondence between the N groups of local images and the N camera units, and the first position information represents the position of the preset region on the N initial images;
a sending module 503, configured to send the correspondence information, the first position information, and the N groups of local images to a test device.
Since the device described in the third embodiment of the present invention is a device used to implement the method of the first embodiment of the present invention, a person skilled in the art can understand the specific structure and variations of the device based on the method described in the first embodiment, and thus the detailed description is omitted here. All devices used to carry out the method of the first embodiment of the invention fall within the protection scope of the invention.
Based on the same inventive concept, the embodiment of the invention also provides a device corresponding to the method in the second embodiment, which is detailed in the fourth embodiment.
EXAMPLE four
As shown in fig. 6, the present embodiment provides a test apparatus including:
a second receiving module 601, configured to receive the correspondence information, the first position information, and N groups of local images sent by the processing device, where the correspondence information represents the correspondence between the N groups of local images and N camera units, and N is a natural number;
a restoring module 602, configured to restore N restored images in one-to-one correspondence with the N groups of local images, according to the first position information and the N groups of local images;
a testing module 603, configured to test the N restored images to obtain N sets of test result parameters in one-to-one correspondence with the N restored images;
a determining module 604, configured to determine the test result parameters of each of the N camera units according to the correspondence information and the N sets of test result parameters.
Since the device described in the fourth embodiment of the present invention is a device used to implement the method of the second embodiment of the present invention, a person skilled in the art can understand the specific structure and variations of the device based on the method described in the second embodiment, and thus the detailed description is omitted here. All devices used to carry out the method of the second embodiment of the invention fall within the protection scope of the invention.
Based on the same inventive concept, the embodiment of the present invention further provides an electronic device corresponding to the method in the first embodiment or the second embodiment, which is described in detail in the fifth embodiment.
EXAMPLE five
As shown in fig. 7, the present embodiment provides an electronic device, which includes a memory 710, a processor 720, and a computer program 711 stored in the memory 710 and executable on the processor 720, wherein the processor 720, when executing the computer program 711, implements the method of any one of the first and second embodiments of the present invention.
Since the electronic device described in the fifth embodiment of the present invention is a device used to implement the method of either the first or the second embodiment of the present invention, a person skilled in the art can understand the specific structure and variations of the device based on the methods described in those embodiments, and thus the details are not repeated here. Any device used to carry out the method of the first or second embodiment of the present invention falls within the protection scope of the present invention.
Based on the same inventive concept, the embodiment of the present invention further provides a storage medium corresponding to the method in the first embodiment or the second embodiment, which is described in detail in the sixth embodiment.
EXAMPLE six
As shown in fig. 8, the present embodiment provides a computer-readable storage medium 800, on which a computer program 811 is stored, wherein the computer program 811, when executed by a processor, implements the method of any one of the first and second embodiments of the present invention.
The technical scheme provided by the embodiment of the invention at least has the following technical effects or advantages:
The camera module testing method, device, and medium provided by the embodiments of the present invention extract partial images from each of the N initial images sent by the N camera units, and send the extracted N groups of partial images, the first position information of the preset regions, and the correspondence information between the partial images and the camera units to the test device together, so that the test device can restore the N initial images for testing based on the first position information and the N groups of partial images, and determine the test result parameters of each camera unit according to the correspondence information. Because the extracted partial images, rather than the complete images captured by the camera units, are transmitted, the network transmission resources occupied when sending image data to the test device are reduced, multiple camera units can be tested simultaneously, and test efficiency is improved.
Furthermore, the embodiments of the present invention stitch the N groups of extracted partial images into a target image and send the target image, together with the second position information of the N groups of partial images on the target image, to the test device. On the basis of ensuring that the test device can split the target image using the second position information, transmitting a single contiguous target image in place of the N groups of partial images avoids the large computation overhead and time-consuming processing caused by inserting marker bits between the partial images of each camera to delimit the N groups, effectively improving test efficiency.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in an apparatus of an embodiment of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A camera module testing method is applied to processing equipment, and comprises the following steps:
receiving N initial images sent by N camera units, wherein the N initial images correspond to the N camera units one by one, and N is a natural number;
determining a preset region corresponding to a parameter to be tested according to the parameter to be tested, extracting local images of the preset region of each of the N initial images, obtaining N groups of local images in total, and storing corresponding relation information of the N groups of local images and the N camera units and first position information of the preset region;
sending the corresponding relation information, the first position information and the N groups of local images to a test device;
the corresponding relation information represents the corresponding relation between the N groups of local images and the N camera units, and the first position information represents the position of the preset region on the N initial images.
2. The method of claim 1, wherein:
after the extracting the local images of the preset area of each initial image in the N initial images, for N groups of local images, the method further includes: splicing the N groups of local images into a target image, and storing second position information of the N groups of local images on the target image;
the sending the corresponding relationship information, the first position information, and the N groups of local images to a test device includes: and sending the corresponding relation information, the first position information, the second position information and the target image to a test device.
3. The method of claim 1, wherein before said extracting the local image of the preset region of each of the N initial images, further comprising:
receiving a parameter to be tested;
and determining the preset area according to a preset rule based on the parameters to be tested.
4. The method of claim 1, wherein before said extracting the local image of the preset region of each of the N initial images, further comprising:
receiving an input operation for determining the preset area;
and determining the preset area based on the input operation.
5. A camera module testing method is characterized in that the method is applied to testing equipment, and the method comprises the following steps:
receiving corresponding relation information, first position information and N groups of local images sent by processing equipment; the corresponding relation information represents the corresponding relation between the N groups of local images and N camera units, wherein N is a natural number; the first position information represents the position, on the initial image, of the preset region where the local images are located; and the preset region is the region corresponding to the parameter to be tested, determined according to that parameter;
restoring to obtain N restored images corresponding to the N groups of local images one by one according to the first position information, the corresponding relation information and the N groups of local images;
testing the parameters to be tested of the N restored images to obtain N groups of test result parameters which are in one-to-one correspondence with the N restored images;
and determining the test result parameters of each camera unit in the N camera units according to the corresponding relation information and the N groups of test result parameters.
6. The method of claim 5, wherein:
the receiving of the correspondence information, the first location information, and the N groups of partial images sent by the processing device includes: receiving corresponding relation information, first position information, second position information and a target image which are sent by processing equipment, wherein the second position information represents the positions of N groups of local images on the target image;
the restoring according to the first position information, the correspondence information, and the N groups of local images to obtain N restored images corresponding to the N groups of local images one to one, includes: splitting the target image into N groups of local images according to the second position information; and restoring to obtain the N restored images which are in one-to-one correspondence with the N groups of local images according to the first position information, the corresponding relation information and the N groups of local images.
7. A processing device, comprising:
the first receiving module is used for receiving N initial images sent by N camera units, the N initial images correspond to the N camera units one by one, and N is a natural number;
the extraction module is used for determining a preset region corresponding to the parameter to be tested according to the parameter to be tested, extracting the local images of the preset region of each of the N initial images, obtaining N groups of local images in total, and storing the corresponding relation information of the N groups of local images and the N camera units and the first position information of the preset region; the corresponding relation information represents the corresponding relation between the N groups of local images and the N camera units, and the first position information represents the position of the preset region on the N initial images;
and the sending module is used for sending the corresponding relation information, the first position information and the N groups of local images to a test device.
8. A test apparatus, comprising:
the second receiving module is used for receiving the corresponding relation information, the first position information and the N groups of local images sent by the processing equipment; the corresponding relation information represents the corresponding relation between the N groups of local images and N camera units, wherein N is a natural number; the first position information represents the position, on the initial image, of the preset region where the local images are located; and the preset region is the region corresponding to the parameter to be tested, determined according to that parameter;
the restoring module is used for restoring and obtaining N restored images which correspond to the N groups of local images one by one according to the first position information and the N groups of local images;
the testing module is used for testing the parameters to be tested of the N restored images to obtain N groups of testing result parameters which are in one-to-one correspondence with the N restored images;
and the determining module is used for determining the test result parameters of each camera unit in the N camera units according to the corresponding relation information and the N groups of test result parameters.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-6 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 6.
CN201810422215.0A 2018-05-04 2018-05-04 Camera module testing method, device and medium Active CN108495125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810422215.0A CN108495125B (en) 2018-05-04 2018-05-04 Camera module testing method, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810422215.0A CN108495125B (en) 2018-05-04 2018-05-04 Camera module testing method, device and medium

Publications (2)

Publication Number Publication Date
CN108495125A CN108495125A (en) 2018-09-04
CN108495125B true CN108495125B (en) 2020-07-21

Family

ID=63353661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810422215.0A Active CN108495125B (en) 2018-05-04 2018-05-04 Camera module testing method, device and medium

Country Status (1)

Country Link
CN (1) CN108495125B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111787305B (en) * 2019-04-04 2023-02-10 南昌欧菲光电技术有限公司 Electronic device and intelligent manufacturing method thereof
CN113612987B (en) * 2021-07-15 2024-10-01 昆山丘钛光电科技有限公司 Direction gain parameter acquisition method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102413356A (en) * 2011-12-30 2012-04-11 武汉烽火众智数字技术有限责任公司 Detecting system for video definition and detecting method thereof
CN103313032A (en) * 2012-03-12 2013-09-18 三星泰科威株式会社 Method and system for analyzing multi-channel images

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
JP3203607B2 (en) * 1993-04-28 2001-08-27 株式会社日立製作所 Screen defect detection device
JP2011087090A (en) * 2009-10-14 2011-04-28 Panasonic Corp Image processing method, image processing apparatus, and imaging system
CN103067736B (en) * 2012-12-20 2015-04-22 广州视源电子科技股份有限公司 Automatic test system based on character recognition
CN104036490B (en) * 2014-05-13 2017-03-29 重庆大学 Foreground segmentation method suitable for mobile communications network transmission
US9699400B2 (en) * 2014-06-06 2017-07-04 Flir Systems, Inc. Systems and methods for dynamic data management for camera systems
JP2017163228A (en) * 2016-03-07 2017-09-14 パナソニックIpマネジメント株式会社 Surveillance camera


Also Published As

Publication number Publication date
CN108495125A (en) 2018-09-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: No.3, Taihong Road, Kunshan high tech Industrial Development Zone, Suzhou, Jiangsu Province, 215300

Patentee after: Kunshan Qiuti Microelectronics Technology Co.,Ltd.

Address before: No.3, Taihong Road, Kunshan high tech Industrial Development Zone, Suzhou, Jiangsu Province, 215300

Patentee before: KUNSHAN Q TECHNOLOGY Co.,Ltd.