CN117857919A - Method for debugging image pickup device, electronic apparatus, and storage medium - Google Patents


Info

Publication number: CN117857919A
Application number: CN202311595723.6A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; no legal analysis has been performed)
Prior art keywords: scene, target image, data, image, target
Other languages: Chinese (zh)
Inventors: 高欣, 徐银威
Assignees: Jiangxi Tianlong Communication Technology Co., Ltd.; Shenzhen Tinno Mobile Technology Co., Ltd.; Shenzhen Tinno Wireless Technology Co., Ltd. (the listed assignees may be inaccurate)
Application filed by Jiangxi Tianlong Communication Technology Co., Ltd., Shenzhen Tinno Mobile Technology Co., Ltd., and Shenzhen Tinno Wireless Technology Co., Ltd.
Classification: Studio Devices (AREA)

Abstract

The invention discloses a method for debugging an image pickup device, an electronic apparatus, and a storage medium. The debugging method comprises the following steps: capturing a target image with the image pickup device under test, and calculating the sharpness of the target image; when the sharpness of the target image is less than a preset sharpness, performing scene analysis on the target image to obtain the scene of the target image; and adjusting various items of focus data of the image pickup device under test based on the scene of the target image. In this way, focus-data adjustment becomes more targeted and can be automated, which improves the debugging efficiency of the image pickup device under test and reduces the manual effort involved in the debugging process.

Description

Method for debugging image pickup device, electronic apparatus, and storage medium
Technical Field
The invention relates to the technical field of image pickup device testing, and in particular to a method for debugging an image pickup device, an electronic apparatus, and a storage medium.
Background
With the development of camera technology in the imaging industry, consumers place ever higher demands on the imaging quality of terminal devices, and how to rapidly tune the imaging quality of a terminal's image pickup device has become a focus of the industry.
In existing debugging schemes, a captured image is imported into an analysis tool and the focus data it carries is analyzed; an engineer then manually determines which focusing mechanism was triggered incorrectly and which parameters were set improperly to cause the image blur.
However, every blurred image must be imported into the analysis tool for analysis, and the data output by the algorithm must be examined to determine which focus data to modify, so the whole debugging process depends heavily on the analytical skill of the debugging engineer.
Disclosure of Invention
The invention provides a method for debugging an image pickup device, an electronic apparatus, and a storage medium, to solve the problem that debugging of an image pickup device depends excessively on manual work.
In order to solve the above technical problem, the present invention provides a method for debugging an image pickup device, comprising: capturing a target image with the image pickup device under test, and calculating the sharpness of the target image; when the sharpness of the target image is less than a preset sharpness, performing scene analysis on the target image to obtain the scene of the target image; and adjusting various items of focus data of the image pickup device under test based on the scene of the target image.
Wherein the step of performing scene analysis on the target image to obtain the scene of the target image comprises: acquiring the color brightness and feature data of the target image, and performing scene judgment on the target image based on the color brightness and the feature data to obtain the scene of the target image.
Wherein the step of performing scene judgment on the target image based on the color brightness and the feature data comprises: judging whether the color brightness meets a preset requirement; when the color brightness meets the preset requirement, determining that the scene of the target image is a sky scene; when the color brightness does not meet the preset requirement, judging whether the feature data contains portrait feature data; when portrait feature data is contained, determining that the scene of the target image is a portrait scene; and when portrait feature data is not contained, determining that the scene of the target image is a common scene.
Wherein the step of judging whether the color brightness meets the preset requirement comprises: judging whether the brightness in the color brightness is not less than LV140, whether the blue component in the color brightness is not less than 200, and whether the red and green components are each not more than 50; when the brightness is not less than LV140, the blue component is not less than 200, and the red and green components are each not more than 50, determining that the color brightness meets the preset requirement; when the brightness is less than LV140, the blue component is less than 200, the red component is greater than 50, or the green component is greater than 50, determining that the color brightness does not meet the preset requirement.
Wherein the step of adjusting various items of focus data of the image pickup device under test based on the scene of the target image comprises: determining the target focus data to adjust based on the scene of the target image; and traversing values of the target focus data within its range, recapturing the target image with the image pickup device under test after each adjustment, until the target image is sharp or the traversal is complete.
Wherein the step of determining the target focus data to adjust based on the scene of the target image comprises: when the scene of the target image is a sky scene, determining that the target focus data comprises an angle threshold and a brightness-level threshold; when the scene of the target image is a portrait scene, determining that the target focus data comprises a portrait output frame size and an expansion frame size; and when the scene of the target image is a common scene, determining that the target focus data comprises a focus frame size and a scene recognition threshold.
Wherein the step of performing scene analysis on the target image to obtain the scene of the target image further comprises: performing scene recognition on the target image through a trained scene recognition model to obtain the scene of the target image.
Wherein the step of calculating the sharpness of the target image comprises: calculating the sharpness of the target image with a gradient-based sharpness evaluation function, a frequency-domain evaluation function, an information-entropy-based sharpness evaluation function, or a statistics-based sharpness evaluation function.
In order to solve the above technical problem, the present invention further provides an electronic apparatus, comprising: a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the above method for debugging an image pickup device.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium storing program data which, when executed, implements the method for debugging an image pickup device according to any one of the above.
The beneficial effects of the invention are as follows. Different from the prior art, a target image is captured by the image pickup device under test and its sharpness is calculated; when the sharpness is less than a preset sharpness, scene analysis is performed on the target image to obtain its scene; and various items of focus data of the image pickup device under test are adjusted based on that scene. Determining the focus data to adjust from the scene makes the adjustment more targeted and allows it to be automated, which improves the debugging efficiency of the image pickup device under test, reduces manual involvement and workload in the debugging process, and, because scene-based selection of focus data suits a variety of photographic environments and image-processing applications, broadens the applicability of the debugging method.
Drawings
Fig. 1 is a flowchart of an embodiment of a method for debugging an image capturing apparatus according to the present invention;
fig. 2 is a flowchart of another embodiment of a method for debugging an image capturing apparatus according to the present invention;
fig. 3 is a flowchart of another embodiment of a method for debugging an image capturing apparatus according to the present invention;
FIG. 4 is a schematic structural diagram of an embodiment of an electronic device according to the present invention;
fig. 5 is a schematic structural diagram of an embodiment of a computer readable storage medium according to the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the present invention.
It should be noted that any directional indications in the embodiments of the present invention (such as up, down, left, right, front, and rear) are used only to explain the relative positional relationships, movement conditions, and the like among components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indications change accordingly.
In addition, descriptions such as "first" and "second" in the embodiments of the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may be combined with one another, but only insofar as the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered absent and outside the scope of protection claimed by the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of a method for debugging an image capturing apparatus according to the present invention.
Step S11: capture a target image with the image pickup device under test, and calculate the sharpness of the target image.
The image pickup device under test is a capture device still in development whose imaging performance may need to be optimized. It may include, but is not limited to, the camera of a smart mobile terminal, a single-lens reflex camera, a surveillance camera, a digital camera, or other capture equipment.
In this embodiment, the target image is captured by the image pickup device under test, and the focus data of the device is adjusted by analyzing the sharpness of the target image, thereby tuning its focusing performance. The target image may be of various types, such as a picture, a video, or a video frame.
The sharpness of the target image may be calculated with a gradient-based sharpness evaluation function, a frequency-domain evaluation function, an information-entropy-based sharpness evaluation function, or a statistics-based sharpness evaluation function. Gradient-based sharpness evaluation functions include, but are not limited to, the energy-gradient function, the Tenengrad function, the Brenner function, and the Laplacian function; statistics-based sharpness evaluation functions include, but are not limited to, the range function and the Vollath function. The sharpness of the target image may also be calculated from spatial-domain statistics such as variance and entropy, or from the frequency-domain modulation transfer function (MTF); or it may be obtained from the maximum, minimum, and average brightness of the pixels of the target image. The specific calculation method is not limited herein.
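As an illustration of the gradient-based family mentioned above, two common sharpness measures (the Laplacian-variance and Tenengrad functions) can be sketched in plain NumPy. This is a minimal sketch with conventional kernels, not the patent's prescribed implementation:

```python
import numpy as np

def _conv2(img, k):
    """Valid-mode 2-D cross-correlation using NumPy only (sufficient here,
    since the Laplacian kernel is symmetric and Sobel responses are squared)."""
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(h):
        for j in range(w):
            out += k[i, j] * img[i:i + out.shape[0], j:j + out.shape[1]]
    return out

def laplacian_variance(img):
    """Gradient-based sharpness: variance of the Laplacian response.
    Higher means sharper; a perfectly flat image scores 0."""
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    return float(_conv2(img.astype(float), lap).var())

def tenengrad(img):
    """Tenengrad sharpness: mean squared Sobel gradient magnitude."""
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = _conv2(img.astype(float), sx)
    gy = _conv2(img.astype(float), sx.T)
    return float(np.mean(gx ** 2 + gy ** 2))
```

Either score can serve as the "sharpness of the target image" compared against the preset sharpness: a crisp step edge scores high on both measures, while a flat or defocused frame scores near zero.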
Step S12: when the sharpness of the target image is less than the preset sharpness, perform scene analysis on the target image to obtain the scene of the target image.
The specific value of the preset sharpness may be set based on the debugging target, and is not limited herein.
When the sharpness of the target image is not less than the preset sharpness, the focusing capability of the image pickup device under test meets the requirement and no debugging is needed, so the focus data and run log of the target image can be output normally.
When the sharpness of the target image is less than the preset sharpness, the focusing capability of the image pickup device under test does not meet the requirement and debugging is needed; this embodiment uses the scene of the target image to determine which focus parameters of the device need to be debugged.
Scene analysis is performed on the target image to obtain its scene. The possible scenes of an image may be divided in advance. In one specific application, images may be divided into portrait scenes and non-portrait scenes; or into daytime scenes and night scenes; or into sky scenes, portrait scenes, and common scenes. The specific scene division is not limited herein.
In one specific application, scene recognition may be performed on the target image through a trained scene recognition model to obtain its scene. In another, the scene of the target image may be determined from its color brightness and feature data; for example, a corresponding color-brightness range and specific feature data are determined in advance for each scene, and the scene is judged by range comparison and feature matching. The specific scene determination method is not limited herein.
Step S13: adjust various items of focus data of the image pickup device under test based on the scene of the target image.
Different scenes favor different focus data. In one specific application, when the scenes comprise portrait and non-portrait scenes: if the scene of the target image is a portrait scene, the portrait frame size and the expansion frame size among the focus data can be adjusted specifically; if it is a non-portrait scene, the brightness parameter, the acceleration parameter, and the angular-velocity parameter can be adjusted specifically.
In another specific application, the scenes comprise sky scenes, portrait scenes, and common scenes: if the scene of the target image is a sky scene, the angle threshold and the brightness-level threshold among the focus data can be adjusted specifically; if it is a portrait scene, the portrait frame size and the expansion frame size; if it is a common scene, the focus frame size and the scene recognition threshold. The correspondence between scenes and focus data can be constructed based on the scene division and the actual situation; the above is merely illustrative and not limiting.
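The scene-to-parameter correspondence described above amounts to a lookup table. The sketch below uses the sky/portrait/common division of the second example; the parameter names are hypothetical labels, not identifiers from the patent:

```python
# Hypothetical mapping from a recognized scene to the focus parameters that
# should be adjusted for it (names are illustrative stand-ins).
SCENE_FOCUS_PARAMS = {
    "sky":      ["angle_threshold", "brightness_level_threshold"],
    "portrait": ["portrait_frame_size", "expansion_frame_size"],
    "common":   ["focus_frame_size", "scene_recognition_threshold"],
}

def target_focus_params(scene):
    """Return the list of focus parameters to adjust for a given scene."""
    try:
        return SCENE_FOCUS_PARAMS[scene]
    except KeyError:
        raise ValueError(f"unknown scene: {scene!r}")
```

Keeping the correspondence in one table makes it easy to extend the scene division (for example, adding a night scene with its own parameters) without touching the debugging loop.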
In this embodiment, the focus data that needs to be adjusted when the sharpness of the target image is less than the preset sharpness is related to the scene of the target image. Determining the focus data to adjust from the scene makes the adjustment more targeted and allows it to be automated, which improves the debugging efficiency of the image pickup device under test, reduces manual involvement in the debugging process, alleviates the excessive dependence of device debugging on manual work, and reduces the manual workload.
Through the above steps, the debugging method of this embodiment captures a target image with the image pickup device under test and calculates its sharpness; when the sharpness is less than the preset sharpness, performs scene analysis on the target image to obtain its scene; and adjusts various items of focus data of the image pickup device under test based on that scene. Determining the focus data to adjust from the scene makes the adjustment more targeted and allows it to be automated, which improves the debugging efficiency of the image pickup device under test, reduces manual involvement and workload in the debugging process, and, because scene-based selection of focus data suits a variety of photographic environments and image-processing applications, broadens the applicability of the debugging method.
Referring to fig. 2, fig. 2 is a flowchart illustrating another embodiment of a method for debugging an image capturing apparatus according to the present invention.
Step S21: capture a target image with the image pickup device under test, and calculate the sharpness of the target image.
In one specific application, the debugging method of this embodiment may be executed by an intelligent terminal. The intelligent terminal may include, but is not limited to, a microcomputer or a server, and may further include mobile devices such as a notebook computer or a tablet computer; this is not limited herein.
The intelligent terminal is connected to the image pickup device under test by a wired or wireless connection. The intelligent terminal sends a capture instruction to the device under test; the device captures a target image based on the instruction and sends it back to the intelligent terminal; the intelligent terminal calculates the sharpness of the target image, determines the adjusted target focus data based on the scene of the target image, and burns the compiled code into the device under test; testing then iterates until it completes or the device meets the standard.
In one specific application, after the intelligent terminal controls the device under test to capture the target image, it may acquire the focus information of the target image and the run log of the focusing process. The focus information includes the focus frame position, the final focus position, the focus mode, the focusing mechanism that was triggered, and the like. The focusing mechanism corresponds to the scene: each type of scene corresponds to one type of focusing mechanism, and the mechanism contains the focus data for that scene. For example, the focusing mechanism of the portrait scene contains the portrait frame size and the expansion frame size, and the focusing mechanism of the sky scene contains the angle threshold and the brightness-level threshold; other scenes follow the same pattern. The run log includes the focus frame ratio, the face focus frame ratio, the motor step table, and the like. Acquiring the focus data of the target image at this point makes it convenient to modify and adjust that data later for traversal.
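One way to picture the focus information and run log described above is as a small record type. The field names below are illustrative stand-ins; the patent does not fix a concrete data format:

```python
from dataclasses import dataclass, field

@dataclass
class FocusInfo:
    """Focus information captured alongside a target image.
    All field names are hypothetical, chosen to mirror the items listed
    in the description (focus frame position, final focus position,
    focus mode, triggered mechanism, run log)."""
    focus_frame_position: tuple   # (x, y, w, h) of the focus frame
    final_focus_position: int     # e.g. motor step of the final lens position
    focus_mode: str               # e.g. "contrast" or "pdaf"
    focus_mechanism: str          # scene-specific mechanism that was triggered
    run_log: dict = field(default_factory=dict)  # focus-frame ratios, motor step table, ...
```

Collecting these items in one record makes it straightforward to package and print them for manual analysis when automatic debugging is not applicable (as in the extreme-scene case discussed later in this embodiment).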
The sharpness of the target image may be calculated with a gradient-based sharpness evaluation function, a frequency-domain evaluation function, an information-entropy-based sharpness evaluation function, or a statistics-based sharpness evaluation function. Gradient-based sharpness evaluation functions include, but are not limited to, the energy-gradient function, the Tenengrad function, the Brenner function, and the Laplacian function; statistics-based sharpness evaluation functions include, but are not limited to, the range function and the Vollath function. The specific calculation method is not limited herein.
Step S22: when the sharpness of the target image is less than the preset sharpness, acquire the color brightness and feature data of the target image, and perform scene judgment on the target image based on the color brightness and the feature data to obtain the scene of the target image.
The specific value of the preset sharpness may be set based on the debugging target, and is not limited herein.
When the sharpness of the target image is not less than the preset sharpness, the focusing capability of the image pickup device under test meets the requirement and no debugging is needed, so the focus data and run log of the target image can be output normally.
When the sharpness of the target image is less than the preset sharpness, the focusing capability of the image pickup device under test does not meet the requirement and debugging is needed; this embodiment uses the scene of the target image to determine which focus parameters of the device need to be debugged.
The color brightness and feature data of the target image are acquired, and scene judgment is performed on the target image based on them to obtain its scene. The color brightness can be read directly from the attribute data of the target image.
In one specific application, when the scenes of an image are divided into sky scenes, portrait scenes, and common scenes, the scene may be judged as follows: judge whether the color brightness meets the preset requirement; when the color brightness meets the preset requirement, determine that the scene of the target image is a sky scene.
The preset requirement may include brightness and RGB color requirements: specifically, a brightness not less than LV140, a blue component not less than 200, and red and green components each not more than 50. That is, judge whether the brightness in the color brightness is not less than LV140, whether the blue component is not less than 200, and whether the red and green components are each not more than 50. When the brightness is not less than LV140, the blue component is not less than 200, and the red and green components are each not more than 50, the color brightness is determined to meet the preset requirement; when the brightness is less than LV140, the blue component is less than 200, the red component is greater than 50, or the green component is greater than 50, the color brightness is determined not to meet the preset requirement.
In other embodiments, the brightness threshold of the preset requirement may be LV120, LV130, LV145, LV150, etc.; the blue component threshold may be 190, 192, 205, 210, 220, etc.; and the red and green component thresholds may be 30, 40, 55, etc. These may be set based on the actual situation and are not limited herein.
When the color brightness does not meet the preset requirement, it is further judged whether the feature data contains portrait feature data. When portrait feature data is contained, the scene of the target image is determined to be a portrait scene; when it is not contained, the scene is determined to be a common scene.
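The sky → portrait → common decision chain described above can be sketched as a small classifier. The LV/RGB thresholds come from this embodiment; representing the feature data as a set of string tags (with portrait features prefixed `portrait_`) is purely an illustrative assumption:

```python
# Thresholds from this embodiment (other embodiments may choose different ones).
LV_MIN, BLUE_MIN, RG_MAX = 140, 200, 50

def meets_preset_requirement(lv, r, g, b):
    """Brightness >= LV140, blue >= 200, red and green each <= 50."""
    return lv >= LV_MIN and b >= BLUE_MIN and r <= RG_MAX and g <= RG_MAX

def classify_scene(lv, r, g, b, feature_data):
    """Decision chain: sky first, then portrait, else common.
    `feature_data` is a hypothetical set of feature tags, e.g. {"portrait_eye"}."""
    if meets_preset_requirement(lv, r, g, b):
        return "sky"
    if any(tag.startswith("portrait") for tag in feature_data):
        return "portrait"
    return "common"
```

Because the sky check is purely a threshold test on color brightness, it runs before the (typically costlier) feature-based portrait check, mirroring the order of the judgments above.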
Image feature extraction may be performed on the target image to obtain the feature data. Features may be extracted with a trained feature extraction model; alternatively, the image may be smoothed in scale space with a Gaussian blur kernel and the feature data computed by local derivative operations. A grayscale-based method may also be used: local changes in pixel grayscale are detected to find the pixels with the largest grayscale change, and differential operations yield the grayscale derivative around each pixel, locating the feature points where the grayscale changes most and thus producing the feature data.
The portrait feature data may include, but is not limited to, eye, mouth, hand, ankle, and shoulder feature data. When the feature data contains any portrait feature data, the scene of the target image is determined to be a portrait scene; when it contains none, the scene is determined to be a common scene. In one specific application, the scene division may further include an extreme scene in addition to the three types above: when the scene of the target image is judged to be a common scene but its color brightness exceeds the color-brightness range of a common scene, the scene is determined to be an extreme scene. Debugging an extreme scene has no reference value, so the target image is output directly without debugging, or the focus information of the target image and the values output during the focusing process are packaged and printed, prompting a senior engineer to debug manually from the packaged information.
Step S23: determine the target focus data to adjust based on the scene of the target image; traverse values of the target focus data within its range, recapturing the target image with the image pickup device under test after each adjustment, until the target image is sharp or the traversal is complete.
The target focus data to adjust is determined based on the scene of the target image. In one specific application: when the scene of the target image is a sky scene, the target focus data comprises the angle threshold and the brightness-level threshold; when the scene is a portrait scene, the target focus data comprises the portrait output frame size and the expansion frame size; when the scene is a common scene, the target focus data comprises the focus frame size and the scene recognition threshold. Because the target focus data can be determined automatically from the scene, manual involvement is reduced and the debugging method becomes more intelligent.
The focus data is adjusted in this embodiment as follows: the target focus data is traversed within its range, and the image pickup device under test recaptures the target image after each adjustment, until the image is sharp or the traversal is complete. The range of the target focus data is set based on its own characteristics. For example, when the target focus data is brightness with a range of 0-255, the 256 brightness levels are traversed in turn until the image is sharp or the traversal ends; when the target focus data is the focus frame size, whose range lies within the size of the target image, the focus frame is enlarged step by step until it matches the size of the target image or the image becomes sharp. This is not limited herein.
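The traversal strategy above (sweep one focus parameter across its range, recapture, and stop as soon as the image is sharp enough) can be sketched as follows. `apply_param` and `capture_and_score` are hypothetical stand-ins for the device interface, which the patent leaves open; the usage below drives them with a simulated sharpness curve:

```python
def sweep_parameter(values, apply_param, capture_and_score, target_sharpness):
    """Traverse candidate values for one focus parameter.

    apply_param(v)        -- burn value v into the device under test (stand-in)
    capture_and_score(v)  -- recapture the target image and return its sharpness
    Returns (value, sharpness, reached): stops early once target_sharpness is
    reached, otherwise returns the best value seen after a full traversal.
    """
    best = None
    for v in values:
        apply_param(v)
        score = capture_and_score(v)
        if best is None or score > best[1]:
            best = (v, score)
        if score >= target_sharpness:
            return v, score, True   # image became sharp; stop the traversal
    if best is None:
        raise ValueError("no candidate values to traverse")
    return best[0], best[1], False  # traversal finished without reaching target
```

For a brightness-style parameter this would be called with `values=range(256)`; for a focus-frame size, with progressively larger frame sizes up to the size of the target image.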
After the focus data is adjusted, the intelligent terminal compiles the adjusted focus data and burns the code into the image pickup device under test, which then executes step S21 again, iterating the debugging loop until the sharpness of the target image is not less than the preset sharpness.
Through the above steps, the debugging method of this embodiment comprises extracting image features, calculating pixel-level sharpness, automatically recognizing the image scene, and adjusting parameters based on the scene. It makes focus-data adjustment more targeted and automatic, which improves the debugging efficiency of the image pickup device under test and reduces manual involvement in the debugging process; and because scene-based selection of focus data suits a variety of photographic environments and image-processing applications, it broadens the applicability of the debugging method.
Referring to fig. 3, fig. 3 is a flowchart illustrating a debugging method of an image capturing apparatus according to another embodiment of the present invention.
Step S301: capturing a target image with the image capturing device under test.

Step S302: calculating the definition of the target image.

Step S303: judging whether the definition of the target image is smaller than a preset definition.

When the definition of the target image is smaller than the preset definition, executing step S304; otherwise, executing step S311.

Step S304: acquiring the color brightness and the feature data of the target image.

Step S305: judging whether the scene of the target image is a sky scene based on the color brightness.

When the target image is a sky scene, executing step S306; otherwise, executing step S307.

Step S306: modifying the angle threshold and the brightness level threshold.

Step S307: judging whether the scene of the target image is a portrait scene based on the feature data.

When the target image is a portrait scene, executing step S308; otherwise, executing step S309.

Step S308: modifying the portrait output frame size and the expansion frame size.

Step S309: judging whether the scene of the target image is a common scene.

When the target image is a common scene, executing step S310; otherwise, executing step S311.

Step S310: modifying the focus frame size and the scene recognition threshold.

Step S311: outputting the target image directly without debugging.

Step S312: burning the compiled code into the image capturing device under test.
For details of this embodiment, refer to the foregoing embodiment of fig. 2; they are not repeated herein. The debugging method of the image capturing device has good extensibility: in addition to the above debugging mode, key focusing information can be output, reducing the workload caused by manually analyzing problems during the debugging process.
Through the above steps, the debugging method of this embodiment determines the focusing data to be adjusted based on the scene, which improves the pertinence of focusing-data adjustment and realizes its automation, thereby improving the debugging efficiency of the image capturing device under test and reducing manual participation in the debugging process. The scene-based manner of determining the focusing data to be adjusted is applicable to various photographic environments and image processing applications, which broadens the application range of the debugging method.
Based on the same inventive concept, the present invention further provides an electronic device capable of implementing the debugging method of the image capturing device according to any of the above embodiments. Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of the electronic device provided by the present invention; the electronic device includes a processor 41 and a memory 42.
The processor 41 is configured to execute program instructions stored in the memory 42 to implement the steps of any of the above debugging methods of the image capturing device. In one specific implementation scenario, the electronic device may include, but is not limited to, a microcomputer and a server; the electronic device may also include a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 41 is configured to control itself and the memory 42 to implement the steps of any of the embodiments described above. The processor 41 may also be referred to as a CPU (Central Processing Unit). The processor 41 may be an integrated circuit chip with signal processing capabilities. The processor 41 may also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 41 may be jointly implemented by a plurality of integrated circuit chips.
According to the scheme, the pertinence of the adjustment of the focusing data can be improved, and the automation of the adjustment of the focusing data can be realized, so that the debugging efficiency of the tested camera device is improved, and the participation of manpower in the debugging process is reduced.
Based on the same inventive concept, the present invention further provides a computer-readable storage medium; please refer to fig. 5, which is a schematic structural diagram of an embodiment of the computer-readable storage medium provided by the present invention. The computer-readable storage medium 50 stores program data 51, and the program data 51 is used to implement any of the methods described above. In one embodiment, the computer-readable storage medium 50 includes: a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, or other media capable of storing program code.
In the several embodiments provided in the present invention, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium.
The foregoing is only the embodiments of the present invention, and therefore, the patent scope of the invention is not limited thereto, and all equivalent structures or equivalent processes using the descriptions of the present invention and the accompanying drawings, or direct or indirect application in other related technical fields, are included in the scope of the invention.

Claims (10)

1. A method of debugging an image pickup apparatus, the method comprising:
capturing a target image with the image capturing device under test, and calculating the definition of the target image;

when the definition of the target image is smaller than a preset definition, performing scene analysis on the target image to obtain a scene of the target image;

and adjusting a plurality of kinds of focusing data of the image capturing device under test based on the scene of the target image.
2. The method according to claim 1, wherein the step of performing scene analysis on the target image to obtain a scene of the target image includes:
acquiring the color brightness and the feature data of the target image, and performing scene determination on the target image based on the color brightness and the feature data to obtain the scene of the target image.
3. The method according to claim 2, wherein the step of performing scene determination on the target image based on the color brightness and the feature data to obtain a scene of the target image includes:
judging whether the color brightness meets a preset requirement or not;
when the color brightness meets the preset requirement, determining that the scene of the target image is a sky scene;
when the color brightness does not meet the preset requirement, judging whether the feature data includes portrait feature data;

when the portrait feature data is included, determining that the scene of the target image is a portrait scene;

and when the portrait feature data is not included, determining that the scene of the target image is a common scene.
4. The debugging method of an image capturing apparatus according to claim 3, wherein the step of judging whether the color brightness satisfies a preset requirement comprises:
judging whether the brightness in the color brightness is not less than LV140, whether the blue component in the color brightness is not less than 200, and whether the red and green components are respectively not more than 50;

when the brightness is not less than LV140, the blue component is not less than 200, and the red and green components are respectively not more than 50, determining that the color brightness meets the preset requirement;

when the brightness is less than LV140, or the blue component is less than 200, or the red component is greater than 50, or the green component is greater than 50, determining that the color brightness does not meet the preset requirement.
5. The method according to claim 3 or 4, wherein the step of adjusting the plurality of kinds of focus data of the image capturing apparatus under test based on the scene of the target image includes:
determining adjusted target focus data based on a scene of the target image;
and traversing and adjusting the target focusing data within the range of the target focusing data, until the target image captured by the image capturing device under test is clear or the traversal is finished.
6. The method according to claim 5, wherein the step of determining adjusted target focus data based on a scene of the target image includes:
when the scene of the target image is a sky scene, determining that the adjusted target focusing data comprises an angle threshold value and a brightness level threshold value;
when the scene of the target image is a portrait scene, determining that the adjusted target focusing data comprises portrait output frame size and expansion frame size;
and when the scene of the target image is a common scene, determining that the adjusted target focusing data comprises a focusing frame size and a scene recognition threshold value.
7. The method according to claim 1, wherein the step of performing scene analysis on the target image to obtain a scene of the target image further comprises:
and carrying out scene recognition on the target image through the trained scene recognition model to obtain the scene of the target image.
8. The debugging method of an image capturing apparatus according to claim 1, wherein the step of calculating the sharpness of the target image comprises:
and calculating the definition of the target image based on a gradient definition evaluation function, a frequency domain evaluation function, an information entropy-based definition evaluation function or a statistical definition evaluation function.
9. An electronic device, the electronic device comprising: a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the method of debugging an image pickup apparatus according to any one of claims 1 to 8.
10. A computer-readable storage medium storing program data executable to implement the debugging method of the image capturing apparatus according to any one of claims 1 to 8.
CN202311595723.6A 2023-11-27 2023-11-27 Method for debugging image pickup device, electronic apparatus, and storage medium Pending CN117857919A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311595723.6A CN117857919A (en) 2023-11-27 2023-11-27 Method for debugging image pickup device, electronic apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311595723.6A CN117857919A (en) 2023-11-27 2023-11-27 Method for debugging image pickup device, electronic apparatus, and storage medium

Publications (1)

Publication Number Publication Date
CN117857919A true CN117857919A (en) 2024-04-09

Family

ID=90547085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311595723.6A Pending CN117857919A (en) 2023-11-27 2023-11-27 Method for debugging image pickup device, electronic apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN117857919A (en)

Similar Documents

Publication Publication Date Title
EP3496383A1 (en) Image processing method, apparatus and device
CN107635098B (en) High dynamic range images noise remove method, device and equipment
KR101662846B1 (en) Apparatus and method for generating bokeh in out-of-focus shooting
CN109729332B (en) Automatic white balance correction method and system
CN104394392B (en) A kind of white balance adjusting method, device and terminal
WO2017215527A1 (en) Hdr scenario detection method, device, and computer storage medium
JP2020529086A (en) Methods and equipment for blurring preview photos and storage media
US11074742B2 (en) Image processing apparatus, image processing method, and storage medium
US10469812B2 (en) Projection display system, information processing apparatus, information processing method, and storage medium therefor
CN107018407B (en) Information processing device, evaluation chart, evaluation system, and performance evaluation method
US20220174222A1 (en) Method for marking focused pixel, electronic device, storage medium, and chip
CN104954627B (en) A kind of information processing method and electronic equipment
WO2020119454A1 (en) Method and apparatus for color reproduction of image
CN110769225B (en) Projection area obtaining method based on curtain and projection device
CN110740266A (en) Image frame selection method and device, storage medium and electronic equipment
CN107491718A (en) The method that human hand Face Detection is carried out under different lightness environment
CN111917986A (en) Image processing method, medium thereof, and electronic device
CN111160340B (en) Moving object detection method and device, storage medium and terminal equipment
CN117857919A (en) Method for debugging image pickup device, electronic apparatus, and storage medium
CN111222419A (en) Object identification method, robot and computer readable storage medium
CN108781280B (en) Test method, test device and terminal
CN113706429B (en) Image processing method, device, electronic equipment and storage medium
CN105141857A (en) Image processing method and device
CN117857774A (en) Test method, electronic device and storage medium
CN115484447B (en) Projection method, projection system and projector based on high color gamut adjustment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination