CN116128782A - Image generation method, device, equipment and storage medium - Google Patents

Image generation method, device, equipment and storage medium

Info

Publication number
CN116128782A
CN116128782A (Application No. CN202310417262.7A)
Authority
CN
China
Prior art keywords
image
target
fused
images
object distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310417262.7A
Other languages
Chinese (zh)
Inventor
侍海东
陈静
毕文波
何银军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Suyingshi Image Software Technology Co ltd
Original Assignee
Suzhou Suyingshi Image Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Suyingshi Image Software Technology Co ltd filed Critical Suzhou Suyingshi Image Software Technology Co ltd
Priority to CN202310417262.7A priority Critical patent/CN116128782A/en
Publication of CN116128782A publication Critical patent/CN116128782A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20092: Interactive image processing based on input by user
    • G06T 2207/20104: Interactive definition of region of interest [ROI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20112: Image segmentation details
    • G06T 2207/20132: Image cropping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Abstract

The invention discloses an image generation method, an image generation device, image generation equipment and a storage medium. The method comprises the following steps: in response to an image acquisition request, determining a target resolution of a target acquired image and an actual object distance between image acquisition equipment and a target object, and determining an object distance change range according to the actual object distance; according to the target resolution, adopting a corresponding CMOS sensor to control a liquid lens to perform continuous zooming and image acquisition within the object distance change range so as to obtain at least two images to be fused; preprocessing the at least two images to be fused by adopting a preset FPGA control unit to generate a target split image; and processing the at least two images to be fused in parallel based on a preset graphics processor (GPU), combining the target split image to generate a target fusion image. According to the technical scheme, the quality of the generated fusion image can be effectively improved, the image fusion speed can be increased, and a high-depth-of-field, high-definition image can be obtained.

Description

Image generation method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image technologies, and in particular, to an image generating method, apparatus, device, and storage medium.
Background
With the continuous development of image technology, automatic image recognition and detection in certain scenes, such as code reading, character detection and PCB board detection, where the object distance varies between high and low or far and near, easily suffers from poor image quality caused by unclear focusing and excessive object distance variation, which affects the efficiency of subsequent image processing.
Therefore, how to effectively use a liquid lens to collect continuously focused images and use a graphics processor to rapidly and accurately process images at different focal lengths to generate a high-quality fusion image is a pressing problem to be solved.
Disclosure of Invention
The invention provides an image generation method, apparatus, device and storage medium, which can effectively improve the quality of a generated fusion image, increase the image fusion speed, and obtain a high-depth-of-field, high-definition image.
According to an aspect of the present invention, there is provided an image generation method including:
in response to an image acquisition request, determining a target resolution of a target acquired image and an actual object distance between image acquisition equipment and a target object, and determining an object distance change range according to the actual object distance;
According to the target resolution, a corresponding CMOS sensor is adopted to control a liquid lens to perform continuous zooming and image acquisition in the range of object distance variation so as to obtain at least two images to be fused;
preprocessing at least two images to be fused by adopting a preset FPGA control unit to generate a target split image;
based on a preset graphic processor GPU, processing at least two images to be fused in parallel, and combining the target split images to generate a target fusion image.
According to another aspect of the present invention, there is provided an image generating apparatus including:
the determining module is used for responding to the image acquisition request, determining the target resolution of the target acquired image and the actual object distance between the image acquisition equipment and the target object, and determining the object distance change range according to the actual object distance;
the acquisition module is used for controlling the liquid lens to carry out continuous zooming and image acquisition within the range of object distance variation by adopting a corresponding CMOS sensor according to the target resolution so as to acquire at least two images to be fused;
the preprocessing module is used for preprocessing at least two images to be fused by adopting a preset FPGA control unit to generate a target split image;
The generating module is used for processing at least two images to be fused in parallel based on a preset Graphic Processor (GPU), and generating a target fusion image by combining the target split image.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image generation method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute the image generation method according to any one of the embodiments of the present invention.
According to the technical scheme of the invention, in response to an image acquisition request, the target resolution of a target acquired image and the actual object distance between the image acquisition equipment and a target object are determined, and the object distance change range is determined according to the actual object distance; according to the target resolution, a corresponding CMOS sensor is adopted to control a liquid lens to perform continuous zooming and image acquisition within the object distance change range so as to obtain at least two images to be fused; a preset FPGA control unit is adopted to preprocess the at least two images to be fused to generate a target split image; and based on a preset graphics processor (GPU), the at least two images to be fused are processed in parallel and combined with the target split image to generate a target fusion image. The liquid lens is adopted to collect continuously focused images, and the graphics processor is utilized to rapidly and accurately process the images at different focal lengths, so that the quality of the generated fusion image is effectively improved and a high-depth-of-field, high-definition, high-quality image is obtained; meanwhile, the parallel processing capability of the GPU effectively increases the image fusion speed, so that the fusion image can be obtained rapidly.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an image generating method according to a first embodiment of the present invention;
fig. 2 is a flowchart of an image generating method according to a second embodiment of the present invention;
fig. 3 is a block diagram of an image generating apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," "target," "candidate," "alternative," and the like in the description and claims of the invention and in the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the related art, a vision system composed of an industrial camera, an ordinary lens and a PC is often adopted to generate detection images. This approach performs well only in scenes with relatively small object distance variation, and can hardly satisfy the wide focusing range required when the object distance varies greatly.
Example 1
Fig. 1 is a flowchart of an image generating method according to a first embodiment of the present invention; the present embodiment is applicable to a case where images of different focal lengths are acquired by using a liquid lens and subjected to fusion processing to generate a high-quality image of high depth of field and high definition, and the method may be performed by an image generating apparatus, which may be implemented in hardware and/or software, and the image generating apparatus may be configured in an electronic device, such as an image generating system or a vision system, and executed by a processor of the image generating system or the vision system. As shown in fig. 1, the image generation method includes:
s101, in response to an image acquisition request, determining target resolution of a target acquired image and actual object distance between image acquisition equipment and a target object, and determining an object distance change range according to the actual object distance.
The image acquisition request refers to a request for acquiring an image containing a target object in the scene where the image acquisition device is located. The image acquisition request may be issued by a related device to the vision system or the image generation system during code reading, character detection or PCB board detection. The target acquired image refers to the image corresponding to the image acquisition request, and is the target fusion image finally generated by the method; it may be, for example, an image of a PCB board. The target resolution refers to the resolution requirement that the target acquired image needs to satisfy; different target resolutions correspond to different CMOS (Complementary Metal Oxide Semiconductor) sensors. The target object refers to an object, located in the scene of the image acquisition device, whose image is to be acquired; for example, the target object may be a PCB board. The actual object distance refers to the actual distance between the image acquisition device and the target object. The object distance change range refers to the distance range between the image acquisition device and the target object (or between the liquid lens on the image acquisition device and the target object) that needs to be maintained when acquiring images of the target object.
Optionally, if the processor of the vision system detects an image detection instruction sent by a related person, the processor may consider that an image acquisition request is detected, and in response to the image acquisition request, the processor may analyze the image acquisition request and extract resolution requirement information contained in the image acquisition request, so as to determine a target resolution of the target acquired image.
Alternatively, a preset actual object distance between the image acquisition device and the target object may be directly acquired; based on a preset deviation value, the difference between the actual object distance and the deviation value is taken as the lower bound of the object distance change range, and their sum as the upper bound, thereby determining the object distance change range.
It should be noted that the sampling speed of the image acquisition device may reach 290 fps, and the vision system is compatible with CMOS sensors of various resolutions, which enriches the resolution types of the generated images and satisfies different image resolution requirements in different scenes.
S102, according to the target resolution, a corresponding CMOS sensor is adopted to control the liquid lens to perform continuous zooming and image acquisition within the range of object distance variation so as to acquire at least two images to be fused.
The images to be fused are images containing the target object that are acquired by the liquid lens at different focal lengths, and each image to be fused is segmented image data, relative to the whole image, of the entire target object.
Optionally, the CMOS sensor corresponding to the target resolution is determined based on a preset one-to-one correspondence between resolutions and CMOS sensors, and image acquisition is performed with that sensor. Specifically, controlling the liquid lens to perform continuous zooming and image acquisition within the object distance change range includes: controlling the liquid lens to continuously zoom within the object distance change range by adjusting the image acquisition device to different voltage values, and performing image acquisition after each zoom.
Optionally, a plurality of CMOS sensors supporting different resolutions may be configured in advance in an image acquisition unit of the vision system, so that image acquisition can be performed by using corresponding CMOS sensors according to the target resolution. By the mode, the circuit board can be prevented from being replaced, and resource waste caused by replacement of equipment elements is reduced.
Optionally, the liquid lens can be controlled to perform continuous zooming and image acquisition within the range of object distance variation, so that a plurality of images with different focal lengths can be acquired, namely at least two images to be fused are acquired.
Optionally, controlling the liquid lens to continuously zoom within the object distance change range by adjusting the image acquisition device to different voltage values includes: determining an initial focus plane according to the median value of the object distance change range, and adjusting the focus plane of the liquid lens to the initial focus plane; and setting upper and lower focus offsets of the initial focus plane according to the relation between the object distance change range and the initial focus plane, and controlling the continuous zooming of the liquid lens within the object distance change range according to the upper and lower focus offsets.
Optionally, the position corresponding to the median of the object distance change range may be used as the initial focus plane position of the liquid lens. Further, the deviations of the initial focus plane position from the upper and lower bounds of the object distance change range are set as the upper and lower focus offsets of the initial focus plane, respectively. Finally, on the condition that the focus plane of the liquid lens does not exceed the upper and lower focus offset range, the liquid lens is controlled to shift its focus plane at a preset time interval; that is, continuous zooming of the liquid lens within the object distance change range is controlled.
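A minimal sketch of this focus-plane scheduling, assuming evenly spaced zoom steps (the step count and the mapping from focus positions to lens voltage values are assumptions, since only the offsets themselves are specified above):

```python
# Focus positions swept by the liquid lens within the object distance range.
def focus_positions(actual_object_distance, deviation, steps=10):
    assert steps >= 2
    lower = actual_object_distance - deviation   # lower bound of the range
    upper = actual_object_distance + deviation   # upper bound of the range
    initial = (lower + upper) / 2.0              # initial focus plane (median)
    up_offset = upper - initial                  # upper focus offset
    down_offset = initial - lower                # lower focus offset
    span = up_offset + down_offset
    # Evenly spaced focus planes that never exceed the offset range.
    return [initial - down_offset + i * span / (steps - 1) for i in range(steps)]
```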
It should be noted that, in code reading, character detection and PCB board detection scenes, the object distance between the target object and the image acquisition device varies greatly, so the required focusing range is also wide; the liquid lens is therefore adopted for rapid zooming to ensure the sharpness of the acquired images as much as possible.
Optionally, after the processor of the vision system controls the image acquisition unit to acquire at least two images to be fused, the image acquisition unit may send all the images to be fused to the FPGA control unit of the vision system.
S103, preprocessing at least two images to be fused by adopting a preset FPGA control unit to generate a target split image.
Optionally, a preset FPGA control unit is adopted to preprocess at least two images to be fused, and a target split image is generated, including: determining a clear region in an image to be fused by adopting a preset FPGA control unit, and based on an ROI clipping technology, clipping the clear region of the image to be fused to determine at least two clipping images; generating an original split image according to at least two clipping images, and performing Gaussian filtering processing and absolute difference post-processing on the original split image to generate a target split image.
Optionally, a preset FPGA control unit may be adopted to invoke a pre-trained target detection model and perform target detection on each image to be fused; the image region of the image to be fused in which the target can be detected is determined as the clear region according to the detection result, and the clear region is cropped out of the image to be fused based on the ROI clipping technique, so as to obtain at least two clipping images.
Optionally, the original split image may be processed by setting a filter kernel, performing convolution and eliminating Gaussian noise based on a preset rule; the processed original split image is then differenced with the original split image and the absolute value is taken, that is, absolute-difference post-processing is performed; finally, the original split image and the processed image are integrated according to the relation between the absolute-value result and a preset absolute-value range, thereby generating the target split image.
Optionally, the determined at least two clipping images may be integrated based on a preset rule to generate the original split image. It should be noted that the original split image obtained in this way has higher definition and reflects the condition of the target object more fully.
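As a hedged illustration of this preprocessing chain (ROI cropping of the clear regions, integration into the original split image, Gaussian filtering, and absolute-difference post-processing), the OpenCV sketch below assumes known clear-region boxes of equal height; the actual integration rule used on the FPGA is not specified:

```python
# Approximation of the FPGA preprocessing chain described above.
import cv2

def preprocess_to_split_image(images, clear_rois, ksize=5):
    # ROI cropping: cut the clear region out of each image to be fused.
    crops = [img[y:y + h, x:x + w]
             for img, (x, y, w, h) in zip(images, clear_rois)]
    # Integrate crops into the original split image; horizontal tiling is a
    # placeholder for the preset rule (equal crop heights assumed).
    original_split = cv2.hconcat(crops)
    # Gaussian filtering with an assumed kernel size.
    blurred = cv2.GaussianBlur(original_split, (ksize, ksize), 0)
    # Absolute-difference post-processing against the filtered result.
    return cv2.absdiff(original_split, blurred)
```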
S104, processing at least two images to be fused in parallel based on a preset Graphic Processor (GPU), and combining the target split images to generate a target fusion image.
The graphics processor (Graphics Processing Unit, GPU) is a processor preset in the vision system for performing image processing.
Optionally, filtering and key region marking can be performed on each image to be fused in parallel based on the multiple cores of the graphics processor (GPU); a target fusion image is then generated according to the marking results of each image to be fused and the target split image.
By utilizing the GPU's capability for parallel processing, images at different focal lengths can be processed in parallel at high speed, and a fusion image with large depth of field and high definition can be output rapidly.
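The patent performs this step on the GPU's many cores; purely as a structural illustration, the sketch below uses a CPU process pool as a stand-in to show the per-image parallelism (filter_fn and mark_fn are hypothetical per-image stages):

```python
# Parallel per-image processing skeleton. A process pool stands in for the
# GPU here; filter_fn and mark_fn must be top-level (picklable) functions.
from concurrent.futures import ProcessPoolExecutor

def process_images_in_parallel(images, filter_fn, mark_fn):
    with ProcessPoolExecutor() as pool:
        filtered = list(pool.map(filter_fn, images))  # e.g. guided filtering
        marked = list(pool.map(mark_fn, filtered))    # e.g. key region marking
    return filtered, marked
```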
Optionally, based on a preset graphics processor GPU, processing at least two images to be fused in parallel, and combining the target split images to generate a target fusion image, including: based on a preset Graphic Processor (GPU), respectively performing guide filtering processing and key region marking on at least two images to be fused in parallel; preliminarily synthesizing a fusion image according to the corresponding relation between the marking area of each image to be fused and the target split image; and performing secondary guide filtering processing and addition operation on the primary synthesized fusion image to obtain a target fusion image. Wherein the target fusion image is a more accurate image than the initially synthesized fusion image.
The guided filtering process may be performed, for example, as follows: for each image to be fused, the image to be fused is taken as the guide image and the target split image after absolute-difference post-processing is taken as the input image; the linear correlation factors are calculated from the means and the variance of the guide image and the input image, and finally the guided-filter output image, namely the filtered image to be fused, is determined by utilizing the linear correlation factors and the guide image.
For example, the guided filtering applies a local linear model on the guide image to realize different linear transformations. Let I be the pixel matrix of the guide image, P the pixel matrix of the input image, and Q the pixel matrix of the guided-filter output image. For each local window $\omega_k$ centered at pixel $k$, the mean of the guide image I is

$$\mu_k=\frac{1}{|\omega|}\sum_{i\in\omega_k}I_i,$$

the variance of the guide image I is

$$\sigma_k^2=\frac{1}{|\omega|}\sum_{i\in\omega_k}(I_i-\mu_k)^2,$$

and the mean of the input image P is

$$\bar{P}_k=\frac{1}{|\omega|}\sum_{i\in\omega_k}P_i.$$

The first linear correlation factor a and the second linear correlation factor b may be determined by the following formulas:

$$a_k=\frac{\frac{1}{|\omega|}\sum_{i\in\omega_k}I_iP_i-\mu_k\bar{P}_k}{\sigma_k^2+\varepsilon},\qquad b_k=\bar{P}_k-a_k\mu_k,$$

where $\varepsilon$ is a regularization parameter. Further, based on the first linear correlation factor a and the second linear correlation factor b, the averages of the factors over all windows overlapping pixel $i$ of the guide image and the input image are computed:

$$\bar{a}_i=\frac{1}{|\omega|}\sum_{k\in\omega_i}a_k,\qquad \bar{b}_i=\frac{1}{|\omega|}\sum_{k\in\omega_i}b_k.$$

Finally, the guided-filter output image Q may be determined according to the following formula:

$$Q_i=\bar{a}_iI_i+\bar{b}_i.$$
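Putting these formulas together, the following is a minimal NumPy sketch of one guided filtering pass (the window size r and the regularization eps are assumed parameters, and scipy's box filter stands in for whatever windowed averaging the GPU implementation uses):

```python
# Minimal guided filter sketch following the formulas above.
# I: guide image, P: input image (2D float arrays scaled to [0, 1]).
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, P, r=8, eps=1e-3):
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)                    # mu_k
    mean_P = uniform_filter(P, size)                    # P-bar_k
    mean_IP = uniform_filter(I * P, size)
    var_I = uniform_filter(I * I, size) - mean_I ** 2   # sigma_k^2

    a = (mean_IP - mean_I * mean_P) / (var_I + eps)     # first factor a_k
    b = mean_P - a * mean_I                             # second factor b_k

    mean_a = uniform_filter(a, size)                    # a-bar_i over overlapping windows
    mean_b = uniform_filter(b, size)                    # b-bar_i
    return mean_a * I + mean_b                          # Q_i = a-bar_i * I_i + b-bar_i
```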
Illustratively, the principle of key region marking may be: in each image to be fused, the pixel positions of the clear-contour pixel regions (namely the clear regions) of the target split image are marked with the sequence index value corresponding to that image to be fused.
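A small sketch of this marking rule, assuming boolean clear-region masks (one per image to be fused) are already available; the mask source is an assumption here:

```python
# Label map construction for key region marking: each pixel stores the
# sequence index of the image to be fused whose clear region covers it.
import numpy as np

def mark_key_regions(clear_masks):
    labels = np.zeros(clear_masks[0].shape, dtype=np.int32)
    for k, mask in enumerate(clear_masks, start=1):  # 1-based sequence index
        labels[mask] = k                             # later images overwrite earlier ones
    return labels
```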
Optionally, for each image to be fused, guiding filtering processing may be first performed, a key area of the image subjected to guiding filtering processing may be further determined based on a preset rule, and marking is performed, after the key area is marked, a correspondence between the marked area (i.e., the key area) and the target split image in each image to be fused may be determined, and the fused image may be primarily synthesized through analysis of the correspondence.
Optionally, determining the correspondence between the marker region of the image to be fused and the target split image includes: and carrying out minimum connected region processing and distance transformation on each image to be fused, and determining the corresponding relation between the marking region of the image to be fused and the target split image according to the processing result.
Optionally, all contours of each image to be fused and their corresponding matrices may be calculated, smaller interference areas are filtered out according to a set threshold, and the retained connected pixel regions are filled to obtain a first marker image A; that is, the minimum connected region processing is performed.
Alternatively, the distance from each pixel in the image to be fused to the nearest zero pixel can be calculated through distance transformation, and compared with the distance data of the processed target split image, the corresponding relation between the marking area of the image to be fused and the target split image is determined.
Alternatively, the smaller pixel regions after the distance transformation may be obtained according to the correspondence between the marker region of each image to be fused and the target split image, and marked as a second marker image B. The first marker image A and the second marker image B are cycled through in sequence, so that the positions, in the fused image, of the pixel regions corresponding to the clear pixel regions of each image to be fused are marked; the image generated by performing an AND operation on the first marker image A and the second marker image B is determined as the preliminarily synthesized fusion image. That is, the fusion image is preliminarily synthesized according to the correspondence between the marker region of each image to be fused and the target split image.
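The following OpenCV sketch illustrates this step under stated assumptions: 8-bit single-channel masks, an illustrative minimum-area threshold, and a simple half-maximum rule for the "smaller pixel region" after the distance transform, which the text does not quantify:

```python
# Minimum connected region processing, distance transform, and AND-based
# preliminary fusion mask, per the description above.
import cv2
import numpy as np

def preliminary_fusion_mask(mask, min_area=100):
    # Keep only sufficiently large contours and fill them: first marker image A.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    A = np.zeros_like(mask)
    for c in contours:
        if cv2.contourArea(c) >= min_area:           # filter small interference areas
            cv2.drawContours(A, [c], -1, 255, thickness=cv2.FILLED)
    # Distance from each pixel to the nearest zero pixel.
    dist = cv2.distanceTransform(A, cv2.DIST_L2, 3)
    # Second marker image B: the smaller pixel region after the transform
    # (half-maximum cut is an assumption, not a value from the patent).
    B = np.where((dist > 0) & (dist < 0.5 * dist.max()), 255, 0).astype(np.uint8)
    return cv2.bitwise_and(A, B)                     # AND operation of A and B
```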
Optionally, after the fusion image is preliminarily synthesized, secondary guided filtering and an addition operation may be performed on it to obtain the target fusion image. Specifically, the secondary guided filtering may take the preliminarily synthesized fusion image as the guide image and each image processed by the minimum-connected-region filtering as an input image; the calculation principle is similar to that of the first guided filtering. The multiple output images of the secondary guided filtering are then summed: the weight of each image's pixels in the fused image is calculated, and the pixels are added according to these weights (namely the addition operation) to obtain the target fusion image.
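A minimal sketch of the closing weighted addition, assuming per-pixel weight maps have already been derived from the marking results (the exact weighting rule is not specified above):

```python
# Per-pixel weighted addition of the secondary-guided-filter outputs.
import numpy as np

def weighted_fusion(images, weights):
    w = np.stack(weights).astype(np.float64)
    w /= w.sum(axis=0, keepdims=True) + 1e-12   # normalize weights at each pixel
    stack = np.stack(images).astype(np.float64)
    return (w * stack).sum(axis=0)              # target fusion image
```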
Optionally, after the target fusion image is generated, it may be transmitted to an external device through a preset 10-gigabit Ethernet port, thereby responding to the image acquisition request.
According to the technical scheme of the invention, in response to an image acquisition request, the target resolution of a target acquired image and the actual object distance between the image acquisition equipment and a target object are determined, and the object distance change range is determined according to the actual object distance; according to the target resolution, a corresponding CMOS sensor is adopted to control a liquid lens to perform continuous zooming and image acquisition within the object distance change range so as to obtain at least two images to be fused; a preset FPGA control unit is adopted to preprocess the at least two images to be fused to generate a target split image; and based on a preset graphics processor (GPU), the at least two images to be fused are processed in parallel and combined with the target split image to generate a target fusion image. The liquid lens is adopted to collect continuously focused images, and the graphics processor is utilized to rapidly and accurately process the images at different focal lengths, so that the quality of the generated fusion image is effectively improved and a high-depth-of-field, high-definition, high-quality image is obtained; meanwhile, the parallel processing capability of the GPU effectively increases the image fusion speed, so that the fusion image can be obtained rapidly.
Example 2
Fig. 2 is a flowchart of an image generating method according to a second embodiment of the present invention. Based on the above embodiments, this embodiment gives a preferred example of generating a high-quality fusion image through the interaction of modules of a vision system, such as an image acquisition unit, a processor, an FPGA (Field Programmable Gate Array) control unit, a zoom control unit, an NVMe (Non-Volatile Memory Express) storage unit, a Type-C (USB Type-C) port, a Micro USB port, a 10-gigabit Ethernet port and a fusion image output. The processor comprises a CPU and a GPU. The Type-C port can be used for connecting external devices, the Micro USB port is used for burning program updates, and the NVMe storage unit is used for storing programs, image files and other information.
As illustrated in fig. 2, the method comprises the following steps:
in response to the image acquisition request, the processor sets the object distance change range and a time change coefficient by utilizing the zoom control unit, adjusts the voltage gear and changes the curvature of the liquid interface in the lens, so that the lens zooms continuously within the object distance change range; after each zoom, a zoom execution signal is sent to the image acquisition unit;
The image acquisition unit starts to acquire image data after receiving the zoom execution signal sent by the processor, and the corresponding CMOS sensor transmits the large amount of acquired data to the FPGA (Field Programmable Gate Array) through LVDS (Low-Voltage Differential Signaling).
After receiving the block data from the CMOS sensor, the FPGA performs ROI (Region of Interest) cropping according to the set requirement, and the cropped data is placed into a DDR (Double Data Rate SDRAM) memory for caching and transfer. Meanwhile, the FPGA performs image stitching on the DDR data; the stitched data undergoes Gaussian filtering and absolute-difference post-processing, and the original image data (images acquired at different focal lengths) together with the target image data after absolute-difference processing (namely the target split image) are compressed and transmitted to the CPU through PCIE (Peripheral Component Interconnect Express).
After the CPU acquires the stored data, it transfers the data to the GPU, and the GPU performs concurrent computation with its multiple cores: guided filtering and key region marking are performed on each image of the original image data, minimum connected region processing and distance transformation are performed on the marked image data, the correspondence between the marked regions and the target image data region is determined, and a fusion image is preliminarily synthesized;
Secondary guided filtering, difference comparison and addition operations are performed on the preliminarily synthesized fusion image to obtain an accurate fusion image; finally, the GPU copies the fused image to a storage unit on the CPU side, and the processor transmits the fused image to the external device through the 10-gigabit Ethernet port.
It should be noted that, by modularizing and integrally developing the vision system, the complexity and development difficulty of the vision system can be effectively reduced, and the project cost can be saved.
The vision system provided by the invention controls multi-focal-length image data acquisition, performs high-speed parallel processing and fusion with the GPU processor, and finally outputs the high-definition fusion image rapidly through a 10-gigabit Ethernet port, thereby effectively solving the image quality problem of unclear focusing caused by object distance variation.
Example 3
Fig. 3 is a block diagram of an image generating apparatus according to a third embodiment of the present invention; the image generating device provided by the embodiment of the invention can be suitable for the situation that images with different focal lengths are acquired by adopting a liquid lens and are subjected to fusion processing so as to generate high-quality images with high depth of field and high definition, and can be realized in a hardware and/or software mode and configured in equipment with specific image generating functions, such as an image generating system or a vision system, and executed by a processor of the image generating system or the vision system. As shown in fig. 3, the apparatus specifically includes:
A determining module 301, configured to determine a target resolution of a target acquired image and an actual object distance between the image acquisition device and a target object in response to the image acquisition request, and determine an object distance change range according to the actual object distance;
the acquisition module 302 is configured to control, according to a target resolution, the liquid lens to perform continuous zooming and image acquisition within a range of object distance variation by using a corresponding CMOS sensor, so as to acquire at least two images to be fused;
the preprocessing module 303 is configured to preprocess at least two images to be fused by using a preset FPGA control unit, and generate a target split image;
the generating module 304 is configured to process at least two images to be fused in parallel based on a preset graphics processor GPU, and combine the target split image to generate a target fusion image.
According to the technical scheme of the invention, in response to an image acquisition request, the target resolution of a target acquired image and the actual object distance between the image acquisition equipment and a target object are determined, and the object distance change range is determined according to the actual object distance; according to the target resolution, a corresponding CMOS sensor is adopted to control a liquid lens to perform continuous zooming and image acquisition within the object distance change range so as to obtain at least two images to be fused; a preset FPGA control unit is adopted to preprocess the at least two images to be fused to generate a target split image; and based on a preset graphics processor (GPU), the at least two images to be fused are processed in parallel and combined with the target split image to generate a target fusion image. The liquid lens is adopted to collect continuously focused images, and the graphics processor is utilized to rapidly and accurately process the images at different focal lengths, so that the quality of the generated fusion image is effectively improved and a high-depth-of-field, high-definition, high-quality image is obtained; meanwhile, the parallel processing capability of the GPU effectively increases the image fusion speed, so that the fusion image can be obtained rapidly.
Further, the acquiring module 302 may include:
the acquisition unit is used for controlling the liquid lens to continuously zoom in the object distance change range by adjusting the image acquisition equipment to be in different voltage values, and carrying out image acquisition after each zooming.
Further, the acquisition unit is specifically configured to:
determining an initial focus plane according to the median value of the object distance change range, and adjusting the focus plane of the liquid lens to the initial focus plane;
and setting upper and lower focus offsets of the initial focus plane according to the relation between the object distance change range and the initial focus plane, and controlling the continuous zooming of the liquid lens within the object distance change range according to the upper and lower focus offsets.
Further, the preprocessing module 303 is specifically configured to:
determining a clear region in an image to be fused by adopting a preset FPGA control unit, and based on an ROI clipping technology, clipping the clear region of the image to be fused to determine at least two clipping images;
generating an original split image according to at least two clipping images, and performing Gaussian filtering processing and absolute difference post-processing on the original split image to generate a target split image.
Further, the generating module 304 is specifically configured to:
Based on a preset Graphic Processor (GPU), respectively performing guide filtering processing and key region marking on at least two images to be fused in parallel;
preliminarily synthesizing a fusion image according to the corresponding relation between the marking area of each image to be fused and the target split image;
and performing secondary guide filtering processing and addition operation on the primary synthesized fusion image to obtain a target fusion image.
Further, the generating module 304 is further configured to:
and carrying out minimum connected region processing and distance transformation on each image to be fused, and determining the corresponding relation between the marking region of the image to be fused and the target split image according to the processing result.
Further, the device is also used for:
and transmitting the generated target fusion image to an external device through a preset 10-gigabit Ethernet port, thereby responding to the image acquisition request.
Example 4
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. Fig. 4 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the respective methods and processes described above, such as an image generation method.
In some embodiments, the image generation method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the image generation method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the image generation method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems On Chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service are overcome.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. An image generation method, comprising:
in response to an image acquisition request, determining a target resolution of a target acquired image and an actual object distance between image acquisition equipment and a target object, and determining an object distance change range according to the actual object distance;
according to the target resolution, a corresponding CMOS sensor is adopted to control a liquid lens to perform continuous zooming and image acquisition in the range of object distance variation so as to obtain at least two images to be fused;
Preprocessing at least two images to be fused by adopting a preset FPGA control unit to generate a target split image;
based on a preset graphic processor GPU, processing at least two images to be fused in parallel, and combining the target split images to generate a target fusion image.
2. The method of claim 1, wherein controlling the liquid lens for continuous zooming and image capturing over a range of object distances comprises:
and controlling the liquid lens to continuously zoom in the object distance change range by adjusting the image acquisition equipment to be at different voltage values, and carrying out image acquisition after each zooming.
3. The method of claim 2, wherein controlling the continuous zooming of the liquid lens in the object distance variation range by adjusting the image capturing device to be at different voltage values comprises:
determining an initial focus plane according to the median value of the object distance change range, and adjusting the focus plane of the liquid lens to the initial focus plane;
and setting upper and lower focus offsets of the initial focus plane according to the relation between the object distance change range and the initial focus plane, and controlling the continuous zooming of the liquid lens within the object distance change range according to the upper and lower focus offsets.
4. The method of claim 1, wherein preprocessing at least two images to be fused using a preset FPGA control unit to generate a target split image comprises:
determining a clear region in an image to be fused by adopting a preset FPGA control unit, and based on an ROI clipping technology, clipping the clear region of the image to be fused to determine at least two clipping images;
generating an original split image according to at least two clipping images, and performing Gaussian filtering processing and absolute difference post-processing on the original split image to generate a target split image.
5. The method of claim 1, wherein processing at least two images to be fused in parallel based on a preset graphics processor GPU and combining the target split images to generate a target fusion image comprises:
based on a preset Graphic Processor (GPU), respectively performing guide filtering processing and key region marking on at least two images to be fused in parallel;
preliminarily synthesizing a fusion image according to the corresponding relation between the marking area of each image to be fused and the target split image;
and performing secondary guide filtering processing and addition operation on the primary synthesized fusion image to obtain a target fusion image.
6. The method of claim 5, wherein determining the correspondence between the marking region of the image to be fused and the target split image comprises:
and carrying out minimum connected region processing and distance transformation on each image to be fused, and determining the corresponding relation between the marking region of the image to be fused and the target split image according to the processing result.
7. The method as recited in claim 1, further comprising:
and transmitting the generated target fusion image to an external device through a preset 10-gigabit Ethernet port, so as to respond to the image acquisition request.
8. An image generating apparatus, comprising:
the determining module is used for responding to the image acquisition request, determining the target resolution of the target acquired image and the actual object distance between the image acquisition equipment and the target object, and determining the object distance change range according to the actual object distance;
the acquisition module is used for controlling the liquid lens to carry out continuous zooming and image acquisition within the range of object distance variation by adopting a corresponding CMOS sensor according to the target resolution so as to acquire at least two images to be fused;
the preprocessing module is used for preprocessing at least two images to be fused by adopting a preset FPGA control unit to generate a target split image;
The generating module is used for processing at least two images to be fused in parallel based on a preset Graphic Processor (GPU), and generating a target fusion image by combining the target split image.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image generation method of any one of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores computer instructions for causing a processor to implement the image generation method of any one of claims 1-7 when executed.
CN202310417262.7A 2023-04-19 2023-04-19 Image generation method, device, equipment and storage medium Pending CN116128782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310417262.7A CN116128782A (en) 2023-04-19 2023-04-19 Image generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310417262.7A CN116128782A (en) 2023-04-19 2023-04-19 Image generation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116128782A 2023-05-16

Family

ID=86306663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310417262.7A Pending CN116128782A (en) 2023-04-19 2023-04-19 Image generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116128782A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108200324A (en) * 2018-02-05 2018-06-22 湖南师范大学 A kind of imaging system and imaging method based on zoom lens
CN109547708A (en) * 2018-12-04 2019-03-29 中国航空工业集团公司西安航空计算技术研究所 A kind of synthetic vision image processing system
CN111988494A (en) * 2019-05-22 2020-11-24 电子科技大学 Image acquisition method and device and extended depth of field image imaging method and device
CN112529951A (en) * 2019-09-18 2021-03-19 华为技术有限公司 Method and device for acquiring extended depth of field image and electronic equipment
CN113099092A (en) * 2021-04-09 2021-07-09 凌云光技术股份有限公司 Design method of high-performance imaging system
CN114445315A (en) * 2022-01-29 2022-05-06 维沃移动通信有限公司 Image quality enhancement method and electronic device
CN114782435A (en) * 2022-06-20 2022-07-22 武汉精立电子技术有限公司 Image splicing method for random texture scene and application thereof
CN114820488A (en) * 2022-04-11 2022-07-29 科宝智慧医疗科技(上海)有限公司 Sample component analysis method, device, equipment and storage medium
CN115035481A (en) * 2022-06-30 2022-09-09 广东电网有限责任公司 Image object distance fusion method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
JP7003238B2 (en) Image processing methods, devices, and devices
EP3757890A1 (en) Method and device for image processing, method and device for training object detection model
CN110248096B (en) Focusing method and device, electronic equipment and computer readable storage medium
US10580140B2 (en) Method and system of real-time image segmentation for image processing
WO2020259179A1 (en) Focusing method, electronic device, and computer readable storage medium
WO2019105214A1 (en) Image blurring method and apparatus, mobile terminal and storage medium
EP3480784B1 (en) Image processing method, and device
WO2021022983A1 (en) Image processing method and apparatus, electronic device and computer-readable storage medium
US8224069B2 (en) Image processing apparatus, image matching method, and computer-readable recording medium
JP2017520050A (en) Local adaptive histogram flattening
JP2015231220A (en) Image processing apparatus, imaging device, image processing method, imaging method and program
CN113129241B (en) Image processing method and device, computer readable medium and electronic equipment
CN107622497B (en) Image cropping method and device, computer readable storage medium and computer equipment
WO2020119467A1 (en) High-precision dense depth image generation method and device
CN110766706A (en) Image fusion method and device, terminal equipment and storage medium
CN110276831B (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN114255177B (en) Exposure control method, device, equipment and storage medium in imaging
CN116128782A (en) Image generation method, device, equipment and storage medium
CN111866493B (en) Image correction method, device and equipment based on head-mounted display equipment
CN109328459B (en) Intelligent terminal, 3D imaging method thereof and 3D imaging system
CN113011328A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113450391A (en) Method and equipment for generating depth map
CN112804451B (en) Method and system for photographing by utilizing multiple cameras and mobile device
CN115037867B (en) Shooting method, shooting device, computer readable storage medium and electronic equipment
CN115348390A (en) Shooting method and shooting device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2023-05-16