CN113160386A - Image obtaining method, device, equipment and computer readable storage medium - Google Patents


Publication number
CN113160386A
Authority
CN
China
Prior art keywords
image
obtaining
preset
target
underwater
Prior art date
Legal status
Pending
Application number
CN202110376075.XA
Other languages
Chinese (zh)
Inventor
毕胜
丁亚慧
李胜全
付先平
Current Assignee
Dalian Maritime University
Peng Cheng Laboratory
Original Assignee
Dalian Maritime University
Peng Cheng Laboratory
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University, Peng Cheng Laboratory filed Critical Dalian Maritime University

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods


Abstract

The invention discloses an image obtaining method, which comprises the following steps: acquiring an original atmospheric image of a target area; processing the original atmospheric image by using a three-dimensional modeling technology to obtain a three-dimensional model of the target area; obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model; obtaining a transmittance based on the preset distance information and a preset attenuation coefficient; obtaining a target underwater image of the target area based on the transmittance and the initial image. The invention also discloses an image acquisition device, terminal equipment and a computer readable storage medium. By using the image obtaining method, the technical effect of improving the effectiveness of the target underwater image is achieved.

Description

Image obtaining method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image obtaining method, an image obtaining apparatus, an image obtaining device, and a computer-readable storage medium.
Background
An underwater image is an image of an underwater environment captured by an imaging device. It is mainly composed of light reflected to the imaging device by objects in the water, light entering the imaging device after small-angle scattering in the underwater environment, and background light caused by suspended matter and other impurities in the water.
At present, deep learning is increasingly applied in the field of underwater image analysis. An underwater image analysis method based on deep learning requires a large number of underwater images as training data; at the same time, compared with acquiring images on land, it is more difficult for an imaging device to acquire underwater images, so obtaining a large number of underwater images is difficult.
In the related art, an image obtaining method is provided which collects actual underwater images with an imaging device and performs sample expansion on them to obtain a large number of underwater images usable as training samples. Typically, an actually captured underwater image is rotated, geometrically transformed, scaled, blurred, noised or distorted to obtain new underwater images as training images.
However, with the existing image obtaining method, the effectiveness of the resulting extended training images usable as training samples is poor.
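The sample-expansion operations described in the related art can be sketched as follows (a minimal NumPy illustration; the particular transforms, noise level and function name are assumptions, not taken from any specific prior-art method):

```python
import numpy as np

def augment(img, rng):
    """Classical sample expansion: rotate, flip and add noise.

    These transforms only re-arrange or perturb existing pixels; they do
    not model the light attenuation of an underwater environment.
    """
    out = np.rot90(img, k=rng.integers(0, 4))   # random 90-degree rotation
    if rng.random() < 0.5:
        out = out[:, ::-1]                      # random horizontal flip
    noise = rng.normal(0.0, 5.0, out.shape)     # additive Gaussian noise
    return np.clip(out.astype(np.float64) + noise, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in underwater image
aug = augment(img, rng)                                   # one new training sample
```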
Disclosure of Invention
The main purpose of the present invention is to provide an image obtaining method, an image obtaining apparatus, an image obtaining device and a computer-readable storage medium, aiming to solve the technical problem in the prior art that an extended training image obtained with the existing image obtaining method has poor effectiveness as a training sample.
In order to achieve the above object, the present invention provides an image obtaining method, including the steps of:
acquiring an original atmospheric image of a target area;
processing the original atmospheric image by using a three-dimensional modeling technology to obtain a three-dimensional model of the target area;
obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model;
obtaining a transmittance based on the preset distance information and a preset attenuation coefficient;
obtaining a target underwater image of the target area based on the transmittance and the initial image.
Optionally, before the step of obtaining the target underwater image of the target area based on the transmittance and the initial image, the method further includes:
acquiring depth information of pixel points in the initial image;
the step of obtaining the target underwater image based on the transmittance and the initial image includes:
obtaining the target underwater image based on the transmittance, the initial image and the depth information.
Optionally, before the step of obtaining the target underwater image based on the transmittance, the initial image and the depth information, the method further includes:
acquiring a preset motion blur operator;
the step of obtaining the target underwater image based on the transmittance, the initial image and the depth information comprises:
and obtaining the target underwater image based on the transmissivity, the initial image, the depth information and the preset motion blur operator.
Optionally, before the step of obtaining the target underwater image based on the transmittance, the initial image, the depth information, and the preset motion blur operator, the method further includes:
acquiring a preset real underwater image set;
acquiring an average pixel value of a selected area in each real underwater image in the preset real underwater image set;
obtaining a result pixel value corresponding to the preset real underwater image set based on the average pixel value of the selected area in each real underwater image;
normalizing the result pixel value to obtain a background light coefficient;
the step of obtaining the target underwater image based on the transmittance, the initial image, the depth information and the preset motion blur operator comprises:
and obtaining the target underwater image based on the transmissivity, the initial image, the depth information, the preset motion blurring operator and the background light coefficient.
Optionally, the step of obtaining an average pixel value of a selected region in each real underwater image in the preset real underwater image set includes:
dividing each real underwater image by utilizing a hierarchical search technology of quadtree subdivision to obtain four rectangular areas corresponding to each real underwater image;
acquiring the standard deviation and the average pixel value of the pixel values of the four rectangular areas;
determining a selected rectangular area with the largest difference between the standard deviation of the pixel values and the average pixel value in the four rectangular areas;
dividing the selected rectangular area by utilizing a hierarchical search technology of quadtree subdivision to update the four rectangular areas, and returning to the step of obtaining the standard deviation and the average pixel value of the pixel values of the four rectangular areas until the size of the selected rectangular area meets a preset condition, and determining the selected rectangular area meeting the preset condition as the selected area;
an average pixel value for the selected region is calculated.
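The quadtree search in the steps above can be sketched as follows (a minimal NumPy illustration; the selection score "mean minus standard deviation", which favours bright and uniform regions, is one reading of the criterion, and the stopping size and function name are assumptions):

```python
import numpy as np

def estimate_background_light(img, min_size=8):
    """Quadtree search for the background-light region (a sketch).

    Repeatedly split the current rectangle into four quadrants and keep
    the one maximising mean - std, until the rectangle is small enough;
    then return the average pixel value of the selected region.
    """
    gray = img.mean(axis=2)
    y0, x0 = 0, 0
    h, w = gray.shape
    while min(h, w) > min_size:
        hh, hw = h // 2, w // 2
        quads = [(y0,      x0,      hh,     hw),
                 (y0,      x0 + hw, hh,     w - hw),
                 (y0 + hh, x0,      h - hh, hw),
                 (y0 + hh, x0 + hw, h - hh, w - hw)]
        best = max(quads,
                   key=lambda q: gray[q[0]:q[0] + q[2], q[1]:q[1] + q[3]].mean()
                                 - gray[q[0]:q[0] + q[2], q[1]:q[1] + q[3]].std())
        y0, x0, h, w = best
    region = img[y0:y0 + h, x0:x0 + w]
    return region.reshape(-1, img.shape[2]).mean(axis=0)  # per-channel average

img = np.zeros((64, 64, 3), dtype=np.float64)
img[:32, :32] = 0.9             # bright, uniform corner: background-light candidate
A = estimate_background_light(img)
```

In the patent, the result pixel value is then normalized to obtain the background light coefficient; with float images already in [0, 1] as here, the returned value can be used directly.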
Optionally, the step of obtaining the transmittance based on the preset distance information and a preset attenuation coefficient includes:
obtaining the transmittance by using formula I based on the preset distance information and the preset attenuation coefficient;
the first formula is as follows:
tc(x)=exp(-βc·dph(x))
wherein tc(x) is the transmittance corresponding to the x-th pixel of the initial image, βc is the preset attenuation coefficient corresponding to color channel c, dph(x) is the preset distance information corresponding to the x-th pixel, c is a color channel, and c ∈ {r, g, b}.
Optionally, the step of obtaining the target underwater image based on the transmittance, the initial image, the depth information, the preset motion blur operator, and the background light coefficient includes:
obtaining the target underwater image by using formula II based on the transmittance, the initial image, the depth information, the preset motion blur operator and the background light coefficient;
the second formula is:
Ic(x)=(Jc(x)·tc(x)+Ac·exp(-βc·D(x))·(1-tc(x)))⊗F
wherein Ic(x) is the pixel value corresponding to the x-th pixel in the target underwater image, Jc(x) is the pixel value corresponding to the x-th pixel in the initial image, D(x) is the depth information corresponding to the x-th pixel, F is the motion blur operator, ⊗ denotes a convolution operation, and Ac is the background light coefficient.
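Formula II appears in this text only as an image reference, so the sketch below illustrates one plausible composition of the named quantities: the initial image attenuated by the transmittance of formula I, plus depth-attenuated background light, then convolution with the motion blur operator F. The exact composition, the test values and the function name are assumptions:

```python
import numpy as np

def synthesize_underwater(J, dph, D, beta, A, F):
    """One reading of formula II: attenuate the initial image J along the
    line of sight, add depth-attenuated background light, then blur."""
    t = np.exp(-beta[None, None, :] * dph[:, :, None])            # formula I
    ambient = A[None, None, :] * np.exp(-beta[None, None, :] * D[:, :, None])
    I = J * t + ambient * (1.0 - t)
    kh, kw = F.shape                                              # odd-sized kernel assumed
    pad = np.pad(I, ((kh // 2, kh // 2), (kw // 2, kw // 2), (0, 0)), mode="edge")
    out = np.zeros_like(I)
    for c in range(I.shape[2]):                                   # naive 2-D convolution
        for i in range(kh):
            for j in range(kw):
                out[:, :, c] += F[i, j] * pad[i:i + I.shape[0], j:j + I.shape[1], c]
    return out

J = np.full((8, 8, 3), 0.5)                # initial image rendered from the 3-D model
dph = np.full((8, 8), 2.0)                 # preset distance info: 2 m line of sight
D = np.full((8, 8), 1.0)                   # depth info: 1 m water column
beta = np.array([0.341, 0.049, 0.021])     # type I coefficients from Table 1 (R, G, B)
A = np.array([0.2, 0.6, 0.8])              # background light coefficient per channel
F = np.ones((3, 3)) / 9.0                  # simple averaging blur kernel
I_out = synthesize_underwater(J, dph, D, beta, A, F)
```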
Further, to achieve the above object, the present invention also proposes an image obtaining apparatus comprising:
the acquisition module is used for acquiring an original atmospheric image of a target area;
the modeling module is used for processing the original atmospheric image by utilizing a three-dimensional modeling technology to obtain a three-dimensional model of the target area;
the first obtaining module is used for obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model;
a second obtaining module, configured to obtain a transmittance based on the preset distance information and a preset attenuation coefficient;
and the third obtaining module is used for obtaining a target underwater image of the target area based on the transmittance and the initial image.
In addition, to achieve the above object, the present invention further provides a terminal device, including: a memory, a processor and an image acquisition program stored on the memory and executable on the processor, the image acquisition program, when executed by the processor, implementing the steps of the image obtaining method as described in any one of the above.
Furthermore, to achieve the above object, the present invention also proposes a computer-readable storage medium having stored thereon an image acquisition program which, when executed by a processor, implements the steps of the image acquisition method as described in any one of the above.
The technical scheme of the invention provides an image obtaining method, which comprises the steps of obtaining an original atmospheric image of a target area; processing the original atmospheric image by using a three-dimensional modeling technology to obtain a three-dimensional model of the target area; obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model; obtaining a transmittance based on the preset distance information and a preset attenuation coefficient; obtaining a target underwater image of the target area based on the transmittance and the initial image.
The existing image obtaining method performs operations such as rotation, geometric transformation, scaling, blurring, noise addition or distortion on actually collected underwater images to obtain training images usable as training samples. The training images obtained in this way do not take into account the influence of the light transmittance of the underwater environment, so they cannot accurately reflect the real information of the corresponding underwater area, and their effectiveness is poor. In the technical scheme of the invention, the transmittance is obtained based on the preset distance information and the preset attenuation coefficient, and the target underwater image of the target area is obtained based on the transmittance and the initial image, so the target underwater image does take the light transmittance of the underwater environment into account. Therefore, the image obtaining method achieves the technical effect of improving the effectiveness of the target underwater image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a terminal device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of an image obtaining method according to the present invention;
FIG. 3 is a schematic diagram of an original underwater image of a target and an underwater image of the target with depth information added;
FIG. 4 is a schematic diagram of an original underwater image of a target and an underwater image of the target subjected to motion blur processing;
FIG. 5 is a schematic diagram of an original underwater image of a target and an underwater image of the target with a changed background light coefficient;
FIG. 6 is a schematic view of an actual underwater image and an underwater image of a target of the present invention;
fig. 7 is a block diagram showing the configuration of the first embodiment of the image pickup apparatus according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal device in a hardware operating environment according to an embodiment of the present invention.
The terminal device may be user equipment (UE) such as a mobile phone, smart phone, laptop, digital broadcast receiver, personal digital assistant (PDA), tablet computer (PAD), handheld device, vehicle-mounted device, wearable device, computing device or other processing device connected to a wireless modem, mobile station (MS), etc. The terminal device may be referred to as a user terminal, a portable terminal, a desktop terminal, etc.
In general, a terminal device includes: at least one processor 301, a memory 302 and an image acquisition program stored on said memory and executable on said processor, said image acquisition program being configured to implement the steps of the image acquisition method as described previously.
The processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. The processor 301 may further include an AI (Artificial Intelligence) processor for processing operations related to the image acquisition method so that the image acquisition method model can be trained and learned autonomously, improving efficiency and accuracy.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement the image acquisition methods provided by method embodiments herein.
In some embodiments, the terminal may further include: a communication interface 303 and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. Various peripheral devices may be connected to communication interface 303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, a display screen 305, and a power source 306.
The communication interface 303 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, processor 301, memory 302, and communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 may be implemented on a single chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 304 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, the display screen 305 also has the ability to capture touch signals on or over the surface of the display screen 305. The touch signal may be input to the processor 301 as a control signal for processing. At this point, the display screen 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 305 may be one, the front panel of the electronic device; in other embodiments, the display screens 305 may be at least two, respectively disposed on different surfaces of the electronic device or in a folded design; in still other embodiments, the display screen 305 may be a flexible display screen disposed on a curved surface or a folded surface of the electronic device. Even further, the display screen 305 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 305 may be made of LCD (liquid crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The power supply 306 is used to power various components in the electronic device. The power source 306 may be alternating current, direct current, disposable or rechargeable. When the power source 306 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the terminal device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium on which an image obtaining program is stored; when executed by a processor, the image obtaining program implements the steps of the image obtaining method as described above, so a detailed description thereof is omitted here, and the beneficial effects shared with the method are likewise not repeated. For technical details not disclosed in the embodiments of the computer-readable storage medium of the present application, reference is made to the description of the method embodiments of the present application. As an example, the program instructions may be deployed to be executed on one terminal device, on multiple terminal devices located at one site, or distributed across multiple sites interconnected by a communication network.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The computer-readable storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Based on the above hardware structure, an embodiment of the image obtaining method of the present invention is proposed.
Referring to fig. 2, fig. 2 is a schematic flowchart of a first embodiment of an image obtaining method according to the present invention, where the method is used for a terminal device, and the method includes the following steps:
step S11: an original atmospheric image of a target area is acquired.
It should be noted that the execution subject of the present invention is a terminal device on which an image acquisition program is installed; the image obtaining method of the present invention is implemented when the terminal device executes that program. The original atmospheric image may be obtained by a shooting module of the terminal device photographing the target area, or a separate shooting device may photograph the target area and the terminal device may then obtain the original atmospheric image from that shooting device. The target area may be any type of land area; the present invention is not limited thereto, although preferably the target area includes a target object (for use as a reference object). The original atmospheric image refers to an image obtained by directly photographing the target area on land, without the influence of any underwater environmental factors.
In addition, the purpose of the invention is to process the original atmospheric image to obtain a simulated target underwater image: the target underwater image simulates the target area as if it were in an underwater environment, and is not a real underwater image.
The existing image obtaining method performs operations such as rotation, geometric transformation, scaling, blurring, noise adding or distortion on the acquired real underwater image to form a new sample image (which can be used as an underwater image of a training sample). However, the sample image is a two-dimensional sample image obtained by converting a real underwater two-dimensional image through the above-mentioned various operations, and the change of the underwater environment, light and object itself cannot be simulated in the conversion process, so that the effectiveness of the obtained two-dimensional sample image is poor.
With the image obtaining method of the invention, a corresponding simulated target underwater image can be obtained from the original atmospheric image, without photographing an underwater area to obtain a real underwater image, which reduces the difficulty of obtaining underwater images; at the same time, compared with a sample image obtained with the existing image obtaining method, the target underwater image obtained in this way has better effectiveness.
Step S12: and processing the original atmospheric image by utilizing a three-dimensional modeling technology to obtain a three-dimensional model of the target area.
Step S13: and obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model.
It should be noted that the original atmospheric image is processed with a three-dimensional modeling technique to obtain a three-dimensional model of the target area, and an initial image of the three-dimensional model is obtained based on the preset distance information (the distance between the camera and the target area, usually the distance to a reference object in the target area) and the preset angle information (the angle between the camera and the target area, usually the angle to a reference object in the target area) set by the user; that is, the initial image is the image a camera would capture of the three-dimensional model at the preset angle information and preset distance information.
It can be understood that, for a three-dimensional model corresponding to the same target region, different initial images corresponding to the three-dimensional model can be obtained according to different preset distance information and preset angle information; the user can set different preset angle information and preset distance information for the same three-dimensional model respectively to obtain a plurality of initial images, and a plurality of final target underwater images are obtained based on the plurality of initial images.
In addition, based on preset distance information and preset angle information set by a user, the preset distance information and the preset angle information corresponding to each pixel point in the initial image can be obtained, and the preset distance information and the preset angle information corresponding to different pixel points in the initial image may be different.
The three-dimensional modeling technique may be any type of three-dimensional modeling technique that is currently available, and the present invention is not limited thereto.
It can be understood that all objects in the initial image are at a distance and an angle from a camera (a camera of the shooting device or a camera of the terminal device), and the initial image needs to be obtained based on the three-dimensional model, external parameters of the camera, preset distance information and preset angle information; in the initial image, the preset distance information and the preset angle information both include preset distance information and preset angle information corresponding to each pixel point in the initial image, the preset distance information of one pixel point is the distance between an object point and a camera in a target area corresponding to the pixel point, and the preset angle information of one pixel point is the angle between the object point and the camera in the target area corresponding to the pixel point.
The obtained initial image is a 2.5-dimensional image with preset distance information, namely the initial image comprises the preset distance information, and the preset distance information corresponding to each pixel point of the initial image is stored in the initial image in an additional information mode.
For example, suppose the initial image of the target region includes 512 × 512 pixels, and the pixel in row 124, column 35 is A124,35. If the object point corresponding to A124,35 is point a1 of object A in the target area, then the preset distance information of A124,35 is the distance between point a1 and the camera, and the preset angle information of A124,35 is the angle between point a1 and the camera.
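The 2.5-dimensional structure described above can be illustrated as an image array with a parallel per-pixel distance map (the pixel coordinates follow the example in the text; the 3.2 m distance and array names are hypothetical):

```python
import numpy as np

# A 2.5-dimensional "initial image": RGB pixels plus a parallel per-pixel
# map holding the preset distance information as additional information.
H = W = 512
initial_image = np.zeros((H, W, 3), dtype=np.uint8)   # rendered view of the 3-D model
dph = np.zeros((H, W), dtype=np.float64)              # preset distance per pixel (m)

# Pixel A_{124,35} images object point a1; assume a1 is 3.2 m from the camera.
dph[124, 35] = 3.2
pixel_value = initial_image[124, 35]      # RGB value of A_{124,35}
pixel_distance = dph[124, 35]             # preset distance information of A_{124,35}
```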
Step S14: and obtaining the transmissivity based on the preset distance information and the preset attenuation coefficient.
Specifically, step S14 includes: obtaining the transmissivity by using a formula I based on the preset distance information and a preset attenuation coefficient;
the first formula is as follows:
tc(x)=exp(-βc·dph(x))
wherein tc(x) is the transmittance corresponding to the x-th pixel of the initial image, βc is the preset attenuation coefficient corresponding to color channel c, dph(x) is the preset distance information corresponding to the x-th pixel, c is a color channel, and c ∈ {r, g, b}.
The transmittance includes the transmittance of every pixel in the initial image; meanwhile, each pixel in the initial image is a three-channel pixel, that is, it has red, green and blue channel values. In the present invention, the preset attenuation coefficients cover ten water types, named I, IA, IB, II, III, 1C, 3C, 5C, 7C and 9C, where I, IA, IB, II and III represent deep-sea waters from clear to slightly turbid, and 1C to 9C represent coastal waters of gradually increasing turbidity. Referring to Table 1, Table 1 lists the preset attenuation coefficients for the ten water types:
TABLE 1
Type   I      IA      IB      II     III    1C     3C     5C     7C     9C
R      0.341  0.342   0.349   0.375  0.426  0.439  0.498  0.564  0.635  0.755
G      0.049  0.0503  0.0572  0.078  0.121  0.121  0.198  0.314  0.494  0.777
B      0.021  0.0253  0.0325  0.110  0.139  0.240  0.400  0.650  0.693  1.24
As shown in table 1, each water type has three preset attenuation coefficients, R, G and B, i.e. one coefficient each for red, green and blue light. It can be understood that each of the three color channels of a pixel point must be processed with the preset attenuation coefficient of the corresponding color to obtain the transmittance of that pixel point. In table 1, the red, green and blue preset attenuation coefficients correspond to wavelengths of 650 nm, 525 nm and 475 nm, respectively.
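Formula I combined with Table 1 can be sketched in Python as follows. Only three of the ten water types are transcribed, and the helper name `transmittance` is an invented convenience, not from the patent:

```python
import numpy as np

# Per-channel preset attenuation coefficients from Table 1 (subset).
ATTENUATION = {
    "I":  {"r": 0.341, "g": 0.049, "b": 0.021},
    "II": {"r": 0.375, "g": 0.078, "b": 0.110},
    "9C": {"r": 0.755, "g": 0.777, "b": 1.24},
}

def transmittance(dph, water_type="I"):
    """Formula I: t_c(x) = exp(-beta_c * dph(x)) for c in {r, g, b}.

    dph        : (H, W) array of camera-to-scene distances (metres).
    water_type : key into the Table 1 coefficients.
    Returns an (H, W, 3) transmittance map in (r, g, b) order."""
    beta = ATTENUATION[water_type]
    d = np.asarray(dph, dtype=np.float64)
    return np.stack([np.exp(-beta[c] * d) for c in ("r", "g", "b")], axis=-1)

# Every pixel 2 m from the camera, Type I water.
t = transmittance(np.full((4, 4), 2.0), "I")
```

Red light attenuates fastest here, which matches the table: for the same distance, the r-channel transmittance is the smallest of the three.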
Step S15: obtaining a target underwater image of the target area based on the transmittance and the initial image.
Further, before step S15, the method further includes: acquiring depth information of pixel points in the initial image; accordingly, step S15 includes: obtaining the target underwater image based on the transmittance, the initial image and the depth information.
It should be noted that each pixel point in the initial image has depth information, which the user may set as required. Generally, the user sets a preset depth and the terminal device derives the depth information of every pixel point in the initial image from it. The preset depth can be the depth information of a reference pixel point in the initial image, from which the depth information of the other pixel points is obtained; it can also be the depth of a reference object point in the target area, from which the depth information of the corresponding selected pixel point is obtained, and then the depth information of the other pixel points is obtained from that of the selected pixel point. The target underwater image obtained in this way accounts for the depth information, which improves its effectiveness.
Further, before step S15, the method further includes: acquiring a preset motion blur operator; accordingly, step S15 includes: and obtaining the target underwater image based on the transmissivity, the initial image, the depth information and the preset motion blur operator.
It should be noted that, in a real underwater environment, water flow may cause relative motion between the camera and the target area and hence motion blur. This influence must then be taken into account, i.e. the target underwater image must be obtained based on the transmittance, the initial image, the depth information and the preset motion blur operator. Typically, the motion blur is applied by convolving the directly attenuated scene radiance with the preset motion blur operator. The target underwater image obtained in this way accounts for motion blur, which further improves its effectiveness.
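The patent does not specify the form of the motion blur operator F. A straight-line point spread function is one common choice, sketched below as an assumption (the name `linear_motion_psf` and both parameters are invented):

```python
import numpy as np

def linear_motion_psf(length=9, angle_deg=0.0):
    """A hypothetical motion blur operator F: a normalised line kernel.

    Puts one tap per step along the blur direction, then normalises so
    convolving with the kernel preserves the mean image intensity."""
    psf = np.zeros((length, length), dtype=np.float64)
    centre = length // 2
    theta = np.deg2rad(angle_deg)
    for i in range(length):
        x = int(round(centre + (i - centre) * np.cos(theta)))
        y = int(round(centre + (i - centre) * np.sin(theta)))
        if 0 <= x < length and 0 <= y < length:
            psf[y, x] = 1.0
    return psf / psf.sum()
```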
Further, before step S15, the method further includes: acquiring a preset real underwater image set; acquiring an average pixel value of a selected area in each real underwater image in the preset real underwater image set; obtaining a result pixel value corresponding to the preset real underwater image set based on the average pixel value of the selected area in each real underwater image; normalizing the result pixel value to obtain a background light coefficient; accordingly, step S15 includes: and obtaining the target underwater image based on the transmissivity, the initial image, the depth information, the preset motion blurring operator and the background light coefficient.
Specifically, the step of obtaining an average pixel value of a selected region in each real underwater image in the preset real underwater image set includes: dividing each real underwater image by utilizing a hierarchical search technology of quadtree subdivision to obtain four rectangular areas corresponding to each real underwater image; acquiring the standard deviation and the average pixel value of the pixel values of the four rectangular areas; determining a selected rectangular area with the largest difference between the standard deviation of the pixel values and the average pixel value in the four rectangular areas; dividing the selected rectangular area by utilizing a hierarchical search technology of quadtree subdivision to update the four rectangular areas, and returning to the step of obtaining the standard deviation and the average pixel value of the pixel values of the four rectangular areas until the size of the selected rectangular area meets a preset condition, and determining the selected rectangular area meeting the preset condition as the selected area; an average pixel value for the selected region is calculated.
It should be noted that, when the average pixel value is calculated for each real underwater image, every division of the selected rectangular area by the hierarchical quadtree-subdivision search yields a new selected rectangular area; once the new area meets the preset condition, no further division is performed and that area is determined as the selected area. The preset condition may be that the size ratio of the new selected rectangular area to the original real underwater image is not greater than 1/8 or 1/16; at that size, further division changes the obtained background light coefficient only slightly, so it is unnecessary.
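The quadtree search described above can be sketched as follows. The score `mean - std` is one reading of "largest difference between the standard deviation of the pixel values and the average pixel value" (it favours bright, flat regions), so treat the scoring rule and the function name as assumptions:

```python
import numpy as np

def background_light_region(img, min_frac=1/8):
    """Hierarchical quadtree-subdivision search for the background light region.

    Repeatedly splits the current region into four rectangles, keeps the one
    with the largest (mean - standard deviation) score, and stops once the
    region is no larger than `min_frac` of the original image. Returns the
    average pixel value of the selected area."""
    h0, w0 = img.shape[:2]
    region = img
    while region.shape[0] * region.shape[1] > min_frac * h0 * w0:
        h, w = region.shape[:2]
        quads = [region[:h // 2, :w // 2], region[:h // 2, w // 2:],
                 region[h // 2:, :w // 2], region[h // 2:, w // 2:]]
        region = max(quads, key=lambda q: q.mean() - q.std())
    return region.mean(axis=(0, 1))
```

On a grayscale image this returns a scalar; on an (H, W, 3) image it returns one average per color channel, matching the text's note that the result pixel value has red, green and blue components.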
In addition, normalizing the result pixel value may mean scaling it to the range 0-1 to obtain the background light coefficient. The background light coefficient includes red, green and blue components, i.e. the result pixel value includes red, green and blue result pixel values, and the other pixel values involved in computing it (average pixel values, standard deviations of pixel values, and so on) likewise have red, green and blue components.
In a specific application, a plurality of preset real underwater image sets corresponding to a plurality of underwater environments are selected; each set is used to obtain the background light coefficient of one underwater environment, so the plurality of environments correspond to a plurality of background light coefficients. Generally, the selected underwater environments are representative ones, and there are no fewer than four of them. In another embodiment, the user can set the background light coefficient directly as required; in that case the red component should be smaller than the green and blue components.
Specifically, step S15 includes: obtaining the target underwater image by using a formula II based on the transmissivity, the initial image, the depth information, the preset motion blurring operator and the background light coefficient;
the second formula is:
Figure BDA0003010154370000121
wherein, Ic(x) The pixel value, J, corresponding to the x-th pixel in the underwater image of the targetcThe pixel value corresponding to the x-th pixel in the initial image, D (x) the depth information corresponding to the x-th pixel in the depth information, F the motion blur operator, a convolution operation, AcIs the background light coefficient.
The direct-attenuation term
(J_c(x) · exp(−β_c · D(x)) · t_c(x)) ⊗ F
represents natural light that is attenuated by the absorption and scattering of the water from the surface down to the scene depth, with the remaining light further attenuated over the scene-to-camera distance before reaching the camera, i.e. the directly attenuated scene radiance; A_c · (1 − t_c(x)) represents the portion of the ambient light that enters the camera due to scattering.
In a specific application, a new pixel value is computed for every pixel point of the initial image, and the image formed by the new pixel points corresponding to all pixel points of the initial image is the target underwater image. In addition, each pixel point of the initial image has red, green and blue channels, and each channel value must be processed with the background light coefficient and the transmittance of the corresponding color to obtain the new pixel value for that pixel point.
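The per-channel computation of formula II can be sketched as follows. The circular FFT-based convolution, the function name and the parameter layout are illustrative assumptions; the patent only specifies the formula itself:

```python
import numpy as np

def synthesize_underwater(J, t, D, beta, F, A):
    """Formula II sketch: I_c = (J_c * exp(-beta_c * D) * t_c) (x) F + A_c * (1 - t_c).

    J    : (H, W, 3) initial in-air image.
    t    : (H, W, 3) per-channel transmittance from formula I.
    D    : (H, W) depth information (surface-to-scene depth).
    beta : length-3 per-channel attenuation coefficients.
    F    : 2-D motion blur kernel.
    A    : length-3 background light coefficients."""
    H, W, _ = J.shape
    out = np.empty_like(J, dtype=np.float64)
    for c in range(3):
        # Directly attenuated scene radiance for this channel.
        direct = J[..., c] * np.exp(-beta[c] * D) * t[..., c]
        # Pad the kernel to image size and centre it for circular convolution.
        kern = np.zeros((H, W))
        kh, kw = F.shape
        kern[:kh, :kw] = F
        kern = np.roll(kern, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        blurred = np.real(np.fft.ifft2(np.fft.fft2(direct) * np.fft.fft2(kern)))
        # Add the scattered ambient-light component.
        out[..., c] = blurred + A[c] * (1.0 - t[..., c])
    return out
```

With a 1 × 1 identity kernel, zero depth and unit transmittance, the model degenerates to the initial image, which is a quick sanity check on the implementation.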
Referring to fig. 3-5, fig. 3 is a schematic diagram of an original target underwater image and a target underwater image added with depth information; FIG. 4 is a schematic diagram of an original underwater image of a target and an underwater image of the target subjected to motion blur processing; FIG. 5 is a schematic diagram of an original underwater image of a target and an underwater image of the target with a changed background light coefficient.
In fig. 3, the left side is the original target underwater image corresponding to the initial image, obtained from the transmittance and the initial image alone, without considering depth information, motion blur or the background light coefficient; the right side is the target underwater image with depth information added, where the depth information is 1 m (the depth of the reference pixel). The right image accounts for depth information but not motion blur or the background light coefficient, and its effectiveness is higher.
In fig. 4, the left side is again the original target underwater image obtained without considering depth information, motion blur or the background light coefficient; the right side is the target underwater image after motion blur processing, which accounts for motion blur but not depth information or the background light coefficient. Its effectiveness is higher.
In fig. 5, the left side is the same original target underwater image; the right side is the target underwater image with a changed background light coefficient, which accounts for the background light coefficient but not motion blur or depth information. Its effectiveness is higher.
Referring to fig. 6, fig. 6 is a schematic view of real underwater images and a target underwater image of the present invention. The three images on the left are real underwater images from different underwater environments; the image on the right is the target underwater image obtained by simulation from the initial image (processed by the image obtaining method of the invention), taking depth information, the background light coefficient and motion blur into account.
The technical scheme of the invention provides an image obtaining method, which comprises the steps of obtaining an original atmospheric image of a target area; processing the original atmospheric image by using a three-dimensional modeling technology to obtain a three-dimensional model of the target area; obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model; obtaining a transmittance based on the preset distance information and a preset attenuation coefficient; obtaining a target underwater image of the target area based on the transmittance and the initial image.
Existing image obtaining methods rotate, geometrically transform, scale, blur, add noise to, or distort actually collected underwater images to obtain training images usable as training samples. Such training images ignore the light transmittance of the underwater environment, so they cannot accurately reflect the real information of the corresponding underwater area, and their effectiveness is poor. The image obtaining method of the invention therefore achieves the technical effect of improving the effectiveness of the target underwater image.
Referring to fig. 7, fig. 7 is a block diagram showing a configuration of a first embodiment of an image obtaining apparatus according to the present invention, the apparatus being used for a terminal device, the apparatus including:
the acquisition module 10 is used for acquiring an original atmospheric image of a target area;
a modeling module 20, configured to process the original atmospheric image by using a three-dimensional modeling technique to obtain a three-dimensional model of the target region;
a first obtaining module 30, configured to obtain an initial image based on preset distance information, preset angle information, and the three-dimensional model;
a second obtaining module 40, configured to obtain a transmittance based on the preset distance information and a preset attenuation coefficient;
a third obtaining module 50, configured to obtain a target underwater image of the target area based on the transmittance and the initial image.
The above description is only an alternative embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An image acquisition method, characterized in that it comprises the steps of:
acquiring an original atmospheric image of a target area;
processing the original atmospheric image by using a three-dimensional modeling technology to obtain a three-dimensional model of the target area;
obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model;
obtaining a transmittance based on the preset distance information and a preset attenuation coefficient;
obtaining a target underwater image of the target area based on the transmittance and the initial image.
2. The method of claim 1, wherein the step of obtaining an underwater image of the target area based on the transmittance and the initial image is preceded by the method further comprising:
acquiring depth information of pixel points in the initial image;
the step of obtaining an underwater image of the target based on the transmittance and the initial image comprises:
obtaining the target underwater image based on the transmittance, the initial image and the depth information.
3. The method of claim 2, wherein prior to the step of obtaining the underwater image of the target based on the transmissivity, the initial image, and the depth information, the method further comprises:
acquiring a preset motion blur operator;
the step of obtaining the target underwater image based on the transmittance, the initial image and the depth information comprises:
and obtaining the target underwater image based on the transmissivity, the initial image, the depth information and the preset motion blur operator.
4. The method of claim 3, wherein said step of obtaining said target underwater image based on said transmittance, said initial image, said depth information and said preset motion blur operator is preceded by the step of:
acquiring a preset real underwater image set;
acquiring an average pixel value of a selected area in each real underwater image in the preset real underwater image set;
obtaining a result pixel value corresponding to the preset real underwater image set based on the average pixel value of the selected area in each real underwater image;
normalizing the result pixel value to obtain a background light coefficient;
the step of obtaining the target underwater image based on the transmittance, the initial image, the depth information and the preset motion blur operator comprises:
and obtaining the target underwater image based on the transmissivity, the initial image, the depth information, the preset motion blurring operator and the background light coefficient.
5. The method of claim 4, wherein the step of obtaining an average pixel value for a selected region in each real underwater image in the set of preset real underwater images comprises:
dividing each real underwater image by utilizing a hierarchical search technology of quadtree subdivision to obtain four rectangular areas corresponding to each real underwater image;
acquiring the standard deviation and the average pixel value of the pixel values of the four rectangular areas;
determining a selected rectangular area with the largest difference between the standard deviation of the pixel values and the average pixel value in the four rectangular areas;
dividing the selected rectangular area by utilizing a hierarchical search technology of quadtree subdivision to update the four rectangular areas, and returning to the step of obtaining the standard deviation and the average pixel value of the pixel values of the four rectangular areas until the size of the selected rectangular area meets a preset condition, and determining the selected rectangular area meeting the preset condition as the selected area;
an average pixel value for the selected region is calculated.
6. The method of claim 5, wherein the obtaining of the transmittance based on the preset distance information and a preset attenuation coefficient comprises:
obtaining the transmissivity by using a formula I based on the preset distance information and a preset attenuation coefficient;
the first formula is as follows:
t_c(x) = exp(−β_c · dph(x))
where t_c(x) is the transmittance corresponding to the x-th pixel of the initial image, β_c is the preset attenuation coefficient for color channel c, dph(x) is the preset distance information corresponding to the x-th pixel, and c ∈ {r, g, b} is the color channel.
7. The method of claim 6, wherein the step of obtaining the target underwater image based on the transmittance, the initial image, the depth information, the preset motion blur operator, and the backlight coefficient comprises:
obtaining the target underwater image by using a formula II based on the transmissivity, the initial image, the depth information, the preset motion blurring operator and the background light coefficient;
the second formula is:
I_c(x) = (J_c(x) · exp(−β_c · D(x)) · t_c(x)) ⊗ F + A_c · (1 − t_c(x))
where I_c(x) is the pixel value corresponding to the x-th pixel in the target underwater image, J_c(x) is the pixel value corresponding to the x-th pixel in the initial image, D(x) is the depth information corresponding to the x-th pixel, F is the motion blur operator, ⊗ denotes the convolution operation, and A_c is the background light coefficient.
8. An image obtaining apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring an original atmospheric image of a target area;
the modeling module is used for processing the original atmospheric image by utilizing a three-dimensional modeling technology to obtain a three-dimensional model of the target area;
the first obtaining module is used for obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model;
a second obtaining module, configured to obtain a transmittance based on the preset distance information and a preset attenuation coefficient;
and the third obtaining module is used for obtaining a target underwater image of the target area based on the transmissivity and the initial image.
9. A terminal device, characterized in that the terminal device comprises: memory, a processor and an image acquisition program stored on the memory and running on the processor, the image acquisition program when executed by the processor implementing the steps of the image acquisition method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that an image acquisition program is stored thereon, which when executed by a processor implements the steps of the image acquisition method according to any one of claims 1 to 7.
CN202110376075.XA 2021-04-07 2021-04-07 Image obtaining method, device, equipment and computer readable storage medium Pending CN113160386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110376075.XA CN113160386A (en) 2021-04-07 2021-04-07 Image obtaining method, device, equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN113160386A (en) 2021-07-23

Family

ID=76889016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110376075.XA Pending CN113160386A (en) 2021-04-07 2021-04-07 Image obtaining method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113160386A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115456892A (en) * 2022-08-31 2022-12-09 北京四维远见信息技术有限公司 Method, apparatus, device and medium for automatic geometric correction of 2.5-dimensional visual image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6115511A (en) * 1996-10-28 2000-09-05 Director General Of The 1Th District Port Construction Bureau, Ministry Of Transport Underwater laser imaging apparatus
JP2010271845A (en) * 2009-05-20 2010-12-02 Mitsubishi Electric Corp Image reading support device
US20180286066A1 (en) * 2015-09-18 2018-10-04 The Regents Of The University Of California Cameras and depth estimation of images acquired in a distorting medium
CN110070480A (en) * 2019-02-26 2019-07-30 青岛大学 A kind of analogy method of underwater optics image
CN112488955A (en) * 2020-12-08 2021-03-12 大连海事大学 Underwater image restoration method based on wavelength compensation



Similar Documents

Publication Publication Date Title
CN107172364B (en) Image exposure compensation method and device and computer readable storage medium
CN107945163B (en) Image enhancement method and device
CN110706179B (en) Image processing method and electronic equipment
CN105744159A (en) Image synthesizing method and device
WO2021036715A1 (en) Image-text fusion method and apparatus, and electronic device
CN110930329B (en) Star image processing method and device
EP4249869A1 (en) Temperature measuring method and apparatus, device and system
CN113676716B (en) White balance control method, device, terminal equipment and storage medium
CN112767281B (en) Image ghost eliminating method and device, electronic equipment and storage medium
CN110991457B (en) Two-dimensional code processing method and device, electronic equipment and storage medium
CN112733688B (en) House attribute value prediction method and device, terminal device and computer readable storage medium
CN113160386A (en) Image obtaining method, device, equipment and computer readable storage medium
CN110766610A (en) Super-resolution image reconstruction method and electronic equipment
CN113014830A (en) Video blurring method, device, equipment and storage medium
CN116704200A (en) Image feature extraction and image noise reduction method and related device
CN113096022A (en) Image blurring processing method and device, storage medium and electronic equipment
CN114298895B (en) Image realism style migration method, device, equipment and storage medium
CN115330610A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN114332118A (en) Image processing method, device, equipment and storage medium
CN112446846A (en) Fusion frame obtaining method, device, SLAM system and storage medium
EP3686836A1 (en) Image processing apparatus, image processing method, and computer-readable recording medium
CN112183217A (en) Gesture recognition method, interaction method based on gesture recognition and mixed reality glasses
CN112489093A (en) Sonar image registration method, sonar image registration device, terminal equipment and storage medium
CN115358937B (en) Image anti-reflection method, medium and electronic equipment
CN115861042B (en) Image processing method, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination