CN116129245A - Image deconvolution method and device, equipment and medium - Google Patents

Image deconvolution method and device, equipment and medium

Info

Publication number
CN116129245A
CN116129245A
Authority
CN
China
Prior art keywords
product
result
target
deconvolution
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310143171.9A
Other languages
Chinese (zh)
Inventor
郑临风
施佳鑫
王京
李慧敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunlun Core Beijing Technology Co ltd
Original Assignee
Kunlun Core Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunlun Core Beijing Technology Co ltd filed Critical Kunlun Core Beijing Technology Co ltd
Priority to CN202310143171.9A priority Critical patent/CN116129245A/en
Publication of CN116129245A publication Critical patent/CN116129245A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an image deconvolution method, apparatus, device, and medium, relating to the field of chip technology, and in particular to the fields of artificial intelligence and image processing. The implementation scheme is as follows: acquiring a feature map containing a plurality of pixel elements, a deconvolution kernel matrix containing a plurality of deconvolution kernel elements, and deconvolution parameters; calculating, for each deconvolution kernel element of the plurality of deconvolution kernel elements, a product of the deconvolution kernel element and each pixel element of the plurality of pixel elements to obtain a plurality of product results; for each product result of the plurality of product results, determining a target position of the product result in a target result map based on the position of the pixel element corresponding to the product result in the feature map, the position of the deconvolution kernel element corresponding to the product result in the deconvolution kernel matrix, and the deconvolution parameters; and determining the target result map based on the respective target positions of the plurality of product results.

Description

Image deconvolution method and device, equipment and medium
Technical Field
The present disclosure relates to the field of chip technology, and in particular, to the field of artificial intelligence and image processing, and more particularly, to an image deconvolution method, apparatus, electronic device, computer readable storage medium, and computer program product.
Background
Artificial intelligence is the discipline of making a computer mimic certain human thought processes and intelligent behaviors (e.g., learning, reasoning, thinking, and planning); it encompasses both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
Convolutional neural networks are widely used in deep learning-based image processing. A convolutional neural network is a feedforward neural network that contains convolution calculations and has a deep structure. Building on this, optimizing image processing techniques requires deconvolution calculations or convolution gradient calculations.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides an image deconvolution method, apparatus, electronic device, computer readable storage medium, and computer program product.
According to an aspect of the present disclosure, there is provided an image deconvolution method including: acquiring a feature map containing a plurality of pixel elements, a deconvolution kernel matrix containing a plurality of deconvolution kernel elements, and deconvolution parameters; calculating, for each deconvolution kernel element of the plurality of deconvolution kernel elements, a product of the deconvolution kernel element and each pixel element of the plurality of pixel elements to obtain a plurality of product results; for each product result of the plurality of product results, determining a target position of the product result in a target result map based on the position of the pixel element corresponding to the product result in the feature map, the position of the deconvolution kernel element corresponding to the product result in the deconvolution kernel matrix, and the deconvolution parameters; and determining the target result map based on the respective target positions of the plurality of product results.
According to another aspect of the present disclosure, there is provided an image deconvolution apparatus including: an acquisition unit configured to acquire a feature map including a plurality of pixel elements, a deconvolution kernel matrix including a plurality of deconvolution kernel elements, and deconvolution parameters; a calculation unit configured to calculate, for each deconvolution kernel element of the plurality of deconvolution kernel elements, a product of the deconvolution kernel element and each pixel element of the plurality of pixel elements to obtain a plurality of product results; a first determining unit configured to determine, for each product result of the plurality of product results, a target position of the product result in a target result map based on the position of the pixel element corresponding to the product result in the feature map, the position of the deconvolution kernel element corresponding to the product result in the deconvolution kernel matrix, and the deconvolution parameters; and a second determining unit configured to determine the target result map based on the respective target positions of the plurality of product results.
According to another aspect of the present disclosure, there is provided a chip comprising the image deconvolution apparatus as described above.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image deconvolution method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described image deconvolution method.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program, when executed by a processor, is capable of implementing the above-described image deconvolution method.
According to one or more embodiments of the present disclosure, image deconvolution efficiency may be improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an exemplary embodiment of the present disclosure;
FIG. 2 illustrates a flowchart of an image deconvolution method, in accordance with an exemplary embodiment of the present disclosure;
FIGS. 3A-3B illustrate schematic diagrams of an image deconvolution process in accordance with an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a block diagram of an image deconvolution apparatus according to an exemplary embodiment of the present disclosure;
FIG. 5 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
In the related art, the feature map is multiplied by the transposed deconvolution kernel matrix to obtain an intermediate result matrix, and the intermediate result matrix is then rearranged using a col2im operation to obtain the target result map. This implementation is inefficient, and storing the intermediate result matrix requires additional hardware resources.
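For comparison, the related-art path can be sketched as follows. This is a minimal single-channel Python/NumPy sketch under assumed conventions (no padding, a square stride, an illustrative function name), not the actual hardware pipeline; the col2im-style rearrangement is simplified to an explicit scatter-add:

```python
import numpy as np

def deconv_via_intermediate(feat, kernel, stride):
    """Related-art sketch: materialize the full intermediate result matrix
    (one row per deconvolution kernel element, one column per pixel element),
    then rearrange it into the target result map with a col2im-style
    scatter-add. The intermediate matrix must be stored explicitly."""
    fh, fw = feat.shape
    kh, kw = kernel.shape
    # Intermediate result matrix: shape (kh*kw, fh*fw), extra storage cost.
    intermediate = np.outer(kernel.reshape(-1), feat.reshape(-1))
    out_h, out_w = (fh - 1) * stride + kh, (fw - 1) * stride + kw
    out = np.zeros((out_h, out_w))
    # col2im-style rearrangement: scatter-add every intermediate entry to
    # its position in the target result map.
    for k, (p, q) in enumerate(np.ndindex(kh, kw)):
        for n, (i, j) in enumerate(np.ndindex(fh, fw)):
            out[i * stride + p, j * stride + q] += intermediate[k, n]
    return out
```

The extra pass over the materialized intermediate matrix is exactly what the disclosed method avoids.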
In view of this, the present disclosure provides an image deconvolution method in which each element in the feature map is multiplied by each element in the deconvolution kernel matrix to obtain a plurality of product results, the target position of each product result in the target result map is determined, and the product results are scattered into the target result map at their corresponding positions. An image deconvolution result can thus be obtained more simply and rapidly, saving hardware resources while improving efficiency.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable execution of the image deconvolution method.
In some embodiments, server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may send at least one of the feature map, the deconvolution kernel matrix, and the deconvolution parameters using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any of a variety of networks known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-range servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The databases 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and communicate with the server 120 via a network-based or dedicated connection. The databases 130 may be of different types. In some embodiments, the database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Fig. 2 shows a flowchart of an image deconvolution method 200 according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the method 200 includes:
step S201, obtaining a feature map containing a plurality of pixel elements, a deconvolution kernel matrix containing a plurality of deconvolution kernel elements and deconvolution parameters;
step S202, for each deconvolution kernel element of the plurality of deconvolution kernel elements, calculating the product of the deconvolution kernel element and each pixel element of the plurality of pixel elements to obtain a plurality of product results;
step S203, for each product result of the plurality of product results, determining a target position of the product result in a target result map based on the position of the pixel element corresponding to the product result in the feature map, the position of the deconvolution kernel element corresponding to the product result in the deconvolution kernel matrix, and the deconvolution parameters; and
step S204, determining the target result graph based on the respective target positions of the product results.
Therefore, each pixel in the feature map and each element in the deconvolution kernel matrix are multiplied to obtain a plurality of product results, and the target position of each product result in the target result map is determined according to each product result, so that the target result map can be obtained based on the respective target positions of the product results, and the image deconvolution efficiency is improved.
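Steps S201-S204 can be sketched end to end as follows. This is a minimal single-channel Python/NumPy sketch under assumed conventions (no padding, and the common transposed-convolution placement rule target = pixel position × step size + kernel element position); the function name is illustrative, and the actual chip implementation described by the disclosure is not shown:

```python
import numpy as np

def deconv_scatter(feat, kernel, stride_h, stride_w):
    """Scatter-based deconvolution: every product of a pixel element and a
    deconvolution kernel element is added directly at its target position
    in the target result map, with no intermediate result matrix."""
    fh, fw = feat.shape
    kh, kw = kernel.shape
    # Assumed output size for an unpadded deconvolution.
    out_h = (fh - 1) * stride_h + kh
    out_w = (fw - 1) * stride_w + kw
    out = np.zeros((out_h, out_w), dtype=feat.dtype)
    for p in range(kh):              # position in the deconvolution kernel
        for q in range(kw):
            for i in range(fh):      # position in the feature map
                for j in range(fw):
                    # Target position from pixel position, kernel element
                    # position, and step sizes; += accumulates products
                    # that land on the same target position.
                    out[i * stride_h + p, j * stride_w + q] += feat[i, j] * kernel[p, q]
    return out
```

With a 3×3 feature map, a 2×2 deconvolution kernel matrix, and step sizes of 2, this yields a 6×6 target result map, matching the example of FIGS. 3A-3B.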
In some examples, the deconvolution parameters include a lateral deconvolution step size, a longitudinal deconvolution step size, a deconvolution kernel size, a feature map size, a target result map size, and directional fill information for the feature map, and the like.
In some examples, the feature map may also be a differential feature map obtained in the image processing process, so that feature map gradient information can be obtained through deconvolution calculation, thereby meeting the requirements of actual application scenes.
According to some embodiments, determining in step S203, for each product result of the plurality of product results, a target position of the product result in the target result map based on the position of the pixel element corresponding to the product result in the feature map, the position of the deconvolution kernel element corresponding to the product result in the deconvolution kernel matrix, and the deconvolution parameters includes: for a third product result of the plurality of product results, in response to determining that a first pixel element corresponding to the third product result is located at a first reference position in the feature map, and in response to determining that a first deconvolution kernel element corresponding to the third product result is located at a second reference position in the deconvolution kernel matrix, determining that the target position of the third product result in the target result map is a preset reference position; and for a fourth product result of the plurality of product results, in response to determining that a second pixel element corresponding to the fourth product result is adjacent to the first pixel element, and in response to determining that the fourth product result corresponds to the first deconvolution kernel element, determining the target position of the fourth product result in the target result map based on the preset reference position, the relative positional relationship of the first pixel element and the second pixel element, and the deconvolution parameters.
Therefore, the reference position of the specific product result can be determined first, and then the target position of the product result corresponding to the adjacent pixel element can be determined in sequence based on the deconvolution parameter and the relative position relation between the adjacent pixel elements, so that the efficiency is further improved.
In some examples, the relative positional relationship of the first pixel element and the second pixel element may include at least one of: the first pixel element is to the left of the second pixel element, the first pixel element is to the right of the second pixel element, the first pixel element is above the second pixel element, and the first pixel element is below the second pixel element.
In some examples, after determining the reference position of the third product result and then determining the target position of the fourth product result adjacent to the third product result based on the reference position, the target position of more product results adjacent to the fourth product result may be further determined based on the position of the fourth product result, and so on to obtain the overall result. Therefore, from the reference position, the target positions of the product results corresponding to the adjacent pixel elements can be sequentially determined based on the deconvolution parameters and the relative position relation between the adjacent pixel elements, so that the target positions of all the product results can be accurately and efficiently obtained.
According to some embodiments, determining in step S203, for each product result of the plurality of product results, a target position of the product result in the target result map based on the position of the pixel element corresponding to the product result in the feature map, the position of the deconvolution kernel element corresponding to the product result in the deconvolution kernel matrix, and the deconvolution parameters further includes: for a fifth product result of the plurality of product results, in response to determining that the fifth product result corresponds to the first pixel element, and in response to determining that a second deconvolution kernel element corresponding to the fifth product result is adjacent to the first deconvolution kernel element, determining the target position of the fifth product result in the target result map based on the preset reference position, the relative positional relationship of the first deconvolution kernel element and the second deconvolution kernel element, and the deconvolution parameters. In this way, the target positions of the product results corresponding to adjacent deconvolution kernel elements can be determined in sequence based on the deconvolution parameters and the relative positional relationship between adjacent deconvolution kernel elements, improving the efficiency of determining the target positions of the product results.
According to some embodiments, determining the target position of the fourth product result in the target result map based on the preset reference position, the relative positional relationship of the first pixel element and the second pixel element, and the deconvolution parameters includes: determining a relative offset between the target position of the fourth product result in the target result map and the preset reference position based on the relative positional relationship between the first pixel element and the second pixel element and the deconvolution parameters; and determining the target position of the fourth product result in the target result map based on the preset reference position and the relative offset. Thus, the offset in the target result map, in a continuous dimension (for example, the lateral or longitudinal direction), of the product results corresponding to adjacent pixel elements can be determined based on the deconvolution parameters, so that the target position of a product result can be determined simply and quickly by jumping according to the offset.
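A hypothetical helper makes the jump concrete. The function name and the (row, column) position convention are illustrative assumptions; the point it demonstrates is that the relative offset is simply the pixel displacement scaled by the deconvolution step sizes:

```python
def jump_target_position(ref_pos, pixel_offset, stride_h, stride_w):
    """Given the preset reference position of one product result, return the
    target position of the product for a pixel element displaced by
    pixel_offset = (rows, cols): the offset in the target result map is the
    pixel displacement scaled by the longitudinal/lateral step sizes."""
    dr, dc = pixel_offset
    return (ref_pos[0] + dr * stride_h, ref_pos[1] + dc * stride_w)
```

For example, with both step sizes equal to 2, the pixel one column to the right of the reference pixel produces a product two columns to the right of the reference position in the target result map.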
According to some embodiments, determining the target result map in step S204 based on the respective target positions of the plurality of product results includes: for each product result in the plurality of product results, determining a storage address of the product result in a target storage unit for storing the target result map based on a target position of the product result and a preset storage rule; and storing the plurality of product results to the target storage unit based on the respective corresponding storage addresses of the plurality of product results to obtain the target result graph. Therefore, the corresponding storage position can be determined based on the target position of each product result so as to store a plurality of product results into the target storage unit in a scattered manner, and therefore the filling of the target result graph can be achieved while the product results are written into the storage unit, and the efficiency is further improved.
In some examples, an initial storage address of the multiple product results in the target storage unit may be determined based on a preset storage rule, so as to ensure that the target result graph can be directly read from the target storage unit according to a reading rule corresponding to the preset storage rule.
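One possible preset storage rule is a plain row-major layout. The sketch below is only an illustrative assumption (the function name, element size, and row-major rule are not specified by the disclosure):

```python
def storage_address(base_addr, target_pos, out_width, elem_bytes=4):
    """Map a target position in the target result map to a storage address
    in the target storage unit under an assumed row-major storage rule."""
    row, col = target_pos
    return base_addr + (row * out_width + col) * elem_bytes
```

Reading back with the matching row-major rule then yields the target result map directly, with no separate rearrangement pass.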
According to some embodiments, determining the target result map in step S204 based on the respective target positions of the plurality of product results includes: in response to determining that the respective target positions of a first product result and a second product result of the plurality of product results are both a first position, calculating the sum of the first product result and the second product result; and determining that the first target result element located at the first position in the target result map is the sum of the first product result and the second product result. In this way, when multiple product results correspond to the same target position in the target result map, they are accumulated and summed to obtain the corresponding target result element, so that an accurate target result map is obtained.
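This accumulation rule can be sketched as a scatter-add. The helper below is hypothetical, and the dictionary is only a stand-in for the target storage unit:

```python
from collections import defaultdict

def scatter_accumulate(products):
    """products: iterable of (target_position, value) pairs. Product results
    that share a target position are summed into one target result element."""
    acc = defaultdict(float)
    for pos, value in products:
        acc[pos] += value
    return dict(acc)
```

For example, if two product results both map to position (1, 1), the target result element at (1, 1) is their sum.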
Figs. 3A-3B illustrate schematic diagrams of an image deconvolution process according to an exemplary embodiment of the present disclosure. Referring to fig. 3A, in this example, the size of the feature map is 3×3, the size of the deconvolution kernel matrix is 2×2, the lateral and longitudinal deconvolution step sizes are both 2, and the size of the target result map is 6×6, so that 36 product results of feature map pixel elements and deconvolution kernel elements can be obtained.
In this example, each element in the deconvolution kernel matrix may be multiplied by the feature map in turn, so that the target position of each product result in the target result map may be determined based on the position of the corresponding pixel element in the feature map and the position of the corresponding deconvolution kernel element in the deconvolution kernel matrix and the deconvolution parameters, and the multiple product results may be scattered and filled into the target result map based on the target positions, so that the image deconvolution result may be accurately and efficiently obtained.
Referring to fig. 3A, it may be determined that the lateral offset between the product results obtained by multiplying adjacent pixel elements in each row of the feature map by the same deconvolution kernel element is 2 (for example, the lateral offset between a11×b11 and a12×b11 is 2), and that the longitudinal offset between the product results obtained by multiplying adjacent pixel elements in each column by the same deconvolution kernel element is also 2 (for example, the longitudinal offset between a11×b11 and a21×b11 is 2). Therefore, the 9 product results corresponding to each of the 4 deconvolution kernel elements can be scattered and stored into the target result map according to these offset parameters, so as to obtain the first target result map shown in fig. 3B.
Therefore, the offset (including the lateral offset and the longitudinal offset) of the product result corresponding to the adjacent pixel elements in the continuous dimension (including the transverse direction or the longitudinal direction) in the target result graph can be determined based on the deconvolution parameter, so that the target position of the product result can be determined simply, conveniently and quickly by jumping according to the offset.
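The offset-jumping idea above can be sketched as follows: starting from the preset reference position for a given kernel element, all remaining target positions are reached by adding the stride-derived lateral and longitudinal offsets, with no per-position multiplication of indices. The function name and signature are illustrative assumptions.

```python
def target_positions(fh, fw, stride_h, stride_w, ki, kj):
    """Target positions for kernel element (ki, kj) in an fh x fw feature
    map: start at the preset reference position (ki, kj) and jump by the
    stride-derived offsets instead of recomputing each position."""
    positions = []
    row = ki                           # reference row for this kernel element
    for _ in range(fh):
        col = kj                       # reference column for this kernel element
        for _ in range(fw):
            positions.append((row, col))
            col += stride_w            # lateral offset between adjacent pixels
        row += stride_h                # longitudinal offset between adjacent rows
    return positions

# Positions for kernel element b11 in the 3x3 / stride-2 example:
print(target_positions(3, 3, 2, 2, 0, 0)[:3])  # [(0, 0), (0, 2), (0, 4)]
```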
In some examples, the deconvolution operation may be performed on multiple feature maps simultaneously to obtain a corresponding plurality of target result maps. In this case, the target result maps may be stored at different locations in the target storage unit according to a preset storage rule. For example, a batch offset between product results at the same position in adjacent target result maps may be determined based on the deconvolution parameters, and the storage locations of the product results in each target result map may be determined based on that batch offset. Referring to fig. 3B, when the target storage unit is a two-dimensional storage array, the batch offset between the first target result map and the second target result map may be calculated according to the size of the first target result map, so that both maps can be stored directly.
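As an illustrative sketch of the batch-offset scheme (not the patented storage hardware; names and the flat-buffer model are our assumptions), adjacent result maps are separated in the storage unit by a batch offset equal to one map's element count:

```python
import numpy as np

def store_batched(results):
    """Store several equal-size target result maps contiguously in one
    buffer; the batch offset between adjacent maps is one map's size."""
    h, w = results[0].shape
    batch_offset = h * w                    # offset between adjacent result maps
    buf = np.empty(batch_offset * len(results))
    for n, r in enumerate(results):
        for i in range(h):
            for j in range(w):
                # storage address = batch offset + row offset + column index
                buf[n * batch_offset + i * w + j] = r[i, j]
    return buf

# Two 6x6 target result maps, filled with 1.0 and 2.0 respectively:
maps = [np.full((6, 6), k) for k in (1.0, 2.0)]
buf = store_batched(maps)
print(buf[36])  # 2.0 -- first element of the second map, batch offset 36
```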
According to an aspect of the present disclosure, there is also provided an image deconvolution apparatus. Fig. 4 shows a block diagram of an image deconvolution apparatus 400 according to an exemplary embodiment of the present disclosure. As shown in fig. 4, the apparatus 400 includes:
An acquisition unit 401 configured to acquire a feature map including a plurality of pixel elements, a deconvolution kernel matrix including a plurality of deconvolution kernel elements, and deconvolution parameters;
a calculation unit 402 configured to calculate, for each deconvolution kernel element of the plurality of deconvolution kernel elements, a product of the deconvolution kernel element and each pixel element of the plurality of pixel elements to obtain a plurality of product results;
a first determining unit 403 configured to determine, for each product result of the plurality of product results, a target position of the product result in a target result graph based on a position of a pixel element corresponding to the product result in the feature graph, a position of a deconvolution kernel element corresponding to the product result in the deconvolution kernel matrix, and the deconvolution parameter; and
a second determining unit 404 is configured to determine the target result map based on respective target positions of the plurality of product results.
According to some embodiments, the second determining unit 404 comprises: a first determining subunit configured to determine, for each product result of the plurality of product results, a storage address of the product result in a target storage unit for storing the target result map, based on a target location of the product result and a preset storage rule; and a storage unit configured to store the plurality of product results to the target storage unit based on respective corresponding storage addresses of the plurality of product results, to obtain the target result map.
According to some embodiments, the second determining unit 404 comprises: a calculating subunit configured to calculate a sum of a first product result and a second product result of the plurality of product results in response to determining that respective target positions of the first product result and the second product result are both first positions; and a second determining subunit configured to determine a first target result element located at the first position in the target result graph as a sum of the first product result and the second product result.
According to some embodiments, the first determining unit 403 is configured to: for a third product result of the plurality of product results, in response to determining that a first pixel element corresponding to the third product result is located at a first reference position in the feature map, and in response to determining that a first deconvolution kernel element corresponding to the third product result is located at a second reference position in the deconvolution kernel matrix, determining that a target position of the third product result in a target result map is a preset reference position; for a fourth product result of the plurality of product results, responsive to determining that a second pixel element corresponding to the fourth product result is adjacent to the first pixel element, and responsive to determining that the fourth product result corresponds to the first deconvolution kernel element, determining a target position of the fourth product result in a target result graph based on the preset reference position, the relative positional relationship of the first pixel element and the second pixel element, and the deconvolution parameter.
According to some embodiments, the first determining unit 403 is further configured to: for a fifth product result of the plurality of product results, responsive to determining that the fifth product result corresponds to the first pixel element, and responsive to determining that a second deconvolution kernel element corresponding to the fifth product result is adjacent to the first deconvolution kernel element, determining a target position of the fifth product result in a target result graph based on the preset reference position, the relative positional relationship of the first deconvolution kernel element and the second deconvolution kernel element, and the deconvolution parameter.
According to some embodiments, the first determining unit 403 is configured to: determining the relative offset of the target position of the fourth product result in a target result diagram and the preset reference position based on the relative position relation between the first pixel element and the second pixel element and the deconvolution parameter; and determining a target position of the fourth product result in a target result graph based on the preset reference position and the relative offset.
According to an aspect of the present disclosure, there is also provided a chip including the image deconvolution apparatus 400 as described above.
According to another aspect of the present disclosure, there is also provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image deconvolution method described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described image deconvolution method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-described image deconvolution method.
Referring to fig. 5, a block diagram of an electronic device 500, which may be a server or a client of the present disclosure and is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the apparatus 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, ROM 502, and RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506, an output unit 507, a storage unit 508, and a communication unit 509. The input unit 506 may be any type of device capable of inputting information to the device 500; it may receive input numeric or character information, generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 507 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 508 may include, but is not limited to, magnetic disks and optical disks. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the various methods and processes described above, such as the image deconvolution method. For example, in some embodiments, the image deconvolution method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into RAM 503 and executed by the computing unit 501, one or more steps of the image deconvolution method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the image deconvolution method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. It should be understood that, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (16)

1. An image deconvolution method, comprising:
acquiring a feature map containing a plurality of pixel elements, a deconvolution kernel matrix containing a plurality of deconvolution kernel elements and deconvolution parameters;
Calculating, for each deconvolution kernel element of the plurality of deconvolution kernel elements, a product of the deconvolution kernel element and each pixel element of the plurality of pixel elements to obtain a plurality of product results;
for each product result in the plurality of product results, determining a target position of the product result in a target result graph based on a position of a pixel element corresponding to the product result in the feature graph, a position of a deconvolution kernel element corresponding to the product result in the deconvolution kernel matrix, and the deconvolution parameter; and
and determining the target result graph based on the respective target positions of the product results.
2. The method of claim 1, wherein the determining the target result map based on respective target locations of the plurality of product results comprises:
for each product result in the plurality of product results, determining a storage address of the product result in a target storage unit for storing the target result map based on a target position of the product result and a preset storage rule; and
and storing the multiple product results to the target storage unit based on the corresponding storage addresses of the multiple product results, so as to obtain the target result graph.
3. The method of claim 1 or 2, wherein the determining the target result map based on respective target locations of the plurality of product results comprises:
responsive to determining that respective target positions of a first product result and a second product result of the plurality of product results are both first positions, calculating a sum of the first product result and the second product result; and
and determining that a first target result element positioned at the first position in the target result graph is the sum of the first product result and the second product result.
4. The method of any one of claims 1-3, wherein said determining, for each product result of said plurality of product results, a target position of the product result in a target result graph based on a position of a pixel element corresponding to the product result in said feature graph, a position of a deconvolution kernel element corresponding to the product result in said deconvolution kernel matrix, and the deconvolution parameter comprises:
for a third product result of the plurality of product results, in response to determining that a first pixel element corresponding to the third product result is located at a first reference position in the feature map, and in response to determining that a first deconvolution kernel element corresponding to the third product result is located at a second reference position in the deconvolution kernel matrix, determining that a target position of the third product result in a target result map is a preset reference position;
For a fourth product result of the plurality of product results, responsive to determining that a second pixel element corresponding to the fourth product result is adjacent to the first pixel element, and responsive to determining that the fourth product result corresponds to the first deconvolution kernel element, determining a target position of the fourth product result in a target result graph based on the preset reference position, the relative positional relationship of the first pixel element and the second pixel element, and the deconvolution parameter.
5. The method of claim 4, wherein said determining, for each product result of the plurality of product results, a target position of the product result in a target result graph based on a position of a pixel element corresponding to the product result in the feature graph, a position of a deconvolution kernel element corresponding to the product result in the deconvolution kernel matrix, and a deconvolution parameter further comprises:
for a fifth product result of the plurality of product results, responsive to determining that the fifth product result corresponds to the first pixel element, and responsive to determining that a second deconvolution kernel element corresponding to the fifth product result is adjacent to the first deconvolution kernel element, determining a target position of the fifth product result in a target result graph based on the preset reference position, the relative positional relationship of the first deconvolution kernel element and the second deconvolution kernel element, and the deconvolution parameter.
6. The method of claim 4 or 5, wherein the determining a target position of the fourth product result in a target result graph based on the preset reference position, the relative positional relationship of the first pixel element and the second pixel element, and the deconvolution parameter comprises:
determining the relative offset of the target position of the fourth product result in a target result diagram and the preset reference position based on the relative position relation between the first pixel element and the second pixel element and the deconvolution parameter; and
and determining the target position of the fourth product result in a target result diagram based on the preset reference position and the relative offset.
7. An image deconvolution apparatus, comprising:
an acquisition unit configured to acquire a feature map including a plurality of pixel elements, a deconvolution kernel matrix including a plurality of deconvolution kernel elements, and deconvolution parameters;
a calculation unit configured to calculate, for each deconvolution kernel element of the plurality of deconvolution kernel elements, a product of the deconvolution kernel element and each pixel element of the plurality of pixel elements to obtain a plurality of product results;
a first determining unit configured to determine, for each product result of the plurality of product results, a target position of the product result in a target result graph based on a position of a pixel element corresponding to the product result in the feature graph, a position of a deconvolution kernel element corresponding to the product result in the deconvolution kernel matrix, and the deconvolution parameter; and
And a second determining unit configured to determine the target result map based on respective target positions of the plurality of product results.
8. The apparatus of claim 7, wherein the second determining unit comprises:
a first determining subunit configured to determine, for each product result of the plurality of product results, a storage address of the product result in a target storage unit for storing the target result map, based on a target location of the product result and a preset storage rule; and
and a storage unit configured to store the plurality of product results to the target storage unit based on respective storage addresses of the plurality of product results, so as to obtain the target result graph.
9. The apparatus according to claim 7 or 8, wherein the second determining unit includes:
a calculating subunit configured to calculate a sum of a first product result and a second product result of the plurality of product results in response to determining that respective target positions of the first product result and the second product result are both first positions; and
a second determining subunit configured to determine a first target result element located at the first position in the target result graph as a sum of the first product result and the second product result.
10. The apparatus according to any of claims 7-9, wherein the first determination unit is configured to:
for a third product result of the plurality of product results, in response to determining that a first pixel element corresponding to the third product result is located at a first reference position in the feature map, and in response to determining that a first deconvolution kernel element corresponding to the third product result is located at a second reference position in the deconvolution kernel matrix, determining that a target position of the third product result in a target result map is a preset reference position;
for a fourth product result of the plurality of product results, responsive to determining that a second pixel element corresponding to the fourth product result is adjacent to the first pixel element, and responsive to determining that the fourth product result corresponds to the first deconvolution kernel element, determining a target position of the fourth product result in a target result graph based on the preset reference position, the relative positional relationship of the first pixel element and the second pixel element, and the deconvolution parameter.
11. The apparatus of claim 10, wherein the first determination unit is further configured to:
For a fifth product result of the plurality of product results, responsive to determining that the fifth product result corresponds to the first pixel element, and responsive to determining that a second deconvolution kernel element corresponding to the fifth product result is adjacent to the first deconvolution kernel element, determining a target position of the fifth product result in a target result graph based on the preset reference position, the relative positional relationship of the first deconvolution kernel element and the second deconvolution kernel element, and the deconvolution parameter.
12. The apparatus according to claim 10 or 11, wherein the first determining unit is configured to:
determining the relative offset of the target position of the fourth product result in a target result diagram and the preset reference position based on the relative position relation between the first pixel element and the second pixel element and the deconvolution parameter; and
and determining the target position of the fourth product result in a target result diagram based on the preset reference position and the relative offset.
13. A chip comprising the apparatus of any one of claims 7-12.
14. An electronic device, comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein the method comprises the steps of
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
15. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
16. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any of claims 1-6.
CN202310143171.9A 2023-02-09 2023-02-09 Image deconvolution method and device, equipment and medium Pending CN116129245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310143171.9A CN116129245A (en) 2023-02-09 2023-02-09 Image deconvolution method and device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310143171.9A CN116129245A (en) 2023-02-09 2023-02-09 Image deconvolution method and device, equipment and medium

Publications (1)

Publication Number Publication Date
CN116129245A true CN116129245A (en) 2023-05-16

Family

ID=86300992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310143171.9A Pending CN116129245A (en) 2023-02-09 2023-02-09 Image deconvolution method and device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116129245A (en)

Similar Documents

Publication Publication Date Title
CN116306396A (en) Chip verification method and device, equipment and medium
CN113723305A (en) Image and video detection method, device, electronic equipment and medium
CN116205819B (en) Character image generation method, training method and device of deep learning model
CN115511779B (en) Image detection method, device, electronic equipment and storage medium
CN114510308B (en) Method, device, equipment and medium for storing application page by mobile terminal
CN115601555A (en) Image processing method and apparatus, device and medium
CN115393514A (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method, device and equipment
CN114429678A (en) Model training method and device, electronic device and medium
CN114327718A (en) Interface display method and device, equipment and medium
CN114092556A (en) Method, apparatus, electronic device, medium for determining human body posture
CN116129245A (en) Image deconvolution method and device, equipment and medium
CN115512131B (en) Image detection method and training method of image detection model
CN115762515B (en) Processing and application method, device and equipment for neural network for voice recognition
CN114117046B (en) Data processing method, device, electronic equipment and medium
CN114882331A (en) Image processing method, apparatus, device and medium
CN115170536B (en) Image detection method, training method and device of model
CN115797455B (en) Target detection method, device, electronic equipment and storage medium
CN117196927A (en) Image processing method, device, equipment and medium
CN116541090A (en) Data processing method, device, equipment and medium
CN113961633A (en) Data processing method, system, electronic device and computer storage medium
CN115222598A (en) Image processing method, apparatus, device and medium
CN117196932A (en) Image processing method, device, equipment and medium
CN114758114A (en) Model updating method, image processing method, device, electronic device and medium
CN114998403A (en) Depth prediction method, depth prediction device, electronic apparatus, and medium
CN115981839A (en) Memory allocation method and device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: CW District, 4th floor, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100101

Applicant after: Kunlun core (Beijing) Technology Co.,Ltd.

Address before: Baidu building, No. 10, Shangdi 10th Street, Haidian District, Beijing 100086

Applicant before: Kunlun core (Beijing) Technology Co.,Ltd.