CN112348910A - Method, device, equipment and computer readable medium for acquiring image - Google Patents



Publication number
CN112348910A
Authority
CN
China
Prior art keywords
image
type
variance
channel
smoothing parameter
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202011162662.0A
Other languages
Chinese (zh)
Inventor
李华夏 (Li Huaxia)
Current Assignee (the listed assignee may be inaccurate)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202011162662.0A
Publication of CN112348910A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Abstract

Embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a computer readable medium for acquiring an image. One embodiment of the method comprises: receiving an image to be processed and a reference image, wherein the image style of the reference image is the target image style of the image to be processed; extracting image features of the image to be processed to obtain first type image features; extracting image features of the reference image to obtain second type image features; obtaining a smoothing parameter according to the first type image features and the second type image features, wherein the smoothing parameter represents the change rate of converting the second type image features into the first type image features; and generating a target image according to the smoothing parameter, the first type image features, the second type image features, and the image to be processed. The method realizes migration of image color: the color features can be adjusted through the smoothing parameter, improving the migration effect and making the color of the target image more natural.

Description

Method, device, equipment and computer readable medium for acquiring image
Technical Field
Embodiments of the present disclosure relate to the field of image processing, and in particular, to a method, an apparatus, a device, and a computer-readable medium for acquiring an image.
Background
Color migration is a research direction in computer vision: the color features of a reference image are migrated into an image to be processed, producing a target image that has the color features of the reference image and the shape features of the image to be processed. Existing color migration methods have shortcomings; for example, when the color features of the image to be processed and the reference image differ greatly, the migration effect may be poor and the color of the migrated image may look unnatural.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Some embodiments of the present disclosure propose a method, apparatus, device and computer readable medium for acquiring an image to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method of acquiring an image, the method comprising: receiving an image to be processed and a reference image, wherein the image style of the reference image is the target image style of the image to be processed; extracting image features of an image to be processed to obtain first type image features; extracting image features of the reference image to obtain second type image features; obtaining a smoothing parameter according to the first type image characteristic and the second type image characteristic, wherein the smoothing parameter is used for representing the change rate of converting the second type image characteristic into the first type image characteristic; and generating a target image according to the smoothing parameter, the first type image characteristic, the second type image characteristic and the image to be processed.
In a second aspect, some embodiments of the present disclosure provide an apparatus for acquiring an image, the apparatus comprising: a receiving unit configured to receive an image to be processed and a reference image, the image style of the reference image being the target image style of the image to be processed; a first extraction unit configured to extract image features of the image to be processed to obtain first type image features; a second extraction unit configured to extract image features of the reference image to obtain second type image features; a first generation unit configured to obtain a smoothing parameter according to the first type image features and the second type image features, wherein the smoothing parameter represents the change rate of converting the second type image features into the first type image features; and a second generation unit configured to generate a target image according to the smoothing parameter, the first type image features, the second type image features, and the image to be processed.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method of acquiring an image as in the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of acquiring an image as in the first aspect.
One of the various embodiments of the present disclosure described above has the following beneficial effects. Extracting the first type image features of the image to be processed and the second type image features of the reference image makes the subsequent feature migration more accurate. A smoothing parameter is then obtained from the first type image features and the second type image features; it represents the change rate of converting the second type image features into the first type image features. When the color features of the image to be processed and the reference image differ greatly, the color feature migration is adjusted through the smoothing parameter, so that the color of the target image obtained after migration is more natural.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of one application scenario of a method of acquiring an image according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a method of acquiring an image according to the present disclosure;
FIG. 3 is a flow diagram of further embodiments of a method of acquiring an image according to the present disclosure;
FIG. 4 is a flow diagram of still further embodiments of methods of acquiring an image according to the present disclosure;
FIG. 5 is a schematic block diagram of some embodiments of an apparatus for acquiring images according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions relevant to the invention are shown in the drawings. The embodiments in the present disclosure, and the features of those embodiments, may be combined with each other where no conflict arises.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 is a schematic illustration of one application scenario of a method of acquiring an image according to some embodiments of the present disclosure.
In the application scenario of fig. 1, the electronic device 101 first receives the image to be processed 102 and the reference image 103. The electronic device 101 then extracts the first type image features 104 of the image to be processed 102 and the second type image features 105 of the reference image 103. The first type image features 104 may represent the color features of the image to be processed 102; the second type image features 105 may represent the color features of the reference image 103. The electronic device 101 processes the first type image features 104 and the second type image features 105, and the resulting smoothing parameters 106 may be (0.5, 0.5). The first "0.5" of (0.5, 0.5) is the variance smoothing parameter for the first type image features 104 and the second type image features 105; the second "0.5" is the mean smoothing parameter. The smoothing parameters 106 represent the change rate of converting the second type image features 105 into the first type image features 104; they control the migration rate of color features, making the color migration of the image more natural. The electronic device 101 performs a matrix operation on the image to be processed 102, the first type image features 104, the second type image features 105, and the smoothing parameters 106 to obtain the target image 107.
The electronic device 101 may be hardware or software. When it is hardware, it may be implemented as a distributed cluster of multiple servers or terminal devices, or as a single server or terminal device. When it is software, it may be installed in any of the hardware devices listed above and implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of electronic devices in fig. 1 is merely illustrative. There may be any number of electronic devices, as desired for implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a method of acquiring an image according to the present disclosure is shown. The method for acquiring the image comprises the following steps:
step 201, receiving an image to be processed and a reference image.
In some embodiments, the subject performing the method of acquiring an image (e.g., the electronic device 101 shown in fig. 1) may receive the images via a wired or wireless connection. The images comprise the image to be processed and the reference image. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (Ultra-Wideband) connection, or other wireless connection means now known or developed in the future.
In some embodiments, the above-mentioned to-be-processed image and the reference image may be obtained from an existing public database, or may be photographed by a camera. The image to be processed and the reference image may be any images. As an example, the image to be processed may be an image showing a chicken, a peacock, a blue sky, or the like, the reference image may be an image showing a forest, a wine bottle, a rainbow, or the like, and the image style of the reference image may be a target image style of the image to be processed. The target image style may be, for example, an antique style, a nostalgic style, or the like.
Step 202, extracting image features of the image to be processed to obtain first type image features.
In some embodiments, based on the image to be processed in step 201, the executing entity (e.g., the electronic device shown in fig. 1) extracts the first type image features of the image to be processed; the feature extraction may be performed through a network model or a feature extraction algorithm. By way of example, the network model may be a LeNet, AlexNet, VGG, NiN, or GoogLeNet network. The feature extraction algorithm may be, for example, SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), HOG (Histogram of Oriented Gradients), or DoG (Difference of Gaussians). As an example, the first type image features may include the mean of each channel and the variance of each channel of the image to be processed.
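As a concrete sketch of such a per-channel feature (the patent does not prescribe an implementation, and the function name below is hypothetical), the channel-wise mean and variance can be computed as follows:

```python
import numpy as np

def channel_stats(image):
    """Per-channel mean and variance of an H x W x C image.

    A minimal sketch of the "first type image features" described above:
    one mean and one variance per channel.
    """
    image = np.asarray(image, dtype=np.float64)
    means = image.mean(axis=(0, 1))      # shape (C,)
    variances = image.var(axis=(0, 1))   # shape (C,)
    return means, variances

# Toy 2 x 2 image with 3 channels.
img = np.array([[[0.0, 1.0, 2.0], [2.0, 1.0, 2.0]],
                [[0.0, 1.0, 2.0], [2.0, 1.0, 2.0]]])
means, variances = channel_stats(img)
# means -> [1. 1. 2.], variances -> [1. 0. 0.]
```

The same routine applies unchanged to the reference image in step 203, yielding the second type image features.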
And step 203, extracting the image characteristics of the reference image to obtain second type image characteristics.
In some embodiments, based on the reference image in step 201, the executing entity (e.g., the electronic device shown in fig. 1) extracts the second type image features of the reference image; the feature extraction may be performed through a network model or a feature extraction algorithm. By way of example, the network model may be a LeNet, AlexNet, VGG, NiN, or GoogLeNet network. The feature extraction algorithm may be, for example, SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), HOG (Histogram of Oriented Gradients), or DoG (Difference of Gaussians). As an example, the second type image features may include the mean of each channel and the variance of each channel of the reference image.
And 204, obtaining a smoothing parameter according to the first type image characteristic and the second type image characteristic.
In an optional implementation of some embodiments, a variance change rate and a mean change rate are generated from the first type image features and the second type image features, and the smoothing parameter is generated from the variance change rate and the mean change rate.
The variance change rate can be obtained as follows: for the variance of each channel in the first type image features, compute the difference between that variance and the variance of the corresponding channel in the second type image features, and take the ratio of this difference to the first type variance as the initial variance change rate of that channel. Obtain the initial variance change rates of the other channels in the same way, and take the smallest of them as the final variance change rate.
The mean change rate can be obtained analogously: for the mean of each channel in the first type image features, compute the difference between that mean and the mean of the corresponding channel in the second type image features, and take the ratio of this difference to the first type mean as the initial mean change rate of that channel. Obtain the initial mean change rates of the other channels in the same way, and take the smallest of them as the final mean change rate.
For example, on channel a: compute the difference between the variance of channel a in the first type image features and the variance of channel a in the second type image features, and take the ratio of this difference to the variance of channel a in the first type image features as the variance change rate of channel a. Obtain the variance change rates of the other channels similarly, and select the minimum over all channels as the final variance change rate; the final mean change rate is selected in the same way from the per-channel mean change rates. As an example, if the per-channel mean change rates are (0.3, 0.6, 0.7), the resulting mean change rate is 0.3; if the per-channel variance change rates are (-0.6, 0.2, 0.3), the resulting variance change rate is -0.6.
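The per-channel rule above can be sketched in a few lines (a hedged reading of the text; the function name and the exact normalisation are assumptions):

```python
import numpy as np

def change_rate(first, second):
    """Channel-wise relative change (first - second) / first, then the minimum.

    Follows the rule described above: compute an initial change rate per
    channel and keep the smallest one as the final change rate. Works for
    either the per-channel variances or the per-channel means.
    """
    first = np.asarray(first, dtype=np.float64)
    second = np.asarray(second, dtype=np.float64)
    per_channel = (first - second) / first
    return float(per_channel.min())

# Per-channel means chosen so the rates match the example in the text:
mean_rate = change_rate([10.0, 10.0, 10.0], [7.0, 4.0, 3.0])
# per-channel rates (0.3, 0.6, 0.7) -> final mean change rate 0.3
```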
In an optional implementation manner of some embodiments, a variance smoothing parameter is generated according to a preset variance threshold and a variance change rate; generating a mean value smoothing parameter according to a preset mean value threshold value and a mean value change rate; and generating a smoothing parameter according to the variance smoothing parameter and the mean smoothing parameter.
As an example: if the variance change rate is less than -0.5 or greater than 1, the variance smoothing parameter is 0.5; otherwise it is 0. If the mean change rate is less than -0.1, the mean smoothing parameter is 0.5; otherwise it is 0. For a variance change rate of 0.3 and a mean change rate of -0.3, the variance smoothing parameter is 0 and the mean smoothing parameter is 0.5, and the resulting smoothing parameter is (0, 0.5).
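The thresholding rule of this example can be sketched as follows (the threshold values are the illustrative ones from the text, not fixed by the method, and the function name is hypothetical):

```python
def smoothing_parameters(var_rate, mean_rate,
                         var_low=-0.5, var_high=1.0, mean_low=-0.1):
    """Map change rates to smoothing parameters by thresholding.

    A change rate outside its allowed band triggers a smoothing
    parameter of 0.5; otherwise the smoothing parameter is 0.
    """
    var_smooth = 0.5 if (var_rate < var_low or var_rate > var_high) else 0.0
    mean_smooth = 0.5 if mean_rate < mean_low else 0.0
    return var_smooth, mean_smooth

# The example from the text: variance change rate 0.3, mean change rate -0.3.
params = smoothing_parameters(0.3, -0.3)
# params -> (0.0, 0.5)
```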
And step 205, generating a target image according to the smoothing parameter, the first type image characteristic, the second type image characteristic and the image to be processed.
In some embodiments, the smoothing parameter, the first type image feature, the second type image feature and the image to be processed may be fused by a matrix operation to obtain the target image.
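The patent does not spell out the matrix operation. One plausible instantiation, sketched below purely as an assumption, blends the reference statistics toward the source statistics by the smoothing parameters and then applies a channel-wise mean/variance transfer, in the spirit of classic color transfer:

```python
import numpy as np

def transfer(image, first_mean, first_var, second_mean, second_var,
             var_smooth, mean_smooth):
    """Smoothed channel-wise mean/variance transfer (an assumed fusion rule).

    With smoothing parameters (0, 0) this reduces to plain mean/variance
    color transfer toward the reference statistics; larger smoothing
    values pull the target statistics back toward the image's own.
    """
    image = np.asarray(image, dtype=np.float64)
    # Blend target statistics between reference (second) and source (first).
    tgt_mean = (1 - mean_smooth) * np.asarray(second_mean) \
        + mean_smooth * np.asarray(first_mean)
    tgt_var = (1 - var_smooth) * np.asarray(second_var) \
        + var_smooth * np.asarray(first_var)
    # Normalize by the source statistics, then re-scale and re-center.
    normalized = (image - first_mean) / np.sqrt(np.asarray(first_var) + 1e-8)
    return normalized * np.sqrt(tgt_var) + tgt_mean
```

Under this reading, smoothing parameters of (1, 1) return the input image unchanged, so each parameter interpolates between "no migration" and "full migration" of its statistic.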
Some embodiments of the present disclosure provide methods that may derive a first type of image feature based on an image to be processed. A second type of image feature may be derived based on the reference image. Thus, a smoothing parameter may be derived from the first type of image feature and the second type of image feature. The smoothing parameter is used to characterize a rate of change of the second type of image feature to the first type of image feature. Under the condition that the difference between the color features of the image to be processed and the reference image is large, the color feature migration is adjusted through the smoothing parameters, so that the image color of the target image obtained after the color feature migration is more natural.
With further reference to fig. 3, a flow 300 of further embodiments of a method of acquiring an image is shown. The process 300 of the method for acquiring an image includes the following steps:
step 301, receiving an image to be processed and a reference image.
Step 302, performing feature processing on the image to be processed through the type channel to obtain a first type image.
In some embodiments, the type channel may include at least one of: a luminance channel, a first color channel, a second color channel. As an example, the type channel may be the three channels of the Lab color model, where L denotes the luminance channel and a and b are the two color channels. The colors in a range from dark green (low channel values) through gray (medium values) to bright pink-red (high values), and the colors in b range from bright blue (low channel values) through gray (medium values) to yellow (high values).
Step 303, obtaining a first type mean value and a first type variance according to the mean value and the variance of each channel parameter in the type channel of the first type image.
In some embodiments, as an example, the type channel may be the Lab color channel. The means of the L, a, and b channel parameters in the first type image are calculated respectively, so the first type mean corresponding to the Lab color channel may be [83, 196, 61], where "83" is the mean of the L channel parameters, "196" the mean of the a channel parameters, and "61" the mean of the b channel parameters. The variances of the L, a, and b channel parameters in the first type image are then calculated respectively, giving the first type variance corresponding to the Lab color channel, which may be [13, 21, 23].
Step 304, generating a first type image feature according to the first type mean and the first type variance.
In some embodiments, as an example, the first type mean may be [53, 136, 62] and the first type variance may be [20, 31, 14]. The first type means are assembled into a vector (53, 136, 62) and the first type variances into a vector (20, 31, 14). The two vectors are taken as the first type image features.
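Steps 302-304 can be sketched end to end. The sRGB-to-Lab conversion below uses the standard D65 formulas, since the patent does not prescribe a particular conversion routine, and the function names are hypothetical:

```python
import numpy as np

# Standard sRGB -> XYZ matrix and D65 white point (one conventional
# choice; the patent does not fix the conversion).
_M = np.array([[0.4124564, 0.3575761, 0.1804375],
               [0.2126729, 0.7151522, 0.0721750],
               [0.0193339, 0.1191920, 0.9503041]])
_WHITE = np.array([0.95047, 1.0, 1.08883])

def rgb_to_lab(rgb):
    """Convert an H x W x 3 sRGB image (values in [0, 1]) to Lab."""
    rgb = np.asarray(rgb, dtype=np.float64)
    linear = np.where(rgb <= 0.04045, rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = (linear @ _M.T) / _WHITE
    eps = (6.0 / 29.0) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz),
                 xyz / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def lab_type_features(rgb):
    """First type image features per steps 302-304: the per-channel
    Lab mean vector and variance vector."""
    lab = rgb_to_lab(rgb)
    return lab.mean(axis=(0, 1)), lab.var(axis=(0, 1))
```

For a uniform white image, `lab_type_features` returns a mean vector close to (100, 0, 0) and zero variances, matching the expected Lab representation of white.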
Step 305, extracting the image features of the reference image to obtain the second type image features.
And step 306, obtaining a smoothing parameter according to the first type image characteristic and the second type image characteristic.
And 307, generating a target image according to the smoothing parameter, the first type image characteristic, the second type image characteristic and the image to be processed.
In some embodiments, specific implementations of steps 301, 305, 306, and 307 and technical effects thereof may refer to steps 201, 203, 204, and 205 in the embodiment corresponding to fig. 2, and are not described herein again.
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the flow 300 of the method for acquiring an image in some embodiments corresponding to fig. 3 represents the step of extracting the first type image features. The first type image characteristic is determined by a first type mean value and a first type variance obtained by the image to be processed through a type channel. Therefore, the image characteristics under the type channel can be obtained, and the color characteristics of the image to be processed can be extracted more accurately.
With further reference to fig. 4, a flow 400 of further embodiments of methods of acquiring an image is shown. The process 400 of the method for acquiring an image comprises the following steps:
step 401, receiving an image to be processed and a reference image.
Step 402, extracting image features of the image to be processed to obtain first type image features.
And 403, performing feature processing on the reference image through the type channel to obtain a second type image.
In some embodiments, the type channel may include at least one of: a luminance channel, a first color channel, a second color channel. As an example, the type channel may be the three channels of the Lab color model, where L denotes the luminance channel and a and b are the two color channels. The colors in a range from dark green (low channel values) through gray (medium values) to bright pink-red (high values), and the colors in b range from bright blue (low channel values) through gray (medium values) to yellow (high values).
And step 404, obtaining a second type mean value and a second type variance according to the mean value and the variance of each channel parameter in the type channel of the second type image.
In some embodiments, as an example, the type channel may be the Lab color channel. The means of the L, a, and b channel parameters in the second type image are calculated respectively, so the second type mean corresponding to the Lab color channel may be [63, 156, 31]. The variances of the L, a, and b channel parameters in the second type image are then calculated respectively, giving the second type variance corresponding to the Lab color channel, which may be [23, 22, 34].
Step 405, generating a second type image feature according to the second type mean and the second type variance.
In some embodiments, as an example, the second type mean may be [51, 126, 82] and the second type variance may be [11, 32, 34]. The second type means are assembled into a vector (51, 126, 82) and the second type variances into a vector (11, 32, 34). The two vectors are taken as the second type image features.
And 406, obtaining a smoothing parameter according to the first type image characteristic and the second type image characteristic.
Step 407, generating a target image according to the smoothing parameter, the first type image feature, the second type image feature and the image to be processed.
In some embodiments, specific implementation of steps 401, 402, 406, and 407 and technical effects thereof may refer to steps 201, 202, 204, and 205 in the embodiment corresponding to fig. 2, and are not described herein again.
As can be seen from fig. 4, compared with the description of some embodiments corresponding to fig. 3, the flow 400 of the method for acquiring an image in some embodiments corresponding to fig. 4 embodies the step of extracting the features of the second type image. The second type image characteristic is determined by a second type mean value and a second type variance obtained by the reference image through a type channel. Therefore, the image characteristics under the type channel can be obtained, and the color characteristics of the reference image can be more accurately extracted.
With further reference to fig. 5, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of an apparatus for acquiring images, which correspond to those of the method embodiments illustrated in fig. 2, and which may be particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for acquiring an image of some embodiments includes: a receiving unit 501 configured to receive an image to be processed and a reference image, an image style of the reference image being a target image style of the image to be processed; a first extraction unit 502 configured to extract image features of an image to be processed, resulting in a first type of image features; a second extraction unit 503 configured to extract image features of the reference image, resulting in a second type of image features; a first generating unit 504 configured to obtain a smoothing parameter according to the first type image feature and the second type image feature, wherein the smoothing parameter is used for representing a change rate of converting the second type image feature into the first type image feature; and a second generating unit 505 configured to generate a target image according to the smoothing parameter, the first type image feature, the second type image feature and the image to be processed.
In an optional implementation of some embodiments, the first extraction unit 502 is further configured to: performing feature processing on an image to be processed through a type channel to obtain a first type image, wherein the type channel comprises at least one of the following items: a luminance channel, a first color channel, a second color channel; obtaining a first type mean value and a first type variance according to the mean value and the variance of each channel parameter in the type channel of the first type image; and generating a first type image feature according to the first type mean value and the first type variance.
In an optional implementation of some embodiments, the second extraction unit 503 is further configured to: performing feature processing on the reference image through a type channel to obtain a second type image; obtaining a second type mean value and a second type variance according to the mean value and the variance of each channel parameter in the type channel of the second type image; and generating a second type of image feature according to the second type of mean value and the second type of variance.
In an optional implementation of some embodiments, the first generating unit 504 is further configured to: generating a variance change rate and a mean change rate according to the first type image characteristics and the second type image characteristics; and generating a smoothing parameter according to the variance change rate and the mean change rate.
In an optional implementation of some embodiments, the first generating unit 504 is further configured to: generating a variance smoothing parameter according to a preset variance threshold and a variance change rate; generating a mean value smoothing parameter according to a preset mean value threshold value and a mean value change rate; and generating a smoothing parameter according to the variance smoothing parameter and the mean smoothing parameter.
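The smoothing-parameter step above could be realized as follows. The source only states that change rates are formed from the two feature sets and then combined with preset thresholds; the ratio-based change rates and the threshold clamping rule below are assumptions made for illustration:

```python
import numpy as np

def smoothing_parameters(mu_first, var_first, mu_second, var_second,
                         mean_threshold=10.0, var_threshold=10.0):
    """Derive mean and variance smoothing parameters from the two feature sets.

    The ratios below model the rate of change for converting the
    second-type features into the first-type features; clamping each
    rate by its preset threshold is an illustrative assumption.
    """
    eps = 1e-8  # guard against division by zero
    var_rate = var_first / (var_second + eps)
    mean_rate = mu_first / (mu_second + eps)
    # bound each change rate by its preset threshold
    var_smooth = np.minimum(var_rate, var_threshold)
    mean_smooth = np.minimum(mean_rate, mean_threshold)
    return mean_smooth, var_smooth

m, v = smoothing_parameters(np.array([6.0]), np.array([4.0]),
                            np.array([3.0]), np.array([2.0]))
# m -> [2.], v -> [2.]
```

With an extreme change rate (e.g. a mean ratio of 100 against a threshold of 10), the threshold caps the smoothing parameter at 10, which keeps the style conversion from overshooting.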
It will be appreciated that the units described in the apparatus 500 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 500 and the units included therein, and are not described herein again.
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., the server or terminal device of fig. 1) 600 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device in some embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving an image to be processed and a type image, wherein the image style of the type image is the target image style of the image to be processed; extracting a first type image feature and a second type image feature of the type image; fusing the first type image features with the image to be processed to obtain a first image; fusing the second type image characteristics with the image to be processed to obtain a second image; and fusing the first image and the second image in a weight adjusting mode to obtain a target image with a target image style.
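The weight-adjusted fusion named in the program description above might look like the following sketch. Treating the fusion as a per-pixel convex combination with a single scalar weight is an assumption; the source does not specify how the weights are formed:

```python
import numpy as np

def fuse_by_weight(first_image, second_image, weight=0.5):
    """Blend the two intermediate images into the target image.

    `weight` in [0, 1] controls how much of the first image survives;
    modeling the weight adjustment as a convex combination is an
    illustrative assumption.
    """
    return weight * first_image + (1.0 - weight) * second_image
```

For example, with `weight=0.25` the target image takes a quarter of its value from the first image and three quarters from the second.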
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes a receiving unit, a first extraction unit, a second extraction unit, a first generation unit, and a second generation unit. The names of these units do not in some cases constitute a limitation on the unit itself, and for example, the receiving unit may also be described as a "unit that receives the image to be processed and the reference image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
According to one or more embodiments of the present disclosure, there is provided a method of acquiring an image, including: receiving an image to be processed and a reference image, wherein the image style of the reference image is the target image style of the image to be processed; extracting image features of an image to be processed to obtain first type image features; extracting image features of the reference image to obtain second type image features; obtaining a smoothing parameter according to the first type image characteristic and the second type image characteristic, wherein the smoothing parameter is used for representing the change rate of converting the second type image characteristic into the first type image characteristic; and generating a target image according to the smoothing parameter, the first type image characteristic, the second type image characteristic and the image to be processed.
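One plausible reading of the final generation step is an AdaIN-style statistic transfer in which the smoothing parameter interpolates between the image to be processed and a fully re-styled image. This concrete formula is an assumption for illustration, not quoted from the source:

```python
import numpy as np

def generate_target(image, mu_first, var_first, mu_second, var_second,
                    smooth=1.0):
    """Re-style `image` toward the reference statistics.

    Normalizes the image with its own (first-type) mean and variance,
    re-scales with the reference (second-type) statistics, then blends
    with the original using the smoothing parameter: smooth=0 keeps
    the source image, smooth=1 applies the full reference style.
    The blending role of the smoothing parameter is an assumption.
    """
    eps = 1e-8  # numerical guard for near-zero variance
    normalized = (image - mu_first) / np.sqrt(var_first + eps)
    styled = normalized * np.sqrt(var_second) + mu_second
    return smooth * styled + (1.0 - smooth) * image
```

Under this reading, an image with mean 2 and variance 4 pushed toward reference statistics (mean 10, variance 1) with `smooth=1.0` acquires the reference mean and variance exactly, while intermediate `smooth` values yield a gradual style conversion.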
According to one or more embodiments of the present disclosure, extracting image features of an image to be processed to obtain a first type of image features includes: performing feature processing on an image to be processed through a type channel to obtain a first type image, wherein the type channel comprises at least one of the following items: a luminance channel, a first color channel, a second color channel; obtaining a first type mean value and a first type variance according to the mean value and the variance of each channel parameter in the type channel of the first type image; and generating a first type image feature according to the first type mean value and the first type variance.
According to one or more embodiments of the present disclosure, extracting image features of a reference image to obtain second-type image features includes: performing feature processing on the reference image through a type channel to obtain a second type image; obtaining a second type mean value and a second type variance according to the mean value and the variance of each channel parameter in the type channel of the second type image; and generating a second type of image feature according to the second type of mean value and the second type of variance.
According to one or more embodiments of the present disclosure, deriving a smoothing parameter from a first type of image feature and a second type of image feature comprises: generating a variance change rate and a mean change rate according to the first type image characteristics and the second type image characteristics; and generating a smoothing parameter according to the variance change rate and the mean change rate.
According to one or more embodiments of the present disclosure, generating a smoothing parameter according to a variance change rate and a mean change rate includes: generating a variance smoothing parameter according to a preset variance threshold and a variance change rate; generating a mean value smoothing parameter according to a preset mean value threshold value and a mean value change rate; and generating a smoothing parameter according to the variance smoothing parameter and the mean smoothing parameter.
According to one or more embodiments of the present disclosure, there is provided an apparatus for acquiring an image, including: a receiving unit configured to receive an image to be processed and a reference image, an image style of the reference image being a target image style of the image to be processed; the image processing device comprises a first extraction unit, a second extraction unit and a third extraction unit, wherein the first extraction unit is configured to extract image features of an image to be processed to obtain a first type of image features; a second extraction unit configured to extract image features of the reference image, resulting in a second type of image features; the first generation unit is configured to obtain a smoothing parameter according to the first type image feature and the second type image feature, wherein the smoothing parameter is used for representing the change rate of converting the second type image feature into the first type image feature; and the second generation unit is configured to generate a target image according to the smoothing parameter, the first type image characteristic, the second type image characteristic and the image to be processed.
According to one or more embodiments of the present disclosure, the first extraction unit is further configured to: performing feature processing on an image to be processed through a type channel to obtain a first type image, wherein the type channel comprises at least one of the following items: a luminance channel, a first color channel, a second color channel; obtaining a first type mean value and a first type variance according to the mean value and the variance of each channel parameter in the type channel of the first type image; and generating a first type image feature according to the first type mean value and the first type variance.
According to one or more embodiments of the present disclosure, the second extraction unit is further configured to: performing feature processing on the reference image through a type channel to obtain a second type image; obtaining a second type mean value and a second type variance according to the mean value and the variance of each channel parameter in the type channel of the second type image; and generating a second type of image feature according to the second type of mean value and the second type of variance.
According to one or more embodiments of the present disclosure, the first generating unit is further configured to: generating a variance change rate and a mean change rate according to the first type image characteristics and the second type image characteristics; and generating a smoothing parameter according to the variance change rate and the mean change rate.
According to one or more embodiments of the present disclosure, the first generating unit is further configured to: generating a variance smoothing parameter according to a preset variance threshold and a variance change rate; generating a mean value smoothing parameter according to a preset mean value threshold value and a mean value change rate; and generating a smoothing parameter according to the variance smoothing parameter and the mean smoothing parameter.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combination of the above features, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.

Claims (12)

1. A method of acquiring an image, comprising:
receiving an image to be processed and a reference image, wherein the image style of the reference image is the target image style of the image to be processed;
extracting image features of the image to be processed to obtain first type image features;
extracting image features of the reference image to obtain second type image features;
obtaining a smoothing parameter according to the first type image feature and the second type image feature, wherein the smoothing parameter is used for representing the change rate of converting the second type image feature into the first type image feature;
and generating a target image according to the smoothing parameter, the first type image characteristic, the second type image characteristic and the image to be processed.
2. The method according to claim 1, wherein the extracting image features of the image to be processed to obtain first type image features comprises:
performing feature processing on the image to be processed through a type channel to obtain a first type image, wherein the type channel comprises at least one of the following items: a luminance channel, a first color channel, a second color channel;
obtaining a first type mean value and a first type variance according to the mean value and the variance of each channel parameter in the type channel of the first type image;
and generating the first type image feature according to the first type mean value and the first type variance.
3. The method of claim 2, wherein the extracting image features of the reference image to obtain second type image features comprises:
performing feature processing on the reference image through the type channel to obtain a second type image;
obtaining a second type mean value and a second type variance according to the mean value and the variance of each channel parameter in the type channel of the second type image;
and generating the second type image feature according to the second type mean value and the second type variance.
4. The method of claim 3, wherein said deriving a smoothing parameter from said first and second types of image features comprises:
generating a variance change rate and a mean change rate according to the first type image features and the second type image features;
and generating a smoothing parameter according to the variance change rate and the mean change rate.
5. The method of claim 4, wherein said generating a smoothing parameter from said rate of variance change and said rate of mean change comprises:
generating a variance smoothing parameter according to a preset variance threshold and the variance change rate;
generating a mean value smoothing parameter according to a preset mean value threshold value and the mean value change rate;
and generating a smoothing parameter according to the variance smoothing parameter and the mean smoothing parameter.
6. An apparatus for acquiring an image, comprising:
a receiving unit configured to receive an image to be processed and a reference image, an image style of the reference image being a target image style of the image to be processed;
the first extraction unit is configured to extract image features of the image to be processed to obtain first type image features;
a second extraction unit configured to extract image features of the reference image, resulting in a second type of image features;
the first generation unit is configured to obtain a smoothing parameter according to the first type image feature and the second type image feature, wherein the smoothing parameter is used for representing the change rate of converting the second type image feature into the first type image feature;
a second generating unit configured to generate a target image according to the smoothing parameter, the first type image feature, the second type image feature, and the image to be processed.
7. The apparatus for acquiring an image according to claim 6, wherein the first extraction unit is further configured to:
performing feature processing on the image to be processed through a type channel to obtain a first type image, wherein the type channel comprises at least one of the following items: a luminance channel, a first color channel, a second color channel;
obtaining a first type mean value and a first type variance according to the mean value and the variance of each channel parameter in the type channel of the first type image;
and generating the first type image feature according to the first type mean value and the first type variance.
8. The apparatus for acquiring an image according to claim 7, wherein the second extraction unit is further configured to:
performing feature processing on the reference image through the type channel to obtain a second type image;
obtaining a second type mean value and a second type variance according to the mean value and the variance of each channel parameter in the type channel of the second type image;
and generating the second type image feature according to the second type mean value and the second type variance.
9. The apparatus for acquiring an image according to claim 8, wherein the first generating unit is further configured to:
generating a variance change rate and a mean change rate according to the first type image features and the second type image features;
and generating a smoothing parameter according to the variance change rate and the mean change rate.
10. The apparatus for acquiring an image according to claim 9, wherein the first generating unit is further configured to:
generating a variance smoothing parameter according to a preset variance threshold and the variance change rate;
generating a mean value smoothing parameter according to a preset mean value threshold value and the mean value change rate;
and generating a smoothing parameter according to the variance smoothing parameter and the mean smoothing parameter.
11. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 5.
CN202011162662.0A 2020-10-27 2020-10-27 Method, device, equipment and computer readable medium for acquiring image Pending CN112348910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011162662.0A CN112348910A (en) 2020-10-27 2020-10-27 Method, device, equipment and computer readable medium for acquiring image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011162662.0A CN112348910A (en) 2020-10-27 2020-10-27 Method, device, equipment and computer readable medium for acquiring image

Publications (1)

Publication Number Publication Date
CN112348910A true CN112348910A (en) 2021-02-09

Family

ID=74360242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011162662.0A Pending CN112348910A (en) 2020-10-27 2020-10-27 Method, device, equipment and computer readable medium for acquiring image

Country Status (1)

Country Link
CN (1) CN112348910A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991153A (en) * 2021-03-11 2021-06-18 Oppo广东移动通信有限公司 Image color migration method and device, storage medium and electronic equipment
CN113284206A (en) * 2021-05-19 2021-08-20 Oppo广东移动通信有限公司 Information acquisition method and device, computer readable storage medium and electronic equipment


Similar Documents

Publication Publication Date Title
CN109816589B (en) Method and apparatus for generating cartoon style conversion model
CN111784565B (en) Image processing method, migration model training method, device, medium and equipment
CN109800732B (en) Method and device for generating cartoon head portrait generation model
CN110021052B (en) Method and apparatus for generating fundus image generation model
CN111784712B (en) Image processing method, device, equipment and computer readable medium
CN111784566A (en) Image processing method, migration model training method, device, medium and equipment
WO2020093724A1 (en) Method and device for generating information
CN110211030B (en) Image generation method and device
WO2022142875A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN111757100B (en) Method and device for determining camera motion variation, electronic equipment and medium
CN112348910A (en) Method, device, equipment and computer readable medium for acquiring image
CN112418249A (en) Mask image generation method and device, electronic equipment and computer readable medium
CN112419179A (en) Method, device, equipment and computer readable medium for repairing image
CN112241941B (en) Method, apparatus, device and computer readable medium for acquiring image
CN112258622A (en) Image processing method, image processing device, readable medium and electronic equipment
CN110097520B (en) Image processing method and device
CN112241744A (en) Image color migration method, device, equipment and computer readable medium
CN110084835B (en) Method and apparatus for processing video
CN111726476B (en) Image processing method, device, equipment and computer readable medium
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect
CN112488947A (en) Model training and image processing method, device, equipment and computer readable medium
CN110209851B (en) Model training method and device, electronic equipment and storage medium
CN112070888A (en) Image generation method, device, equipment and computer readable medium
CN111815508A (en) Image generation method, device, equipment and computer readable medium
CN110825480A (en) Picture display method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination