WO2022052669A1 - Background image generation method and apparatus, storage medium, and electronic device - Google Patents

Background image generation method and apparatus, storage medium, and electronic device

Info

Publication number
WO2022052669A1
WO2022052669A1 · PCT/CN2021/110248 · CN2021110248W
Authority
WO
WIPO (PCT)
Prior art keywords
image
local area
target
label
preset
Prior art date
Application number
PCT/CN2021/110248
Other languages
English (en)
French (fr)
Inventor
张曦
李羿瑺
颜波
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to EP21865725.2A priority Critical patent/EP4213098A4/en
Publication of WO2022052669A1 publication Critical patent/WO2022052669A1/zh
Priority to US18/148,429 priority patent/US20230222706A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G06F16/535 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/22 Cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to a background image generation method, a background image generation apparatus, a computer-readable storage medium, and an electronic device.
  • the present disclosure provides a background image generation method, a background image generation apparatus, a computer-readable storage medium, and an electronic device.
  • a method for generating a background image, comprising: acquiring a first image; searching a preset image library for a second image matching the first image; and performing toning processing on the second image according to the first image to generate a background image.
  • an apparatus for generating a background image, comprising a processor and a memory, wherein the processor is configured to execute the following program modules stored in the memory: an image acquisition module configured to acquire a first image; an image matching module configured to search a preset image library for a second image matching the first image; and a toning processing module configured to perform toning processing on the second image according to the first image, to generate a background image.
  • a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the background image generation method of the first aspect and possible implementations thereof.
  • an electronic device, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute the executable instructions to perform the background image generation method of the above-mentioned first aspect and possible implementations thereof.
  • FIG. 1 shows a schematic diagram of a system architecture in this exemplary embodiment
  • FIG. 2 shows a structural diagram of an electronic device in this exemplary embodiment
  • FIG. 3 shows a flowchart of a method for generating a background image in this exemplary embodiment
  • FIG. 4 shows a flowchart of generating the first image in this exemplary embodiment
  • FIG. 5 shows a flowchart of cropping the first image in this exemplary embodiment
  • FIG. 6 shows a schematic diagram of generating a first image in this exemplary embodiment
  • FIG. 7 shows a flowchart of searching for the second image in this exemplary embodiment
  • FIG. 8 shows a schematic diagram of image recognition processing in this exemplary embodiment
  • FIG. 9 shows a structural block diagram of an apparatus for generating a background image in this exemplary embodiment
  • FIG. 10 shows a structural block diagram of another background image generating apparatus in this exemplary embodiment.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments can be embodied in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
  • the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • numerous specific details are provided in order to give a thorough understanding of the embodiments of the present disclosure.
  • those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced without one or more of the specific details, or other methods, components, devices, steps, etc. may be employed.
  • well-known solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
  • the background images of mobile phones generally include the lock screen background and the desktop background, which are among the things users see most when using their phones every day; likewise, the dial image of a smart watch can stay always on, serving as a decoration for the watch. It can be seen that the background image has a great influence on the user's aesthetic experience.
  • background images usually come from two sources: one is the background styles built into the system, generally solid-color backgrounds, gradient backgrounds, and backgrounds with dynamically changing simple patterns; the other is an image selected by the user from the album and set as the background.
  • the background images from these two sources have certain limitations: the types of background styles built into the system are very limited, and most of the pictures in the user's album are taken in daily life, which inevitably have shooting defects and are not well suited as backgrounds.
  • exemplary embodiments of the present disclosure provide a method for generating a background image, and the generated background image can be applied to: the lock screen background and desktop background of a mobile phone or tablet computer, the dial image of a smart watch, the screen-saver background of a device, etc.
  • FIG. 1 shows a schematic diagram of a system architecture of an exemplary embodiment of the present disclosure.
  • the system architecture 100 may include: a terminal device 110 and a server 120.
  • the terminal device 110 may be a mobile phone, a tablet computer, a digital camera, a personal computer, a smart wearable device, or the like.
  • the methods provided by the embodiments of the present disclosure can be executed by the terminal device 110 alone; for example, the terminal device 110 generates a background image by executing the background image generation method.
  • the methods provided by the embodiments of the present disclosure can also be executed by the server 120.
  • after the terminal device 110 captures an image, it transmits the image to the server 120; the server 120 generates a background image by executing the background image generation method, and returns the background image to the terminal device 110 for display.
  • This disclosure does not limit this.
  • Exemplary embodiments of the present disclosure also provide an electronic device for implementing the above background image generation method.
  • the electronic device may be the terminal device 110 or the server 120 in FIG. 1 .
  • the electronic device includes at least a processor and a memory, where the memory is used to store executable instructions of the processor, and can also store application data, such as image data, etc.; the processor is configured to execute the background image generation method by executing the executable instructions.
  • the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a USB (Universal Serial Bus) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, a headphone interface 274, a sensor module 280, a display screen 290, a camera module 291, an indicator 292, a motor 293, keys 294, a Subscriber Identification Module (SIM) card interface 295, and the like.
  • the processor 210 may include one or more processing units, for example, the processor 210 may include an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), controller, encoder, decoder, digital signal processor (Digital Signal Processor, DSP), baseband processor and/or neural network processor (Neural-Network Processing Unit, NPU), etc.
  • the AP, GPU, etc. can perform processing on image data, such as target detection and recognition processing on the image.
  • the encoder can encode (i.e., compress) image or video data; for example, the mobile terminal 200 compresses a captured image and transmits the compressed data to the server to reduce the bandwidth occupied by the data transmission. The decoder can decode (i.e., decompress) image or video data to restore the image or video.
  • the mobile terminal 200 may support one or more encoders and decoders. In this way, the mobile terminal 200 can process images or videos in various encoding formats, such as the image formats JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), and BMP (Bitmap), and the video formats MPEG (Moving Picture Experts Group)-1, MPEG-2, H.263, H.264, and HEVC (High Efficiency Video Coding).
  • the processor 210 may include one or more interfaces through which connections are formed with other components of the mobile terminal 200 .
  • the external memory interface 222 may be used to connect an external memory card.
  • the internal memory 221 may be used to store computer executable program codes, and may also store data (such as images, videos) and the like created during the use of the mobile terminal 200 .
  • the USB interface 230 is an interface conforming to the USB standard specification, and can be used to connect a charger to charge the mobile terminal 200, and can also be connected to an earphone or other electronic devices.
  • the charging management module 240 is used to receive charging input from the charger. While charging the battery 242, the charging management module 240 can also supply power to the device through the power management module 241; the power management module 241 can also monitor the state of the battery.
  • the wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • the mobile communication module 250 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the mobile terminal 200 .
  • the wireless communication module 260 can provide wireless communication solutions applied on the mobile terminal 200, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like.
  • the mobile terminal 200 can realize the display function through the GPU, the display screen 290, the application processor, etc.; realize the shooting function through the ISP, the camera module 291, the encoder, the decoder, the GPU, the display screen 290, the application processor, etc.; and realize the audio function through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the headphone interface 274, and the application processor.
  • the sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyro sensor 2803, an air pressure sensor 2804, and the like.
  • the indicator 292 can be an indicator light, which can be used to indicate the charging status and changes in battery level, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the motor 293 can generate vibration prompts, and can also be used for touch vibration feedback and the like.
  • the keys 294 include a power-on key, a volume key, and the like.
  • the mobile terminal 200 may support one or more SIM card interfaces 295 for connecting SIM cards.
  • FIG. 3 shows an exemplary flow of a method for generating a background image, which may include the following steps S310 to S330:
  • Step S310 acquiring a first image
  • Step S320 searching for a second image matching the first image in a preset image library
  • Step S330 performing toning processing on the second image according to the first image to generate a background image.
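Steps S310 to S330 can be sketched as a small pipeline. This is an illustrative sketch, not the disclosed implementation: the similarity and toning functions are passed in as parameters because the disclosure allows several concrete choices for each (e.g., SSIM or feature-vector distance for matching, layer re-coloring for toning):

```python
def generate_background(first_image, library, similarity, tone):
    """Toy sketch of the method: first_image is given (S310), the
    best-matching second image is found in the library (S320), and
    toning is applied according to the first image (S330)."""
    second_image = max(library, key=lambda img: similarity(first_image, img))
    return tone(second_image, first_image)
```

Any image representation works as long as the supplied similarity and toning functions understand it.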
  • this exemplary embodiment proposes a new method for generating a background image, which combines the features of a first image and a second image: the first image reflects the user's preferences for image content and image color, and the second image is a standardized image suitable as a background, free of shooting defects such as jitter and noise.
  • the background image thus generated has pure content and a distinct theme, can meet users' personalized needs, and at the same time brings the fun of changing the background.
  • the implementation process of this solution is simple, and the user does not need to perform a large number of operations, nor does it need to perform complex image processing, which reduces the time and hardware cost of acquiring the background image.
  • step S310 a first image is acquired.
  • the first image refers to a reference image used to generate a background image, which is usually an image subjectively designated by a user, and reflects the user's preferences in terms of image content and image color.
  • the first image includes, but is not limited to: an image taken by the user; an image selected by the user on the terminal device, such as an image downloaded from the network or an image in the album; or an image obtained by performing certain processing on the image taken or selected by the user.
  • the first image may be any type of image, such as a person image, a landscape image, an architectural image, and the like.
  • step S310 may include the following steps S410 and S420:
  • Step S410 determining at least one local area image from the original image
  • Step S420 generating a first image according to the above-mentioned local area image.
  • the original image is an initial image used to generate the first image, which may be an image captured or selected by the user, or an image automatically designated by the system, for example, the original image is acquired in response to a user's preset operation.
  • the preset operation may include: controlling the camera to collect the original image (for example, the user opens the camera interface and the preview image collected by the camera is taken as the original image, or the user shoots an image as the original image); or selecting the original image from a local image library or an application program.
  • the local image library is an image library different from the above-mentioned preset image library, such as the album on the mobile phone.
  • the user can select an image from the local album of the mobile phone as the original image, or select an image in an application program (an image sent by another user, an image in a program page, etc.) as the original image. Thereby, it is ensured that the original image conforms to the user's preference.
  • the user sets the theme switching rule of the background image in advance, for example, switching according to time, weather, etc.
  • the system can obtain a matching original image according to the current time, weather, etc. For example, if the current date is August 20, 2020, an image taken on August 20, 2019 can be used as the original image; if the weather is rainy, rain-themed images can be searched for on the Internet, or rain-related images in the album can be used as the original image, and so on. Thereby, the original image can be selected automatically, which simplifies the user's operations.
  • the process of FIG. 4 actually crops the original image and generates a first image according to the cropped local area image. Since the original image usually contains a lot of image content, cropping the local area image from the original image yields an image with relatively pure content and a single theme.
  • the at least one partial area image may be determined in response to a user's area marquee selection operation in the original image.
  • the area marquee selection operation means that the user selects a local area (usually a rectangular frame) in the original image. This allows the user to manually crop the original image, and supports cropping multiple local area images at the same time, so that the user can crop his or her favorite local area images as the basis for subsequent background image generation.
  • content recognition may also be performed on the original image to segment the local area image. For example, the foreground part is identified in the original image, and the local area image where the foreground part is located is cropped.
  • after the local area image is obtained, it can be used directly as the first image, or be further processed to obtain the first image.
  • step S420 may include the following steps S510 and S520:
  • Step S510 performing target detection on the local area image to obtain the bounding box of each target in the local area image
  • Step S520 cropping the image of the bounding box as the first image.
  • the target can be any identifiable object in the image, such as a plant, a building, or a vehicle. Generally, when the user manually crops a local area image, there may be errors, so that the local area image contains irrelevant elements other than the target. After the local area image is processed by the method of FIG. 5, a pure image of the target can be obtained. Target detection can be implemented by algorithm models such as YOLO (You Only Look Once, a real-time target detection algorithm with v1, v2, v3, and other versions, any of which may be used in this disclosure) or SSD (Single Shot MultiBox Detector), which output the bounding box of each target in the local area image.
  • cropping the image within the bounding box to obtain the first image is equivalent to taking a finer crop on the basis of the local area image, so that the first image has the target as its main content; this improves the accuracy of the first image and reduces the interference of irrelevant content with subsequent processing.
  • the bounding box of each target can be output separately.
  • screenshots of each bounding box can be taken separately to obtain multiple first images. The bounding boxes can also be screened: for example, only the image of the bounding box with the largest area is cropped, that is, only one first image is generated; or bounding boxes with an area larger than a certain threshold are retained and their images are cropped as first images; or the user manually selects one or more of the bounding boxes, and their images are cropped as first images, etc.
  • This disclosure does not limit this.
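The screening strategies above (keep the largest box, or keep boxes above an area threshold) can be illustrated with a minimal sketch. The (x1, y1, x2, y2) box format and the helper name `screen_boxes` are illustrative assumptions, not part of the disclosure:

```python
def screen_boxes(boxes, min_area=None, largest_only=False):
    """Screen bounding boxes given as (x1, y1, x2, y2) tuples.

    If largest_only is True, keep only the box with the largest area;
    otherwise, if min_area is given, keep boxes whose area exceeds it.
    """
    def area(b):
        x1, y1, x2, y2 = b
        return max(0, x2 - x1) * max(0, y2 - y1)

    if largest_only:
        # Only one first image will be generated from this box.
        return [max(boxes, key=area)]
    if min_area is not None:
        return [b for b in boxes if area(b) > min_area]
    return list(boxes)
```

Each box that survives screening would then be cropped as a separate first image.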
  • step S510 may be implemented in the following manner:
  • candidate boxes of each target are detected in the local area image, for example the candidate boxes retained after screening by NMS (Non-Maximum Suppression); the candidate boxes of the same target are then merged to obtain the bounding box of each target in the local area image.
  • candidate boxes of the same target can be merged to obtain a final bounding box, thereby improving the accuracy of the bounding box and of the first image.
  • the present disclosure does not limit the specific manner of merging the candidate boxes.
  • maximum merging can be used, that is: obtain the coordinates of each corner point of the candidate boxes of the same target in the local area image; among these corner coordinates, select the smallest abscissa and the smallest ordinate to form the lower-left corner point, and select the largest abscissa and the largest ordinate to form the upper-right corner point; then determine a rectangular area in the local area image, usually with the line connecting the lower-left corner point and the upper-right corner point as the diagonal, which is the bounding box of the above target.
  • average merging can also be used, that is: take the average of the lower-left corner points of the candidate boxes and the average of the upper-right corner points of the candidate boxes, and determine the rectangular area with the line connecting the two average points as the diagonal, which is the bounding box of the target.
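The two merging strategies can be sketched as follows; boxes are assumed to be (x1, y1, x2, y2) tuples, with (x1, y1) the lower-left corner and (x2, y2) the upper-right corner:

```python
def merge_max(boxes):
    """Maximum merging: the merged box spans from the smallest (x, y)
    to the largest (x, y) over all candidate boxes of one target."""
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes)
    y2 = max(b[3] for b in boxes)
    return (x1, y1, x2, y2)

def merge_average(boxes):
    """Average merging: average the lower-left corners and the
    upper-right corners of the candidate boxes."""
    n = len(boxes)
    x1 = sum(b[0] for b in boxes) / n
    y1 = sum(b[1] for b in boxes) / n
    x2 = sum(b[2] for b in boxes) / n
    y2 = sum(b[3] for b in boxes) / n
    return (x1, y1, x2, y2)
```

Maximum merging never loses any part of the target but may enlarge the box; average merging smooths out outlier candidates.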
  • FIG. 6 shows the process of target detection taking the MobileNet-SSD network (a lightweight SSD network model for mobile terminals) as an example, including:
  • features are extracted from the image through the network; specifically, features are extracted from each candidate box image and identification is performed on it, candidate boxes belonging to the background are excluded, and the position and size of the candidate boxes containing the target are optimized;
  • step S320 a second image matching the first image is searched in the preset image library.
  • the preset image library is a pre-established standardized image library.
  • the standardization here mainly refers to pure image content, a single theme, and a distinctive style, without defects such as jitter and noise found in daily captured images, making the images suitable as background images.
  • preset image libraries of different styles can be established, such as an illustration-style image library, an animation-style image library, a real-shot landscape image library, and the like.
  • the preset image library may include a vector image library, such as an image library formed of images in SVG (Scalable Vector Graphics) format; vector graphics can present a flat illustration style, which is especially suitable for forming flat background images.
  • searching for a matching second image in the preset image library is equivalent to replacing the first image with a "defect-free" standardized image.
  • the similarity between the first image and the images in the preset image library may be calculated, and the image with the highest similarity determined as the second image matching the first image. For example, the SSIM (Structural SIMilarity) between the first image and each image in the preset image library can be calculated; or features can be extracted from the first image and from the images in the preset image library respectively, and then the cosine similarity or Euclidean distance of the two feature vectors can be calculated, etc.
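The feature-vector matching variant can be sketched in plain Python; the feature extractor itself is assumed to exist elsewhere, so images are represented here only by their feature vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def find_second_image(first_feat, library_feats):
    """Return the index of the library image whose feature vector is
    most similar to that of the first image."""
    return max(range(len(library_feats)),
               key=lambda i: cosine_similarity(first_feat, library_feats[i]))
```

SSIM or Euclidean distance could be dropped in as the scoring function in the same way.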
  • step S320 may include the following steps S710 and S720:
  • Step S710 performing identification processing on the first image to obtain a label of the first image
  • Step S720 searching for at least one image corresponding to the label of the first image in the preset image library as the second image.
  • the image recognition processing can be implemented by an image classification algorithm, such as a MobileNet classifier, and the obtained label is the class label of the first image. Images with the same label are then filtered out of the preset image library, and the second image is searched for among them (for example, according to image similarity); that is, the second image should belong to the same category as the first image. This narrows the search scope in the preset image library and improves the accuracy of the search results.
  • multi-level labels may be set for images, ranging from coarse classification to fine classification.
  • the first image may include a primary label and a secondary label.
  • first-level identification processing can be performed on the first image to obtain the first-level label of the first image; then, second-level identification processing is performed on the first image under that first-level label to obtain the second-level label of the first image.
  • the first-level label is equivalent to prior information, which helps to improve the accuracy of the second-level identification processing.
  • different sub-networks are used in the second-level identification processing, each sub-network corresponds to a first-level label, and when the first-level labels are different, the used sub-networks are also different, so that more targeted identification processing can be realized.
  • after identification processing at each level, the last-level label of the first image is obtained.
  • when searching for the second image, at least one image corresponding to the last-level label of the first image may be searched for in the preset image library. Taking the "Luluo" (green pothos) image in FIG. 8 as an example, after the secondary label "Luluo" is identified, the images labeled "Luluo" can be filtered out of the preset image library, and the second image can be further obtained from them.
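The two-level labelling and label-based filtering described above can be sketched as follows. This is a hypothetical illustration: the coarse classifier and the per-label sub-classifiers are stand-in functions, not the actual networks of the disclosure, and the library is modeled as (image, label) pairs:

```python
def classify_two_level(image, coarse, sub_classifiers):
    """Coarse classifier picks the first-level label; the sub-classifier
    registered for that label picks the second-level label."""
    level1 = coarse(image)                    # first-level identification
    level2 = sub_classifiers[level1](image)   # label-specific sub-network
    return level1, level2

def filter_library(library, last_label):
    """Keep only library images whose label equals the last-level label."""
    return [img for img, label in library if label == last_label]
```

The similarity search for the final second image would then run only over the filtered subset.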
  • step S330 toning processing is performed on the second image according to the first image to generate a background image.
  • the tone of the first image is applied to the second image.
  • for example, the green of the pothos in the first image may differ from the green of the leaves in the second image; the green in the second image can then be replaced with the green in the first image, which is more in line with the user's preference.
  • step S330 may include:
  • extracting the main colors of the first image; determining, according to the number of main colors in the first image, the same number of layers in the second image as target layers; and filling the main colors into the target layers.
  • the main colors of the first image may be the one or more colors with the highest proportion (referring to the pixel proportion) in the first image. For example: select the single color with the highest proportion as the main color; or select the three colors with the highest proportion as the main colors; or select all colors with a proportion of more than 20% as the main colors, etc.; this disclosure does not limit this.
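The proportion-based main-color selection can be sketched as follows; pixels are assumed to be flattened into a list of color values, and both selection rules from the text are supported:

```python
from collections import Counter

def dominant_colors(pixels, top_n=None, min_ratio=None):
    """Extract main colors from a flat list of pixel color values.

    top_n: keep the top_n colors by pixel proportion;
    min_ratio: keep only colors whose proportion exceeds min_ratio.
    Returns (color, proportion) pairs in descending proportion order.
    """
    counts = Counter(pixels)
    total = len(pixels)
    ranked = [(color, n / total) for color, n in counts.most_common()]
    if top_n is not None:
        ranked = ranked[:top_n]
    if min_ratio is not None:
        ranked = [(c, r) for c, r in ranked if r > min_ratio]
    return ranked
```

For real images, pixel colors would typically be quantized (e.g., clustered) first so that near-identical shades count as one color.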
  • the target layer in the second image may be the main layer in the second image, for example, the layer in the foreground part.
  • an image is generally formed by superimposing multiple layers. Different layers are arranged in the order from foreground to background. One or more layers in the front can be selected as the target layer.
  • the main color can be filled into the target layer to replace the original color of the target layer.
  • the toning process may include:
  • the correspondence between the main colors and the target layer is determined according to the order of the main colors and the order of the target layer, and each main color is used to replace the color of the corresponding target layer.
  • for example, extract 3 main colors from the first image and record them as main color 1, main color 2, and main color 3 in descending order of proportion; in the second image, select the front-most 3 layers as target layers and record them in order as layer 1, layer 2, and layer 3; then replace all the colors in layer 1 with main color 1, all the colors in layer 2 with main color 2, and all the colors in layer 3 with main color 3, thereby obtaining an image very similar in color style to the first image.
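The layer re-coloring described above can be sketched as follows; layers are assumed to be lists of pixel colors ordered from foreground to background, and main colors are in descending order of proportion, so the i-th main color replaces the i-th target layer:

```python
def tone_layers(main_colors, layers):
    """Fill each main color into the corresponding front-most layer of
    the second image, replacing that layer's original colors.

    Layers beyond the number of main colors are left unchanged.
    """
    toned = [list(layer) for layer in layers]  # do not mutate the input
    for color, layer in zip(main_colors, toned):
        for i in range(len(layer)):
            layer[i] = color  # replace every pixel with the main color
    return toned
```

In a real SVG-based implementation this would amount to rewriting the fill color of each target layer, which is why vector images are convenient here.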
  • this serves as the final background image; it effectively combines the color of the first image with the content of the second image and better matches the user's preference.
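The ordered dominant-color-to-layer replacement described above can be sketched as follows. Layers are represented as plain lists of colors, which is an illustrative simplification of a vector image's layer structure; `apply_palette` is a hypothetical name, not part of the patent.

```python
def apply_palette(dominant_colors, layers):
    """dominant_colors: colors sorted by descending share in the first image.
    layers: the second image's layers, ordered foreground-first.
    The i-th dominant color replaces every color in the i-th target layer."""
    recolored = []
    for i, layer in enumerate(layers):
        if i < len(dominant_colors):
            # Flood the whole target layer with the corresponding dominant color.
            recolored.append([dominant_colors[i]] * len(layer))
        else:
            # Layers beyond the extracted palette keep their original colors.
            recolored.append(list(layer))
    return recolored
```

With three dominant colors and three front-most layers, this reproduces the layer 1/2/3 example above; extra background layers are left untouched.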
  • the background image generating apparatus 900 may include a processor 910 and a memory 920, and the memory 920 stores the following program modules:
  • an image acquisition module 921 configured to acquire a first image
  • an image matching module 922 configured to search for a second image matching the first image in a preset image library
  • a toning processing module 923 configured to perform toning processing on the second image according to the first image to generate a background image
  • the processor 910 is used to execute the above program modules.
  • the image acquisition module 921 is configured to:
  • the first image is generated from the above-mentioned local area image.
  • the image acquisition module 921 is further configured to:
  • the original image is acquired in response to the user's preset operation.
  • the above-mentioned preset operation includes: controlling the camera to collect the original image, or selecting the original image from a local image library or an application program.
  • the above-mentioned determining at least one local area image from the original image includes:
  • At least one local area image is determined in response to an area marquee selection operation performed by the user in the original image.
  • the above-mentioned generating the first image according to the local area image includes:
  • the above-mentioned target detection is performed on the local area image to obtain the bounding box of each target in the local area image, including:
  • the candidate boxes of the same target are merged to obtain the bounding boxes of each target in the local area image.
  • merging the candidate boxes of the same target to obtain the bounding box of each target in the local area image includes:
  • cropping the image of the bounding box as the first image includes:
  • the image of the bounding box with the largest area is taken as the first image.
  • the image matching module 922 is configured to:
  • the similarity between the first image and the images in the preset image library is calculated, and the image with the highest similarity is selected as the second image matching the first image.
  • the image matching module 922 is configured to:
  • the label of the first image includes a primary label and a secondary label.
  • the above-mentioned identification processing is performed on the first image to obtain the label of the first image, including:
  • the second-level identification processing is performed on the first image to obtain the second-level label of the first image.
  • the above-mentioned searching for at least one image corresponding to the label of the first image in the preset image library includes:
  • the color tone processing module 923 is configured to:
  • according to the number of dominant colors of the first image, the same number of layers in the second image are determined as target layers of the second image
  • the above-mentioned use of the main color of the first image to replace the color of the target layer in the second image includes:
  • the correspondence between the main colors and the target layer is determined according to the order of the main colors and the order of the target layer, and each main color is used to replace the color of the corresponding target layer.
  • the dominant color of the first image includes one or more colors that occupy the highest proportion in the first image.
  • the preset image library includes a vector image library.
  • the background image generating apparatus 1000 may include:
  • an image acquisition module 1010 configured to acquire a first image
  • an image matching module 1020 configured to search for a second image matching the first image in a preset image library
  • the toning processing module 1030 is configured to perform toning processing on the second image according to the first image to generate a background image.
  • the image acquisition module 1010 is configured to:
  • the first image is generated from the above-mentioned local area image.
  • the image acquisition module 1010 is further configured to:
  • the original image is acquired in response to the user's preset operation.
  • the above-mentioned preset operation includes: controlling the camera to collect the original image, or selecting the original image from a local image library or an application program.
  • the above-mentioned determining at least one local area image from the original image includes:
  • At least one local area image is determined in response to an area marquee selection operation performed by the user in the original image.
  • the above-mentioned generating the first image according to the local area image includes:
  • the above-mentioned target detection is performed on the local area image to obtain the bounding box of each target in the local area image, including:
  • the candidate boxes of the same target are merged to obtain the bounding boxes of each target in the local area image.
  • merging the candidate boxes of the same target to obtain the bounding box of each target in the local area image includes:
  • cropping the image of the bounding box as the first image includes:
  • the image of the bounding box with the largest area is taken as the first image.
  • the image matching module 1020 is configured to:
  • the similarity is calculated between the first image and the images in the preset image library, and the image with the highest similarity is selected as the second image matching the first image.
  • the image matching module 1020 is configured to:
  • the label of the first image includes a primary label and a secondary label.
  • the above-mentioned identification processing is performed on the first image to obtain the label of the first image, including:
  • the second-level identification processing is performed on the first image to obtain the second-level label of the first image.
  • the above-mentioned searching for at least one image corresponding to the label of the first image in the preset image library includes:
  • the color tone processing module 1030 is configured to:
  • according to the number of dominant colors of the first image, the same number of layers in the second image are determined as target layers of the second image
  • the above-mentioned use of the main color of the first image to replace the color of the target layer in the second image includes:
  • the correspondence between the main colors and the target layer is determined according to the order of the main colors and the order of the target layer, and each main color is used to replace the color of the corresponding target layer.
  • the dominant color of the first image includes one or more colors that occupy the highest proportion in the first image.
  • the preset image library includes a vector image library.
  • Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product including program code; when the program product runs on a terminal device, the program code causes the terminal device to execute the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Method" section above, for example any one or more of the steps in FIG. 3, FIG. 4, FIG. 5, or FIG. 7.
  • the program product may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer.
  • the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • the program product may employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples (non-exhaustive list) of readable storage media include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer readable signal medium may include a propagated data signal in baseband or as part of a carrier wave with readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a readable signal medium can also be any readable medium, other than a readable storage medium, that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
  • aspects of the present disclosure may be implemented as a system, method, or program product. Therefore, aspects of the present disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may be collectively referred to herein as a "circuit", "module", or "system".

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A background image generation method, a background image generation apparatus, a computer-readable storage medium, and an electronic device. The background image generation method includes: acquiring a first image (S310); searching a preset image library for a second image matching the first image (S320); and performing toning processing on the second image according to the first image to generate a background image (S330). The method satisfies users' personalized requirements for background images and reduces the time and hardware cost of obtaining a background image. (FIG. 3)

Description

Background image generation method and apparatus, storage medium, and electronic device
This application claims priority to Chinese Patent Application No. 202010963262.3, filed on September 14, 2020 and entitled "Background image generation method and apparatus, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of image processing, and in particular to a background image generation method, a background image generation apparatus, a computer-readable storage medium, and an electronic device.
Background
With the development of smart terminal devices, users have increasingly high expectations for device aesthetics. The background of a device's display interface has a large influence on its appearance, and users' demands for backgrounds are becoming more personalized and differentiated. How to obtain a background image that satisfies a user's personalized needs is a technical problem that urgently needs to be solved.
Summary
The present disclosure provides a background image generation method, a background image generation apparatus, a computer-readable storage medium, and an electronic device.
According to a first aspect of the present disclosure, a background image generation method is provided, including: acquiring a first image; searching a preset image library for a second image matching the first image; and performing toning processing on the second image according to the first image to generate a background image.
According to a second aspect of the present disclosure, a background image generation apparatus is provided, including a processor and a memory, the processor being configured to execute the following program modules stored in the memory: an image acquisition module configured to acquire a first image; an image matching module configured to search a preset image library for a second image matching the first image; and a toning processing module configured to perform toning processing on the second image according to the first image to generate a background image.
According to a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the background image generation method of the first aspect and its possible implementations are carried out.
According to a fourth aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute, via the executable instructions, the background image generation method of the first aspect and its possible implementations.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a system architecture in an exemplary embodiment;
FIG. 2 is a structural diagram of an electronic device in an exemplary embodiment;
FIG. 3 is a flowchart of a background image generation method in an exemplary embodiment;
FIG. 4 is a flowchart of generating a first image in an exemplary embodiment;
FIG. 5 is a flowchart of cropping a first image in an exemplary embodiment;
FIG. 6 is a schematic diagram of generating a first image in an exemplary embodiment;
FIG. 7 is a flowchart of searching for a second image in an exemplary embodiment;
FIG. 8 is a schematic diagram of image recognition processing in an exemplary embodiment;
FIG. 9 is a structural block diagram of a background image generation apparatus in an exemplary embodiment;
FIG. 10 is a structural block diagram of another background image generation apparatus in an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a full understanding of the embodiments of the present disclosure. However, those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced with one or more of the specific details omitted, or that other methods, components, apparatuses, steps, and the like may be employed. In other instances, well-known technical solutions are not shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and their repeated description is omitted. Some of the block diagrams shown in the drawings are functional entities that do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor apparatuses and/or microcontroller apparatuses.
The background images of a mobile phone generally include the lock-screen background and the home-screen background, and are among the content a user sees most often in daily use. Various background images have also appeared on smart wearable devices, such as the watch-face image of a smart watch, which can even remain always on, turning the smart watch into a decoration. It can be seen that background images have a great influence on a user's aesthetic experience.
In the related art, background images usually come from two sources. One is the background styles built into the system, generally solid-color backgrounds, gradient backgrounds, and backgrounds with simple dynamically changing patterns. The other is pictures selected by the user, for example a picture chosen from the user's album and set as the background. However, both sources have limitations: the built-in background styles are very limited in variety, while most pictures in the user's album are taken in daily life and inevitably contain shooting defects, making them unsuitable as backgrounds.
In view of the above problems, exemplary embodiments of the present disclosure provide a background image generation method. The generated background image can be applied as, for example, the lock-screen background or home-screen background of a mobile phone or tablet, the watch-face image of a smart watch, or the background used by a device's screen saver.
FIG. 1 is a schematic diagram of a system architecture of an exemplary embodiment of the present disclosure. As shown in FIG. 1, the system architecture 100 may include a terminal device 110 and a server 120. The terminal device 110 may be a mobile phone, tablet, digital camera, personal computer, smart wearable device, or the like. The method provided by the embodiments of the present disclosure may be executed by the terminal device 110 alone, for example by executing the background image generation method after the terminal device 110 captures an image; it may also be executed by the server 120, for example the terminal device 110 captures an image and transmits it to the server 120, which executes the background image generation method to generate a background image and returns the background image to the terminal device 110 for display. The present disclosure does not limit this.
Exemplary embodiments of the present disclosure also provide an electronic device for implementing the above background image generation method. The electronic device may be the terminal device 110 or the server 120 in FIG. 1. The electronic device includes at least a processor and a memory; the memory is used to store executable instructions of the processor and may also store application data such as image data; the processor is configured to execute the background image generation method via the executable instructions.
The construction of the above electronic device is exemplarily described below taking the mobile terminal 200 in FIG. 2 as an example. Those skilled in the art should understand that, apart from components specially intended for mobile purposes, the construction in FIG. 2 can also be applied to devices of a fixed type.
As shown in FIG. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a USB (Universal Serial Bus) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display screen 290, a camera module 291, an indicator 292, a motor 293, keys 294, and a Subscriber Identification Module (SIM) card interface 295, among others.
The processor 210 may include one or more processing units. For example, the processor 210 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, an encoder, a decoder, a Digital Signal Processor (DSP), a baseband processor, and/or a Neural-Network Processing Unit (NPU). The AP, GPU, and so on may process image data, for example performing target detection and recognition processing on images.
The encoder can encode (i.e., compress) image or video data; for example, the mobile terminal 200 compresses a captured image and transmits the compressed data to a server to reduce the bandwidth occupied by data transmission. The decoder can decode (i.e., decompress) encoded image or video data to restore the image or video. The mobile terminal 200 may support one or more encoders and decoders, and can thus process images or videos in multiple encoding formats, such as the JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), and BMP (Bitmap) image formats, and the MPEG (Moving Picture Experts Group) 1, MPEG2, H.263, H.264, and HEVC (High Efficiency Video Coding) video formats.
In some embodiments, the processor 210 may include one or more interfaces, through which connections are formed with other components of the mobile terminal 200.
The external memory interface 222 may be used to connect an external memory card. The internal memory 221 may be used to store computer-executable program code, and may also store data (such as images and videos) created during use of the mobile terminal 200.
The USB interface 230 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the mobile terminal 200, or to connect earphones or other electronic devices.
The charging management module 240 is used to receive charging input from a charger. While charging the battery 242, the charging management module 240 may also supply power to the device through the power management module 241; the power management module 241 may also monitor the state of the battery.
The wireless communication functions of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like. The antennas 1 and 2 are used to transmit and receive electromagnetic wave signals. The mobile communication module 250 may provide wireless communication solutions including 2G/3G/4G/5G applied to the mobile terminal 200. The wireless communication module 260 may provide wireless communication solutions applied to the mobile terminal 200, including Wireless Local Area Networks (WLAN, such as Wireless Fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), and Infrared (IR).
The mobile terminal 200 may implement display functions through the GPU, the display screen 290, the application processor, and the like; implement shooting functions through the ISP, the camera module 291, the encoder, the decoder, the GPU, the display screen 290, and the application processor; and implement audio functions through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the earphone interface 274, and the application processor.
The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, and so on.
The indicator 292 may be an indicator light, which may be used to indicate the charging state and battery level changes, or to indicate messages, missed calls, notifications, and the like. The motor 293 may generate vibration prompts and may also be used for touch vibration feedback. The keys 294 include a power key, volume keys, and so on.
The mobile terminal 200 may support one or more SIM card interfaces 295 for connecting SIM cards.
FIG. 3 shows an exemplary flow of a background image generation method, which may include the following steps S310 to S330:
Step S310: acquire a first image;
Step S320: search a preset image library for a second image matching the first image;
Step S330: perform toning processing on the second image according to the first image to generate a background image.
With the above method, the first image serves as a reference: a matching second image is found in the preset image library, the second image is then toned according to the first image, and the toned second image is used as the background image. On the one hand, this exemplary embodiment proposes a new background image generation method that combines the characteristics of the first and second images: the first image carries the user's preferences regarding image content and color, while the second image is a standardized image suitable as a background, free of defects such as shake and noise introduced during shooting. The resulting background image has pure content and a distinct theme, satisfies the user's personalized needs, and adds an element of fun to changing backgrounds. On the other hand, the implementation is simple: it requires neither extensive user operations nor complex image processing, reducing the time and hardware cost of obtaining a background image.
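The three-step flow above can be sketched as a minimal pipeline. This is an illustrative skeleton, not the patent's implementation: `find_match` stands in for the library search of step S320, `recolor` for the toning of step S330, and library entries carry a caller-supplied `score` function in place of a real similarity measure.

```python
def find_match(first_image, library):
    # Step S320 (sketch): score every library entry against the first image
    # and keep the best one; "score" is a hypothetical similarity callable.
    return max(library, key=lambda entry: entry["score"](first_image))

def recolor(second_image, first_image):
    # Step S330 (sketch): keep the matched image's content but adopt the
    # first image's palette; a placeholder for the layer-color replacement
    # described later in the text.
    return {"content": second_image["content"],
            "palette": first_image["palette"]}

def generate_background(first_image, library):
    # S310, acquiring first_image, is done by the caller.
    second_image = find_match(first_image, library)   # S320
    return recolor(second_image, first_image)         # S330
```

The result carries the second image's content and the first image's colors, mirroring the combination the method aims for.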
Each step in FIG. 3 is described in detail below.
In step S310, a first image is acquired.
The first image is the reference image used to generate the background image, usually an image subjectively designated by the user, reflecting the user's preferences regarding image content and color. The first image includes but is not limited to: an image captured by the user; an image selected by the user on a terminal device, such as an image downloaded from the Internet or an image in an album; or an image obtained by processing an image captured or selected by the user. The first image may be of any type, such as a portrait, landscape, or architectural image.
In an optional implementation, referring to FIG. 4, step S310 may include the following steps S410 and S420:
Step S410: determine at least one local area image from an original image;
Step S420: generate the first image according to the local area image.
The original image is the initial image used to produce the first image. It may be an image captured or selected by the user, or an image automatically designated by the system; for example, the original image may be acquired in response to a preset operation of the user. The preset operation may include: controlling a camera to capture the original image, for example the user opens the photo-taking interface and the preview image captured by the camera is used as the original image, or the user shoots an image as the original image; or selecting the original image from a local image library or an application. The local image library is an image library different from the above preset image library, for example the album on a mobile phone; specifically, the user may select an image from the phone's local album as the original image, or select an image in an application (an image sent by another user, an image on a program page, and so on) as the original image. This ensures that the original image matches the user's preferences.
The user may also set in advance a theme switching rule for the background image, for example switching according to time or weather. The system can then acquire a matching original image according to the current time, weather, and so on. For example, if the current date is August 20, 2020, an image taken on August 20, 2019 may be used as the original image; if it is currently raining, a rain-themed image may be searched for on the Internet, or a rain-related image in the album may be used as the original image, and so on. Automatic selection of the original image is thereby achieved, simplifying user operations.
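The time-based rule above ("use the photo taken one year ago today") can be sketched as a simple date lookup. The album is modeled as a plain date-to-photo mapping, a hypothetical structure chosen for illustration; a real album would expose richer metadata.

```python
from datetime import date

def pick_original(album, today):
    """Pick a photo taken on the same month and day in an earlier year,
    falling back to the most recent photo in the album.
    album: dict mapping datetime.date -> photo identifier."""
    anniversaries = [d for d in album
                     if (d.month, d.day) == (today.month, today.day)
                     and d.year < today.year]
    if anniversaries:
        return album[max(anniversaries)]  # most recent matching anniversary
    return album[max(album)]              # fallback: newest photo overall
```

A weather-based rule would follow the same shape, filtering on a weather tag instead of the date.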
The flow in FIG. 4 essentially takes a screenshot of the original image and generates the first image from the cropped local area image. Since the original image usually contains a lot of content, cropping a local area image from it yields an image with purer content and a single theme.
In an optional implementation, at least one local area image may be determined in response to a region marquee operation performed by the user in the original image. The region marquee operation means that the user selects a local region in the original image with a frame (generally a rectangular frame); that is, the user is allowed to crop the original image manually, and cropping multiple local area images at the same time is supported. The user can thus crop a favorite local area image as the reference for subsequently generating the background image.
In an optional implementation, content recognition may also be performed on the original image to segment out a local area image, for example recognizing the foreground part of the original image and cropping the local area image where the foreground is located.
After the local area image is obtained, it may be used directly as the first image, or it may be further processed to obtain the first image.
In an optional implementation, referring to FIG. 5, step S420 may include the following steps S510 and S520:
Step S510: perform target detection on the local area image to obtain a bounding box of each target in the local area image;
Step S520: crop the image of the bounding box as the first image.
A target may be any recognizable object in the image, such as a plant, building, or vehicle. Generally, when the user crops a local area image manually, errors may occur, so the local area image may contain irrelevant elements other than the target. Processing the local area image with the method of FIG. 5 yields a pure image of the target. Target detection may be implemented with algorithm models such as YOLO (You Only Look Once, a real-time target detection algorithm with versions v1, v2, v3, etc.; any version may be used in the present disclosure) or SSD (Single Shot MultiBox Detector), which output the target's bounding box in the local area image. Cropping the image inside the bounding box to obtain the first image amounts to a finer crop on top of the local area image, yielding a first image whose main content is the target; this improves the accuracy of the first image and reduces the interference of irrelevant content with subsequent processing.
It should be noted that when the local area image contains multiple targets, a bounding box may be output for each target. On this basis, each bounding box may be cropped separately, yielding multiple first images; alternatively, the bounding boxes may be screened, for example: only the image of the bounding box with the largest area is cropped as the first image, i.e., only one first image is generated; bounding boxes whose area exceeds a certain threshold are kept and their images cropped as first images; or the user manually selects one or more of the bounding boxes, whose images are cropped as first images, and so on. The present disclosure does not limit this.
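Two of the screening strategies just listed, keeping only the largest box and keeping boxes above an area threshold, can be sketched directly. Boxes are `(x_min, y_min, x_max, y_max)` tuples; the function names are illustrative.

```python
def box_area(box):
    # box = (x_min, y_min, x_max, y_max); degenerate boxes get area 0.
    x0, y0, x1, y1 = box
    return max(0, x1 - x0) * max(0, y1 - y0)

def keep_largest(boxes):
    """Screening strategy 1: crop only the bounding box with the largest area."""
    return max(boxes, key=box_area)

def keep_above(boxes, min_area):
    """Screening strategy 2: keep every box whose area exceeds a threshold."""
    return [b for b in boxes if box_area(b) > min_area]
```

The third strategy, manual selection by the user, is a UI concern and is not sketched here.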
In an optional implementation, step S510 may be implemented as follows:
perform target detection on the local area image to generate candidate boxes for each target in the local area image;
merge the candidate boxes of the same target to obtain the bounding box of each target in the local area image.
When detecting targets in an image, algorithms such as NMS (Non-Maximum Suppression) may be used: detection boxes sweep the whole image, and the optimal or locally optimal detection boxes, i.e., the candidate boxes, are selected. For the same target, more than one candidate box may be obtained, usually due to errors in recognizing the target's edges; in particular, targets with inconspicuous edge features generally yield more candidate boxes.
On this basis, the candidate boxes of the same target can be merged into a final bounding box, which improves the accuracy of the bounding box and of the first image. The present disclosure does not limit the specific merging method. For example, maximum merging may be used: obtain the coordinates of the corner points of the candidate boxes of the same target in the local area image; among these corner coordinates, take the smallest x-coordinate and smallest y-coordinate to form the lower-left corner point, and the largest x-coordinate and largest y-coordinate to form the upper-right corner point; then determine a rectangular bounding box in the local area image from the lower-left and upper-right corner points, usually by taking the line connecting them as a diagonal of the rectangular region, i.e., the target's bounding box. Average merging may also be used: average the lower-left corner points of the candidate boxes, average their upper-right corner points, and take the line connecting the two average points as the diagonal of the rectangular region, i.e., the target's bounding box.
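Both merging strategies described above reduce to a few lines over corner coordinates. Boxes are `(x_min, y_min, x_max, y_max)` tuples in the local area image's coordinate system; the function names are illustrative.

```python
def merge_max(boxes):
    """Maximum merge: the smallest axis-aligned rectangle covering all
    candidates, built from the minimum x/y (lower-left corner) and the
    maximum x/y (upper-right corner) over all candidate corner points."""
    xs0, ys0, xs1, ys1 = zip(*boxes)
    return (min(xs0), min(ys0), max(xs1), max(ys1))

def merge_average(boxes):
    """Average merge: average the lower-left corners and the upper-right
    corners of all candidates, and span the rectangle between the means."""
    n = len(boxes)
    xs0, ys0, xs1, ys1 = zip(*boxes)
    return (sum(xs0) / n, sum(ys0) / n, sum(xs1) / n, sum(ys1) / n)
```

Maximum merging never cuts off part of a candidate, at the cost of including more background; average merging is tighter but can clip the target when candidates disagree strongly.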
FIG. 6 shows the target detection process taking the MobileNet-SSD network (a lightweight SSD network model for mobile devices) as an example, including:
input the local area image into the MobileNet-SSD network to generate multiple candidate boxes of different sizes;
extract features from the image through the network; specifically, extract features from each candidate box image and perform recognition, exclude candidate boxes belonging to the background, and optimize the position and size of the candidate boxes containing targets;
determine the candidate boxes containing targets; usually one target may correspond to multiple candidate boxes;
merge the candidate boxes of the same target to obtain the bounding box corresponding to each target;
crop the image of each bounding box to obtain the first image.
Continuing with FIG. 3, in step S320, a second image matching the first image is searched for in a preset image library.
The preset image library is a standardized image library established in advance. "Standardized" here mainly means that its images have pure content, a single theme, and a distinct style, without the shake, noise, and other defects found in everyday photographs, making them suitable as background images. In this exemplary embodiment, preset image libraries of different styles may be established, such as an illustration-style library, an animation-style library, or a library of real landscape photographs. In an optional implementation, the preset image library may include a vector image library, for example one formed of images in the SVG (Scalable Vector Graphics) format; vector graphics can present a flat illustration style and are particularly suitable for forming flat background images.
Since the first image may have various defects, such as noise, impure content, or blur, finding a matching second image in the preset image library is equivalent to replacing the first image with a "defect-free" standardized image.
In an optional implementation, similarity may be computed between the first image and the images in the preset image library, and the image with the highest similarity determined as the second image matching the first image. For example, the SSIM (Structural SIMilarity) between the first image and the library images may be computed; or features may be extracted from the first image and from the library images, and the cosine similarity or Euclidean distance of the two feature vectors computed.
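The feature-vector variant of the matching step can be sketched as follows. The feature extractor itself is out of scope here; vectors are assumed precomputed, and `best_match` is an illustrative name.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query_feature, library):
    """library: list of (name, feature_vector) pairs.
    Return the name of the entry most similar to the query feature."""
    return max(library,
               key=lambda item: cosine_similarity(query_feature, item[1]))[0]
```

Euclidean distance would work the same way with `min` instead of `max`; SSIM operates on pixel windows rather than feature vectors and needs the full images.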
In an optional implementation, referring to FIG. 7, step S320 may include the following steps S710 and S720:
Step S710: perform recognition processing on the first image to obtain a label of the first image;
Step S720: search the preset image library for at least one image corresponding to the label of the first image, as the second image.
Image recognition processing may be implemented with an image classification algorithm, such as a MobileNet classifier; the obtained label is the class label that classifies the first image. Images with the same label are then screened out of the preset image library, and the second image is searched for among them (for example according to image similarity); that is, the second image should belong to the same class as the first image. This narrows the scope of the search for the second image in the preset image library and improves the accuracy of the search result.
In an optional implementation, to achieve a more precise search, multi-level labels may be set for images, from coarse classification to fine classification. For example, the first image may have a first-level label and a second-level label. During recognition, first-level recognition processing may be performed on the first image to obtain its first-level label; then, under that first-level label, second-level recognition processing is performed on the first image to obtain its second-level label.
[Corrected under Rule 26, 10.09.2021]
Taking FIG. 8 as an example, the local area image is input into a single-shot multi-box detection model based on a lightweight backbone network (MobileNet-SSD) to determine the targets' bounding boxes, which are cropped to obtain two first images. The first images are each input into a classifier based on a lightweight backbone network (MobileNet-Classification). First-level recognition processing is performed first, yielding first-level labels such as "container" and "plant"; second-level recognition processing is then performed under each first-level label, yielding second-level labels such as "flower pot" and "pothos". It should be noted that during second-level recognition processing, the first-level label serves as prior information that helps improve accuracy. Alternatively, different sub-networks may be used for the second-level recognition processing, each sub-network corresponding to one first-level label; when the first-level labels differ, the sub-networks used also differ, enabling more targeted recognition processing.
Based on the above multi-level label division and recognition, the final-level label of the first image, e.g., the second-level label, is obtained. When searching for the second image, at least one image corresponding to the final-level label of the first image may be searched for in the preset image library. Taking the "pothos" image in FIG. 8 as an example, after its second-level label "pothos" is recognized, images labeled "pothos" can be screened out of the preset image library, and the second image is further obtained from among them.
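The coarse-to-fine scheme above, with one fine classifier per first-level label, can be sketched as a dispatch table. The classifiers are stand-in callables here; in the text they would be MobileNet-based networks, and all names are illustrative.

```python
def classify_two_level(image, level1_classifier, level2_classifiers):
    """level1_classifier: callable image -> first-level label.
    level2_classifiers: dict mapping each first-level label to its own
    fine classifier, mirroring the per-label sub-network idea."""
    coarse = level1_classifier(image)
    fine = level2_classifiers[coarse](image)  # the coarse label picks the sub-network
    return coarse, fine

def filter_library(library, final_label):
    """Narrow the preset library to images carrying the final-level label.
    library: list of (image, label) pairs."""
    return [img for img, label in library if label == final_label]
```

After filtering, a similarity measure (as sketched for the direct-matching variant) would pick the single second image from the remaining candidates.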
Continuing with FIG. 3, in step S330, toning processing is performed on the second image according to the first image to generate the background image.
In short, the color tone of the first image is applied to the second image. For example, the green of the pothos in the first image differs from the green of the leaves in the second image; the green in the second image can be replaced with the green from the first image, which better matches the user's preference.
In an optional implementation, step S330 may include:
according to the number of dominant colors of the first image, determining the same number of layers in the second image as target layers of the second image;
replacing the colors of the target layers in the second image with the dominant colors of the first image.
A dominant color of the first image may be one or more colors with the highest proportion (in terms of pixel ratio) in the first image. For example: the single color with the highest proportion is selected as the dominant color; or the three colors with the highest proportions are selected as dominant colors; or multiple colors each accounting for more than 20% are selected as dominant colors, and so on. The present disclosure does not limit this.
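The dominant-color strategies just described, top-k by pixel share with an optional share threshold, can be sketched by counting pixels. Pixels are represented as hashable color values (e.g. RGB tuples or hex strings); `dominant_colors` is an illustrative name.

```python
from collections import Counter

def dominant_colors(pixels, k=3, min_share=0.0):
    """Return up to k colors ordered by descending pixel share, keeping
    only colors at or above min_share (e.g. min_share=0.2 for the
    more-than-20% rule mentioned in the text)."""
    counts = Counter(pixels)
    total = len(pixels)
    ranked = [(color, n / total) for color, n in counts.most_common()]
    return [color for color, share in ranked[:k] if share >= min_share]
```

With `k=1` this yields the single most frequent color; with `k=3` the three most frequent; combining `k` and `min_share` covers the threshold variant.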
A target layer in the second image may be a main layer of the second image, for example a layer in the foreground part. A vector graphic is generally formed by superimposing multiple layers, arranged in order from foreground to background; one or more of the front-most layers may be selected as target layers.
After the dominant colors of the first image and the target layers of the second image are determined, the dominant colors can be filled into the target layers to replace the target layers' original colors.
In an optional implementation, the toning process may include:
sorting the dominant colors according to the proportion each occupies in the first image;
determining the order of the target layers in the second image;
determining the correspondence between the dominant colors and the target layers according to the ordering of the dominant colors and the ordering of the target layers, and replacing the color of each corresponding target layer with the respective dominant color.
For example, three dominant colors are extracted from the first image and recorded, in descending order of proportion, as dominant color 1, dominant color 2, and dominant color 3. In the second image, the three front-most layers are selected as target layers and recorded in order as layer 1, layer 2, and layer 3. All colors in layer 1 are then replaced with dominant color 1, all colors in layer 2 with dominant color 2, and all colors in layer 3 with dominant color 3. This yields an image very similar in color style to the first image as the final background image, which effectively combines the color of the first image with the content of the second image and better matches the user's preference.
Exemplary embodiments of the present disclosure also provide a background image generation apparatus. Referring to FIG. 9, the background image generation apparatus 900 may include a processor 910 and a memory 920, the memory 920 storing the following program modules:
an image acquisition module 921 for acquiring a first image;
an image matching module 922 for searching a preset image library for a second image matching the first image;
a toning processing module 923 for performing toning processing on the second image according to the first image to generate a background image;
the processor 910 being used to execute the above program modules.
In an optional implementation, the image acquisition module 921 is configured to:
determine at least one local area image from an original image;
generate the first image according to the local area image.
In an optional implementation, the image acquisition module 921 is further configured to:
acquire the original image in response to a preset operation of the user.
In an optional implementation, the preset operation includes: controlling a camera to capture the original image, or selecting the original image from a local image library or an application.
In an optional implementation, determining at least one local area image from the original image includes:
determining at least one local area image in response to a region marquee operation performed by the user in the original image.
In an optional implementation, generating the first image according to the local area image includes:
performing target detection on the local area image to obtain a bounding box of each target in the local area image;
cropping the image of the bounding box as the first image.
In an optional implementation, performing target detection on the local area image to obtain a bounding box of each target in the local area image includes:
performing target detection on the local area image to generate candidate boxes for each target in the local area image;
merging the candidate boxes of the same target to obtain the bounding box of each target in the local area image.
In an optional implementation, merging the candidate boxes of the same target to obtain the bounding box of each target in the local area image includes:
obtaining the coordinates of the corner points of the candidate boxes of the same target in the local area image;
selecting, among the corner coordinates, the smallest x-coordinate and smallest y-coordinate to form a lower-left corner point, and the largest x-coordinate and largest y-coordinate to form an upper-right corner point;
determining a rectangular bounding box in the local area image according to the lower-left corner point and the upper-right corner point.
In an optional implementation, cropping the image of the bounding box as the first image includes:
cropping the image of the bounding box with the largest area as the first image.
In an optional implementation, the image matching module 922 is configured to:
compute similarity between the first image and the images in the preset image library, and select the image with the highest similarity as the second image matching the first image.
In an optional implementation, the image matching module 922 is configured to:
perform recognition processing on the first image to obtain a label of the first image;
search the preset image library for at least one image corresponding to the label of the first image, as the second image.
In an optional implementation, the label of the first image includes a first-level label and a second-level label. Performing recognition processing on the first image to obtain the label of the first image includes:
performing first-level recognition processing on the first image to obtain the first-level label of the first image;
under the first-level label, performing second-level recognition processing on the first image to obtain the second-level label of the first image.
In an optional implementation, searching the preset image library for at least one image corresponding to the label of the first image includes:
searching the preset image library for at least one image corresponding to the second-level label of the first image.
In an optional implementation, the toning processing module 923 is configured to:
according to the number of dominant colors of the first image, determine the same number of layers in the second image as target layers of the second image;
replace the colors of the target layers in the second image with the dominant colors of the first image.
In an optional implementation, replacing the colors of the target layers in the second image with the dominant colors of the first image includes:
sorting the dominant colors according to the proportion each occupies in the first image;
determining the order of the target layers in the second image;
determining the correspondence between the dominant colors and the target layers according to the ordering of the dominant colors and the ordering of the target layers, and replacing the color of each corresponding target layer with the respective dominant color.
In an optional implementation, the dominant colors of the first image include one or more colors with the highest proportion in the first image.
In an optional implementation, the preset image library includes a vector image library.
Exemplary embodiments of the present disclosure also provide another background image generation apparatus. Referring to FIG. 10, the background image generation apparatus 1000 may include:
an image acquisition module 1010 configured to acquire a first image;
an image matching module 1020 configured to search a preset image library for a second image matching the first image;
a toning processing module 1030 configured to perform toning processing on the second image according to the first image to generate a background image.
In an optional implementation, the image acquisition module 1010 is configured to:
determine at least one local area image from an original image;
generate the first image according to the local area image.
In an optional implementation, the image acquisition module 1010 is further configured to:
acquire the original image in response to a preset operation of the user.
In an optional implementation, the preset operation includes: controlling a camera to capture the original image, or selecting the original image from a local image library or an application.
In an optional implementation, determining at least one local area image from the original image includes:
determining at least one local area image in response to a region marquee operation performed by the user in the original image.
In an optional implementation, generating the first image according to the local area image includes:
performing target detection on the local area image to obtain a bounding box of each target in the local area image;
cropping the image of the bounding box as the first image.
In an optional implementation, performing target detection on the local area image to obtain a bounding box of each target in the local area image includes:
performing target detection on the local area image to generate candidate boxes for each target in the local area image;
merging the candidate boxes of the same target to obtain the bounding box of each target in the local area image.
In an optional implementation, merging the candidate boxes of the same target to obtain the bounding box of each target in the local area image includes:
obtaining the coordinates of the corner points of the candidate boxes of the same target in the local area image;
selecting, among the corner coordinates, the smallest x-coordinate and smallest y-coordinate to form a lower-left corner point, and the largest x-coordinate and largest y-coordinate to form an upper-right corner point;
determining a rectangular bounding box in the local area image according to the lower-left corner point and the upper-right corner point.
In an optional implementation, cropping the image of the bounding box as the first image includes:
cropping the image of the bounding box with the largest area as the first image.
In an optional implementation, the image matching module 1020 is configured to:
compute similarity between the first image and the images in the preset image library, and select the image with the highest similarity as the second image matching the first image.
In an optional implementation, the image matching module 1020 is configured to:
perform recognition processing on the first image to obtain a label of the first image;
search the preset image library for at least one image corresponding to the label of the first image, as the second image.
In an optional implementation, the label of the first image includes a first-level label and a second-level label. Performing recognition processing on the first image to obtain the label of the first image includes:
performing first-level recognition processing on the first image to obtain the first-level label of the first image;
under the first-level label, performing second-level recognition processing on the first image to obtain the second-level label of the first image.
In an optional implementation, searching the preset image library for at least one image corresponding to the label of the first image includes:
searching the preset image library for at least one image corresponding to the second-level label of the first image.
In an optional implementation, the toning processing module 1030 is configured to:
according to the number of dominant colors of the first image, determine the same number of layers in the second image as target layers of the second image;
replace the colors of the target layers in the second image with the dominant colors of the first image.
In an optional implementation, replacing the colors of the target layers in the second image with the dominant colors of the first image includes:
sorting the dominant colors according to the proportion each occupies in the first image;
determining the order of the target layers in the second image;
determining the correspondence between the dominant colors and the target layers according to the ordering of the dominant colors and the ordering of the target layers, and replacing the color of each corresponding target layer with the respective dominant color.
In an optional implementation, the dominant colors of the first image include one or more colors with the highest proportion in the first image.
In an optional implementation, the preset image library includes a vector image library.
The specific details of each part of the above apparatuses have been described in detail in the method embodiments and are therefore not repeated here.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product including program code; when the program product runs on a terminal device, the program code causes the terminal device to execute the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Method" section above, for example any one or more of the steps in FIG. 3, FIG. 4, FIG. 5, or FIG. 7. The program product may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device.
The program code contained on a readable medium may be transmitted with any suitable medium, including but not limited to wireless, wired, optical cable, RF, or any suitable combination of the above.
Program code for carrying out the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider). Those skilled in the art will understand that aspects of the present disclosure may be implemented as a system, method, or program product. Therefore, aspects of the present disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may be collectively referred to herein as a "circuit", "module", or "system".
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common general knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

  1. A background image generation method, wherein the method comprises:
    acquiring a first image;
    searching a preset image library for a second image matching the first image;
    performing toning processing on the second image according to the first image to generate a background image.
  2. The method according to claim 1, wherein acquiring the first image comprises:
    determining at least one local area image from an original image;
    generating the first image according to the local area image.
  3. The method according to claim 2, wherein the method further comprises:
    acquiring the original image in response to a preset operation of a user.
  4. The method according to claim 3, wherein the preset operation comprises: controlling a camera to capture the original image, or selecting the original image from a local image library or an application.
  5. The method according to claim 2, wherein determining at least one local area image from the original image comprises:
    determining at least one local area image in response to a region marquee operation performed by the user in the original image.
  6. The method according to claim 2, wherein generating the first image according to the local area image comprises:
    performing target detection on the local area image to obtain a bounding box of each target in the local area image;
    cropping the image of the bounding box as the first image.
  7. The method according to claim 6, wherein performing target detection on the local area image to obtain a bounding box of each target in the local area image comprises:
    performing target detection on the local area image to generate candidate boxes for each target in the local area image;
    merging the candidate boxes of the same target to obtain the bounding box of each target in the local area image.
  8. The method according to claim 7, wherein merging the candidate boxes of the same target to obtain the bounding box of each target in the local area image comprises:
    obtaining coordinates of corner points of the candidate boxes of the same target in the local area image;
    selecting, among the corner coordinates, the smallest x-coordinate and smallest y-coordinate to form a lower-left corner point, and the largest x-coordinate and largest y-coordinate to form an upper-right corner point;
    determining a rectangular bounding box in the local area image according to the lower-left corner point and the upper-right corner point.
  9. The method according to claim 6, wherein cropping the image of the bounding box as the first image comprises:
    cropping the image of the bounding box with the largest area as the first image.
  10. The method according to claim 1, wherein searching the preset image library for a second image matching the first image comprises:
    computing similarity between the first image and the images in the preset image library, and selecting the image with the highest similarity as the second image matching the first image.
  11. The method according to claim 1, wherein searching the preset image library for a second image matching the first image comprises:
    performing recognition processing on the first image to obtain a label of the first image;
    searching the preset image library for at least one image corresponding to the label of the first image, as the second image.
  12. The method according to claim 11, wherein the label of the first image comprises a first-level label and a second-level label;
    performing recognition processing on the first image to obtain the label of the first image comprises:
    performing first-level recognition processing on the first image to obtain the first-level label of the first image;
    under the first-level label, performing second-level recognition processing on the first image to obtain the second-level label of the first image.
  13. The method according to claim 12, wherein searching the preset image library for at least one image corresponding to the label of the first image comprises:
    searching the preset image library for at least one image corresponding to the second-level label of the first image.
  14. The method according to claim 1, wherein performing toning processing on the second image according to the first image comprises:
    according to the number of dominant colors of the first image, determining the same number of layers in the second image as target layers of the second image;
    replacing the colors of the target layers in the second image with the dominant colors of the first image.
  15. The method according to claim 14, wherein replacing the colors of the target layers in the second image with the dominant colors of the first image comprises:
    sorting the dominant colors according to the proportion each dominant color occupies in the first image;
    determining the order of the target layers in the second image;
    determining a correspondence between the dominant colors and the target layers according to the ordering of the dominant colors and the ordering of the target layers, and replacing the color of each corresponding target layer with the respective dominant color.
  16. The method according to claim 14, wherein the dominant colors of the first image comprise one or more colors with the highest proportion in the first image.
  17. The method according to any one of claims 1 to 16, wherein the preset image library comprises a vector image library.
  18. A background image generation apparatus, comprising a processor and a memory, the processor being configured to execute the following program modules stored in the memory:
    an image acquisition module configured to acquire a first image;
    an image matching module configured to search a preset image library for a second image matching the first image;
    a toning processing module configured to perform toning processing on the second image according to the first image to generate a background image.
  19. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 17.
  20. An electronic device, comprising:
    a processor; and
    a memory for storing executable instructions of the processor;
    wherein the processor is configured to execute, via the executable instructions, the method according to any one of claims 1 to 17.
PCT/CN2021/110248 2020-09-14 2021-08-03 Background image generation method and apparatus, storage medium, and electronic device WO2022052669A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21865725.2A EP4213098A4 (en) 2020-09-14 2021-08-03 BACKGROUND IMAGE GENERATION METHOD AND APPARATUS, STORAGE MEDIUM AND ELECTRONIC DEVICE
US18/148,429 US20230222706A1 (en) 2020-09-14 2022-12-29 Background image generation method and apparatus, storage medium, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010963262.3A 2020-09-14 2020-09-14 Background image generation method and apparatus, storage medium, and electronic device
CN202010963262.3 2020-09-14

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/148,429 Continuation US20230222706A1 (en) 2020-09-14 2022-12-29 Background image generation method and apparatus, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2022052669A1 true WO2022052669A1 (zh) 2022-03-17

Family

ID=80539309

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/110248 WO2022052669A1 (zh) 2020-09-14 2021-08-03 Background image generation method and apparatus, storage medium, and electronic device

Country Status (4)

Country Link
US (1) US20230222706A1 (zh)
EP (1) EP4213098A4 (zh)
CN (1) CN114187371A (zh)
WO (1) WO2022052669A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110242126A1 (en) * 2010-04-05 2011-10-06 Microsoft Corporation Capturing image structure detail from a first image and color from a second image
CN104732506A (zh) * 2015-03-27 2015-06-24 浙江大学 一种基于人脸语义分析的人物照片颜色风格转换方法
CN109710791A (zh) * 2018-12-14 2019-05-03 中南大学 一种基于显著滤波器的多源彩色图像颜色迁移方法
CN109754375A (zh) * 2018-12-25 2019-05-14 广州华多网络科技有限公司 图像处理方法、系统、计算机设备、存储介质和终端
CN110097604A (zh) * 2019-05-09 2019-08-06 杭州筑象数字科技有限公司 图像颜色风格转移方法
US20200106925A1 (en) * 2018-09-28 2020-04-02 Brother Kogyo Kabushiki Kaisha Image processing apparatus identifying pixel which satisfies specific condition and performing replacement process on pixel value of identified pixel

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007080834A1 (ja) * 2006-01-10 2007-07-19 Matsushita Electric Industrial Co., Ltd. 色補正処理装置および色補正処理方法、ならびに、動的なカメラ色補正装置およびそれを用いた映像検索装置
US10074161B2 (en) * 2016-04-08 2018-09-11 Adobe Systems Incorporated Sky editing based on image composition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110242126A1 (en) * 2010-04-05 2011-10-06 Microsoft Corporation Capturing image structure detail from a first image and color from a second image
CN104732506A (zh) * 2015-03-27 2015-06-24 浙江大学 一种基于人脸语义分析的人物照片颜色风格转换方法
US20200106925A1 (en) * 2018-09-28 2020-04-02 Brother Kogyo Kabushiki Kaisha Image processing apparatus identifying pixel which satisfies specific condition and performing replacement process on pixel value of identified pixel
CN109710791A (zh) * 2018-12-14 2019-05-03 中南大学 一种基于显著滤波器的多源彩色图像颜色迁移方法
CN109754375A (zh) * 2018-12-25 2019-05-14 广州华多网络科技有限公司 图像处理方法、系统、计算机设备、存储介质和终端
CN110097604A (zh) * 2019-05-09 2019-08-06 杭州筑象数字科技有限公司 图像颜色风格转移方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP4213098A4 *
XINJUN SU, XINGWEI HE: "Color Transfer Based on Texture Similarity", PACKAGING JOURNAL, vol. 8, no. 1, 31 January 2016 (2016-01-31), XP055911926, ISSN: 1674-7100, DOI: 10.3969/j.issn.1674 *

Also Published As

Publication number Publication date
EP4213098A1 (en) 2023-07-19
CN114187371A (zh) 2022-03-15
US20230222706A1 (en) 2023-07-13
EP4213098A4 (en) 2023-12-06

Similar Documents

Publication Publication Date Title
US20210218891A1 (en) Apparatus and Methods for Image Encoding Using Spatially Weighted Encoding Quality Parameters
US20210160556A1 (en) Method for enhancing resolution of streaming file
CN111179282B (zh) 图像处理方法、图像处理装置、存储介质与电子设备
CN111580765A (zh) 投屏方法、投屏装置、存储介质、被投屏设备与投屏设备
CN112270710B (zh) 位姿确定方法、位姿确定装置、存储介质与电子设备
US9860537B2 (en) Multi-focus image data compression
CN111784614A (zh) 图像去噪方法及装置、存储介质和电子设备
CN109788189A (zh) 将相机与陀螺仪融合在一起的五维视频稳定化装置及方法
WO2022206202A1 (zh) 图像美颜处理方法、装置、存储介质与电子设备
JP2007049332A (ja) 記録再生装置および記録再生方法、並びに、記録装置および記録方法
CN112214636A (zh) 音频文件的推荐方法、装置、电子设备以及可读存储介质
CN105681894A (zh) 显示视频文件的装置及方法
WO2021179804A1 (zh) 图像处理方法、图像处理装置、存储介质与电子设备
CN109168032B (zh) 视频数据的处理方法、终端、服务器及存储介质
CN104918027A (zh) 用于生成数字处理图片的方法、电子装置和服务器
CN112381828A (zh) 基于语义和深度信息的定位方法、装置、介质与设备
CN114979785B (zh) 视频处理方法、电子设备及存储介质
JP2018535572A (ja) カメラプレビュー
WO2022193911A1 (zh) 指令信息获取方法及装置、可读存储介质、电子设备
KR102426089B1 (ko) 전자 장치 및 전자 장치의 요약 영상 생성 방법
CN114170554A (zh) 视频检测方法、视频检测装置、存储介质与电子设备
WO2022052669A1 (zh) 背景图像生成方法、装置、存储介质与电子设备
CN113875227A (zh) 信息处理设备、信息处理方法和程序
CN113781336B (zh) 图像处理的方法、装置、电子设备与存储介质
WO2021129444A1 (zh) 文件聚类方法及装置、存储介质和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21865725

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2021865725

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021865725

Country of ref document: EP

Effective date: 20230414