CN112541934A - Image processing method and device - Google Patents


Info

Publication number
CN112541934A
CN112541934A
Authority
CN
China
Prior art keywords
picture
cutting
area
target
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910894294.XA
Other languages
Chinese (zh)
Other versions
CN112541934B (en)
Inventor
张人愉
李小康
卢乾坤
唐楠
阴凉
刘林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910894294.XA priority Critical patent/CN112541934B/en
Publication of CN112541934A publication Critical patent/CN112541934A/en
Application granted granted Critical
Publication of CN112541934B publication Critical patent/CN112541934B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The application discloses an image processing method and device, an electronic device, and a non-transitory computer-readable storage medium storing computer instructions, and relates to the field of image processing. The specific implementation scheme is as follows: performing saliency detection on a picture and determining a target region corresponding to a target object in the picture; determining, based on a target cropping size, the area of the target region expected to be lost after cropping; determining a cropping mode for the picture based on that expected lost area; and cropping the picture in the determined cropping mode to obtain the cropped picture.

Description

Image processing method and device
Technical Field
The application relates to the field of information processing, in particular to the field of image processing, and provides an image processing method and device, an electronic device and a non-transitory computer readable storage medium storing computer instructions.
Background
To convey information to the user cleanly, the aspect ratio of displayed pictures is usually fixed, so cropping a picture of arbitrary size to a fixed size while preserving its subject is an important problem. In the related art there are two main approaches: (1) scaling the original picture to the target size; (2) cropping after the subject has been located by a detection method.
However, approach (1) is only suitable when the target keeps the original aspect ratio; otherwise the resulting picture severely distorts the subject and degrades the display effect. The effect of approach (2) is strongly tied to the picture content: when the content does not belong to a category preset in the detector, cropping performs poorly or fails. The prior art therefore cannot provide a cropping method that both meets practical requirements and guarantees the cropping effect.
Disclosure of Invention
The application provides an image processing method and device, an electronic device and a non-transitory computer readable storage medium storing computer instructions.
The embodiment of the application provides an image processing method, which comprises the following steps:
performing saliency detection on the picture and determining a target region corresponding to a target object in the picture;
determining, based on a target cropping size, the area of the target region expected to be lost after cropping;
determining a cropping mode for the picture based on the area of the target region expected to be lost after cropping;
and cropping the picture in the determined cropping mode to obtain the cropped picture.
Optionally, the determining a cropping mode for the picture based on the area of the target region expected to be lost after cropping includes:
when the area of the target region expected to be lost after cropping is greater than a preset threshold, determining to adopt a cropping mode based on mask information;
and when the area of the target region expected to be lost after cropping is not greater than the preset threshold, determining to adopt a direct cropping mode.
Optionally, the cropping the picture in the determined cropping mode to obtain the cropped picture includes:
when the determined cropping mode is the mask-information-based cropping mode, cropping the picture in a seam-carving manner based on the mask information to obtain the cropped picture.
Optionally, the method further includes:
determining the mask information based on the target region, determined by the saliency detection, that corresponds to the target object in the picture.
An embodiment of the present application further provides an image processing apparatus, including:
a saliency detection unit configured to perform saliency detection on the picture and determine a target region corresponding to a target object in the picture;
a processing unit configured to determine, based on a target cropping size, the area of the target region expected to be lost after cropping; determine a cropping mode for the picture based on that area; and crop the picture in the determined cropping mode to obtain the cropped picture.
Optionally, the processing unit is configured to determine to adopt a mask-information-based cropping mode when the area of the target region expected to be lost after cropping is greater than a preset threshold, and to adopt a direct cropping mode when it is not greater than the preset threshold.
Optionally, the processing unit is configured to, when the determined cropping mode is the mask-information-based cropping mode, crop the picture in a seam-carving manner based on the mask information to obtain the cropped picture.
Optionally, the processing unit is configured to determine the mask information based on the target region, determined by the saliency detection, that corresponds to the target object in the picture.
An embodiment of the present application further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding claims.
The present application also provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any of the foregoing.
One embodiment of the above application has the following advantages or benefits: the subject area expected to be lost is predicted from the subject region of the picture and the target cropping size, and the cropping mode is then selected based on that predicted loss. A suitable cropping mode can thus be chosen according to actual requirements while the integrity of the subject of the cropped picture is preserved.
Other effects of the above alternatives are described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a first schematic diagram of a flow of an image processing method of the present application;
FIG. 2 is a second schematic diagram of the flow of the image processing method of the present application;
FIG. 3 is a schematic diagram of the structure of the image processing apparatus of the present application;
fig. 4 is a block diagram of an electronic device for implementing an image processing method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are likewise omitted for clarity and conciseness.
The present application provides an image processing method, as shown in fig. 1, including:
S101: performing saliency detection on the picture and determining a target region corresponding to a target object in the picture;
S102: determining, based on a target cropping size, the area of the target region expected to be lost after cropping;
S103: determining a cropping mode for the picture based on the area of the target region expected to be lost after cropping;
S104: cropping the picture in the determined cropping mode to obtain the cropped picture.
The embodiments of the present application can be applied to an electronic device with processing capability; for example, the electronic device may be a smartphone, a tablet computer, a personal computer (PC), a server, and the like, which are not exhaustively listed here.
In S101, saliency detection is performed on the picture. Saliency detection is a method that simulates human visual characteristics to extract the regions of interest in an image, and it can effectively locate the salient objects in the picture.
In one example, the saliency detection may be performed as follows:
acquiring the saliency value of each pixel in the picture to form a saliency map; acquiring N salient regions of the picture from the saliency map according to its salient pixels, N being an integer greater than or equal to 1; and obtaining the saliency result of the picture from the N salient regions.
The N salient regions may serve as the target regions corresponding to target objects in this embodiment. A target region may consist of the position information of at least one pixel; for example, when N target objects are detected, each target object is represented in the picture by the positions of its pixels, and the positions of all pixels of one target object can be understood as its target region.
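As a rough illustration of how salient pixels can be grouped into the N salient regions described above, the following is a minimal pure-Python sketch using 4-connected component labelling on a thresholded saliency map; it is not the patent's implementation, and the `threshold` value and list-of-lists representation are assumptions for the example.

```python
from collections import deque

def salient_regions(sal_map, threshold=0.5):
    """Group salient pixels (value >= threshold) into 4-connected regions.

    Returns a list of regions, each a list of (row, col) positions; the
    positions of one region correspond to one target region in the picture.
    """
    h, w = len(sal_map), len(sal_map[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if sal_map[r][c] >= threshold and not seen[r][c]:
                # Breadth-first flood fill of one connected salient region.
                queue, region = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                                and sal_map[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions
```

For a map with two separated salient blobs, the function returns two regions in scan order.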
In addition, the method provided by this embodiment may further include: determining the mask information based on the target region, determined by the saliency detection, that corresponds to the target object in the picture.
The mask information may be binary: each pixel of a target region corresponding to a target object is set to 1, and every pixel outside the target regions is set to 0.
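The binary mask described above can be sketched as follows; this is an illustrative helper, not the patent's implementation, and the region representation (lists of (row, col) positions) is an assumption carried over from the saliency example.

```python
def build_mask(height, width, target_regions):
    """Binary mask information: 1 inside target regions, 0 elsewhere."""
    mask = [[0] * width for _ in range(height)]
    for region in target_regions:
        for r, c in region:
            mask[r][c] = 1
    return mask
```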
Before S102, a target cropping size, or a target cropping ratio, set by the user may be acquired.
The target cropping size may be a width and height of the cropped picture directly selected by the user, in pixels, for example 900 x 500.
Alternatively, the user may select a target cropping ratio, for example from a plurality of preset ratios such as 4:3 or 16:9. In that case, the target cropping size is determined from the selected target cropping ratio.
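One plausible way to derive a target cropping size from a selected ratio is to take the largest crop of that aspect ratio fitting inside the picture; the patent does not specify this rule, so the sketch below is an assumption for illustration.

```python
def crop_size_from_ratio(width, height, ratio_w, ratio_h):
    """Largest crop of aspect ratio ratio_w:ratio_h that fits in the picture."""
    # Try to keep the full width and shrink the height to match the ratio.
    crop_w = width
    crop_h = width * ratio_h // ratio_w
    if crop_h > height:
        # Picture is too short for full width: keep the full height instead.
        crop_h = height
        crop_w = height * ratio_w // ratio_h
    return crop_w, crop_h
```

For a 1000 x 800 picture and a 16:9 ratio this yields a 1000 x 562 target size.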
In this embodiment, the cropping mode is selected by relating the cropping length (i.e., the number of rows or columns to be trimmed) to the trimmed subject area (i.e., how many salient pixels, and hence how much of the target region, the trimmed portion would contain). In other words, when the user selects a cropping ratio, the portion to be trimmed is estimated from that ratio, and the part of the target region falling inside that portion is the target region expected to be lost after cropping.
An index is built by scanning the mask information, determined by the picture's saliency detection, row by row from the top and bottom and column by column from the left and right; it stores the lost target-region area for a range of cropping sizes. For example, if a given cropping size requires 100 rows to be trimmed in the row direction, the index stores the minimum target-region area lost when those 100 rows are trimmed.
Whether direct cropping is feasible can then be determined from the area of the target region expected to be lost and the preset threshold.
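The row-scanning index described above can be sketched for one direction as follows; this is a minimal interpretation of the patent's description (the combined top/bottom split and prefix-sum layout are assumptions), shown here for rows only, with columns handled symmetrically.

```python
def lost_area_index(mask):
    """index[n] = minimum target-region area lost when n rows in total are
    trimmed from the top and bottom of the mask (split in the best way).

    Built once from the saliency mask, it lets the crop planner look up the
    expected loss for any crop height without rescanning the picture.
    """
    row_sums = [sum(row) for row in mask]   # salient pixels per row
    h = len(row_sums)
    prefix = [0]                            # prefix[i] = salient pixels in first i rows
    for s in row_sums:
        prefix.append(prefix[-1] + s)
    index = {}
    for n in range(h + 1):
        # Try every split of n trimmed rows between top and bottom.
        index[n] = min(prefix[top] + (prefix[h] - prefix[h - (n - top)])
                       for top in range(n + 1))
    return index
```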
Further, S103 above selects the cropping mode, and specifically includes:
when the area of the target region expected to be lost after cropping is greater than a preset threshold, determining to adopt a cropping mode based on mask information;
and when the area of the target region expected to be lost after cropping is not greater than the preset threshold, determining to adopt a direct cropping mode.
Direct cropping retains the information of the original image to the greatest possible extent and is the more efficient operation. If the area of the target region expected to be lost is small, i.e., below the preset threshold, the loss can be considered within tolerance, so direct cropping can be chosen in this case.
If, on the other hand, the expected lost area is large, direct cropping would remove too much of the target object's region, so a seam-carving mode must be selected instead.
The preset threshold may be set according to the service. For a service that can tolerate a larger lost target-region area, the threshold may be set higher; conversely, for a service that tolerates only a smaller loss, the threshold may be set lower.
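The threshold decision of S103 reduces to a small comparison; the mode names below are illustrative labels, not identifiers from the patent.

```python
def choose_crop_mode(expected_lost_area, threshold):
    """Direct crop if the expected target-region loss is tolerable,
    otherwise fall back to mask-guided seam carving."""
    return "direct" if expected_lost_area <= threshold else "seam_carving"
```

A service that tolerates more loss would simply pass a larger `threshold`.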
In S104 above, the cropping the picture in the determined cropping mode to obtain the cropped picture includes:
when the determined cropping mode is the mask-information-based cropping mode, cropping the picture in a seam-carving manner based on the mask information to obtain the cropped picture.
Specifically, this may be: cropping the picture in a seam-carving manner based on the target cropping size and the mask information to obtain the cropped picture.
To guarantee the integrity of the subject region, this embodiment uses the mask information corresponding to the picture's saliency result as the cropping mask of the seam carving, so that the deleted (or added) seam lines do not pass through the subject mask area.
The process of seam carving with the saliency mask as a cropping mask may include the following steps.
First, the image pixel gradient (the summed differences to the pixel neighbourhood) is used as the energy map of the image.
Then the minimum-energy line in the energy map is found. Suppose a minimum-energy line is searched in the vertical direction from the upper boundary of the image. Taking an upper boundary point P(x, y) as the starting point of the deletion line, the three neighbouring points in the next row, P(x, y+1), P(x-1, y+1) and P(x+1, y+1), are examined, and the one with the minimum energy is taken as the next deletion point, until the lower boundary is reached; the connected points form the minimum-energy line starting at P(x, y). The global minimum-energy line is the smallest among all minimum-energy lines started from the upper boundary points.
When the saliency mask is added, the search additionally checks whether the next candidate point lies inside the subject mask. For example, if the neighbour P(x, y+1) of P(x, y) has the minimum energy but hits the subject mask, that point is not selected; the point with the next smallest energy is chosen instead. This process is computationally heavy and a direct computation is hard to engineer, so dynamic programming is used to optimize it.
Finally, the minimum-energy line is deleted, shrinking the whole picture by one row or one column of pixels, and the points above and below (or to the left and right of) the removed pixels are replaced by their average, which improves the smoothness of the image.
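The steps above can be sketched as a single vertical seam removal in pure Python. This is an illustrative dynamic-programming version, not the patent's implementation: the grayscale list-of-lists input, the 1e9 penalty standing in for "hits the subject mask", and the omission of the averaging smoothing step are all assumptions made for brevity.

```python
def remove_vertical_seam(gray, mask):
    """Remove one vertical minimum-energy seam from a grayscale picture.

    Energy is the summed absolute difference to the 4-neighbourhood; pixels
    inside the subject mask get a huge energy so the seam is steered around
    the target region. The globally minimal seam is found by dynamic
    programming and deleted from every row.
    """
    h, w = len(gray), len(gray[0])

    def energy(y, x):
        if mask[y][x]:
            return 1e9                      # never cut through the subject
        e = 0
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                e += abs(gray[y][x] - gray[ny][nx])
        return e

    # cost[y][x] = minimal accumulated energy of a seam ending at (y, x).
    cost = [[energy(0, x) for x in range(w)]]
    for y in range(1, h):
        cost.append([energy(y, x) + min(cost[y - 1][max(0, x - 1):x + 2])
                     for x in range(w)])

    # Backtrack the globally minimal seam from the bottom row upwards.
    x = min(range(w), key=lambda i: cost[h - 1][i])
    seam = [x]
    for y in range(h - 1, 0, -1):
        lo = max(0, x - 1)
        x = min(range(lo, min(w, x + 2)), key=lambda i: cost[y - 1][i])
        seam.append(x)
    seam.reverse()

    # Delete the seam; the picture loses one column.
    return [row[:sx] + row[sx + 1:] for row, sx in zip(gray, seam)]
```

Repeating this (and its horizontal counterpart) until the target cropping size is reached gives the mask-guided seam-carving mode.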
The scheme of this embodiment is described with reference to fig. 2. First, saliency computation is performed on the original picture to obtain the target region corresponding to the target object, and the corresponding mask information is generated. The target cropping size is determined from the cropping ratio selected by the user, and the subject area expected to be lost is predicted from that size. If the predicted lost subject area is smaller than the preset threshold, direct cropping is adopted, finally yielding the 4:3 crop shown in fig. 2; if it is larger than the preset threshold, seam carving based on the mask information is adopted, yielding the 3:2 or 16:9 crop.
With this scheme, the subject area that would be lost can therefore be predicted from the subject region of the picture and the target cropping size, and the cropping mode selected based on that prediction. A suitable cropping mode can thus be chosen according to actual requirements while the integrity of the subject of the cropped picture is preserved.
The present application provides an image processing apparatus, as shown in fig. 3, comprising:
a saliency detection unit 31 configured to perform saliency detection on a picture and determine a target region corresponding to a target object in the picture;
a processing unit 32 configured to determine, based on a target cropping size, the area of the target region expected to be lost after cropping; determine a cropping mode for the picture based on that area; and crop the picture in the determined cropping mode to obtain the cropped picture.
The processing unit 32 is configured to determine to adopt a mask-information-based cropping mode when the area of the target region expected to be lost after cropping is greater than a preset threshold, and to adopt a direct cropping mode when it is not greater than the preset threshold.
The processing unit 32 is configured to, when the determined cropping mode is the mask-information-based cropping mode, crop the picture in a seam-carving manner based on the mask information to obtain the cropped picture.
The processing unit 32 is configured to determine the mask information based on the target region, determined by the saliency detection, that corresponds to the target object in the picture.
With this apparatus, the subject area that would be lost can be predicted from the subject region of the picture and the target cropping size, and the cropping mode selected based on that prediction; a suitable cropping mode can thus be chosen according to actual requirements while the integrity of the subject of the cropped picture is preserved.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 4 is a block diagram of an electronic device for the image processing method of the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 4, the electronic apparatus includes: one or more processors 801, a memory 802, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory, to display graphical information for a graphical user interface (GUI) on an external input/output device such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing a portion of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 4, one processor 801 is taken as an example.
The memory 802 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the image processing method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the image processing method provided by the present application.
The memory 802, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the image processing method in the embodiments of the present application. The processor 801 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 802, that is, implements the image processing method in the above-described method embodiment.
The memory 802 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for image processing, and the like. Further, the memory 802 may include high speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 802 optionally includes memory located remotely from the processor 801, which may be connected to image processing electronics over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the image processing method may further include: an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803, and the output device 804 may be connected by a bus or other means, as exemplified by the bus connection in fig. 4.
The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the image processing electronic apparatus; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 804 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications, or code) include machine instructions for a programmable processor and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (Cathode Ray Tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiments of the application, the subject area expected to be lost is predicted from the subject region of the picture and the target cropping size, and the cropping mode is selected based on that prediction, so that a suitable cropping mode can be chosen according to actual requirements while the integrity of the subject of the cropped picture is preserved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An image processing method, comprising:
performing saliency detection on a picture and determining a target region corresponding to a target object in the picture;
determining, based on a target cropping size, the area of the target region expected to be lost after cropping;
determining a cropping mode for the picture based on the area of the target region expected to be lost after cropping;
and cropping the picture in the determined cropping mode to obtain the cropped picture.
2. The method of claim 1, wherein determining the cropping mode for the picture based on the area of the target region expected to be lost after cropping comprises:
when the area of the target region expected to be lost after cropping is larger than a preset threshold, determining to adopt a cropping mode based on mask information; and
when the area of the target region expected to be lost after cropping is not larger than the preset threshold, determining to adopt a direct cropping mode.
3. The method according to claim 1, wherein cropping the picture based on the determined cropping mode to obtain a cropped picture comprises:
when the determined cropping mode is the cropping mode based on mask information, cropping the picture by seam carving based on the mask information to obtain the cropped picture.
4. The method of claim 1, further comprising:
determining the mask information based on the target region corresponding to the target object in the picture determined by the saliency detection.
5. An image processing apparatus, characterized in that the apparatus comprises:
a saliency detection unit, configured to perform saliency detection on a picture and determine a target region corresponding to a target object in the picture; and
a processing unit, configured to determine, based on a target cropping size, the area of the target region expected to be lost after cropping; determine a cropping mode for the picture based on the area of the target region expected to be lost after cropping; and crop the picture based on the determined cropping mode to obtain a cropped picture.
6. The apparatus according to claim 5, wherein the processing unit is configured to: when the area of the target region expected to be lost after cropping is larger than a preset threshold, determine to adopt a cropping mode based on mask information; and when the area of the target region expected to be lost after cropping is not larger than the preset threshold, determine to adopt a direct cropping mode.
7. The apparatus according to claim 5, wherein the processing unit is configured to, when the determined cropping mode is the cropping mode based on mask information, crop the picture by seam carving based on the mask information to obtain the cropped picture.
8. The apparatus according to claim 5, wherein the processing unit is configured to determine the mask information based on the target region corresponding to the target object in the picture determined by the saliency detection.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-4.
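As an illustrative sketch (not part of the claims or the patent's own implementation), the decision logic of claims 1, 2, and 4 — measure how much of the saliency-detected target region a direct crop would discard, then compare that loss against a preset threshold — could look like the following. The function names, the centered-crop assumption, and the 0/1 list-of-lists mask format are all assumptions of this sketch:

```python
def expected_saliency_loss(mask, crop_w, crop_h):
    """Area (pixel count) of the salient target region that a centered
    direct crop to crop_w x crop_h would discard.

    mask: list of rows of 0/1 values, 1 where saliency detection marked
    the target object (the "mask information" of claim 4).
    """
    h, w = len(mask), len(mask[0])
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    total = sum(sum(row) for row in mask)
    kept = sum(sum(row[left:left + crop_w])
               for row in mask[top:top + crop_h])
    return total - kept


def choose_cropping_mode(mask, crop_w, crop_h, threshold):
    """Claim-2 style decision: fall back to a mask-based cropping mode
    when a direct crop would lose too much of the target region."""
    loss = expected_saliency_loss(mask, crop_w, crop_h)
    return "mask_based" if loss > threshold else "direct_crop"
```

For example, a 10x10 picture whose 4x4 target sits in the top-left corner loses 12 of its 16 salient pixels under a centered 6x6 crop, so with a threshold of 5 the mask-based mode is selected; the same target centered in the picture loses nothing and a direct crop suffices.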
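Claims 3 and 7 invoke seam carving guided by the mask information. A minimal dynamic-programming sketch of one step — inflating the energy of masked (salient) pixels so the lowest-energy vertical seam steers around the target region — might look like this; the list-of-rows energy representation and the penalty constant are assumptions of this sketch, not the patent's implementation:

```python
def biased_energy(energy, mask, penalty=10 ** 6):
    """Add a large penalty where the mask marks the target object,
    so seams avoid the salient region."""
    return [[e + (penalty if m else 0) for e, m in zip(e_row, m_row)]
            for e_row, m_row in zip(energy, mask)]


def min_vertical_seam(energy):
    """Return the x-coordinate per row of the lowest-energy vertical
    seam (the classic seam-carving dynamic program)."""
    h, w = len(energy), len(energy[0])
    cost = [list(energy[0])]
    for y in range(1, h):
        prev = cost[-1]
        # Each cell continues the cheapest of its three upper neighbors.
        cost.append([energy[y][x] + min(prev[max(0, x - 1):x + 2])
                     for x in range(w)])
    # Backtrack from the cheapest bottom cell upward.
    seam = [min(range(w), key=lambda x: cost[-1][x])]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        seam.append(min(range(max(0, x - 1), min(w, x + 2)),
                        key=lambda i: cost[y][i]))
    seam.reverse()
    return seam
```

Removing one such seam per iteration until the picture reaches the target cropping size would be the surrounding loop implied by the claims; the penalty keeps every removed seam outside the target region whenever a path around it exists.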
CN201910894294.XA 2019-09-20 2019-09-20 Image processing method and device Active CN112541934B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910894294.XA CN112541934B (en) 2019-09-20 2019-09-20 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910894294.XA CN112541934B (en) 2019-09-20 2019-09-20 Image processing method and device

Publications (2)

Publication Number Publication Date
CN112541934A true CN112541934A (en) 2021-03-23
CN112541934B CN112541934B (en) 2024-02-27

Family

ID=75012444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910894294.XA Active CN112541934B (en) 2019-09-20 2019-09-20 Image processing method and device

Country Status (1)

Country Link
CN (1) CN112541934B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110305397A1 (en) * 2010-03-08 2011-12-15 Robinson Piramuthu Systems and methods for retargeting an image utilizing a saliency map
TW201413600A (en) * 2012-09-17 2014-04-01 Thomson Licensing Device and method for detecting the presence of a logo in a picture
CN103914689A (en) * 2014-04-09 2014-07-09 百度在线网络技术(北京)有限公司 Picture cropping method and device based on face recognition
CN106663208A (en) * 2014-08-28 2017-05-10 高通股份有限公司 Temporal saliency map
CN108062739A (en) * 2017-11-02 2018-05-22 广东数相智能科技有限公司 A kind of picture intelligence method of cutting out and device based on body position
CN109146892A (en) * 2018-07-23 2019-01-04 北京邮电大学 A kind of image cropping method and device based on aesthetics
CN109325484A (en) * 2018-07-30 2019-02-12 北京信息科技大学 Flowers image classification method based on background priori conspicuousness
CN110136142A (en) * 2019-04-26 2019-08-16 微梦创科网络科技(中国)有限公司 A kind of image cropping method, apparatus, electronic equipment
CN111461967A (en) * 2020-04-01 2020-07-28 北京字节跳动网络技术有限公司 Picture processing method, device, equipment and computer readable medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CZL389: "Implementation of the image seam carving algorithm - SIGGRAPH 2007", pages 1 - 5, Retrieved from the Internet <URL:https://blog.csdn.net/czl389/article/details/53746668> *
LIN Xiao; ZHANG Xiaoyu; MA Lizhuang: "Image resizing method based on seam carving and warping", Computer Science, no. 09, pages 295 - 298 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569872A (en) * 2021-08-10 2021-10-29 大连海事大学 Multi-resolution shoe wearing footprint sequence identification method based on pressure significance
CN113569872B (en) * 2021-08-10 2024-03-19 大连海事大学 Multi-resolution shoe-wearing footprint sequence identification method based on pressure significance

Also Published As

Publication number Publication date
CN112541934B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN110659600B (en) Object detection method, device and equipment
CN111462174B (en) Multi-target tracking method and device and electronic equipment
KR102432561B1 (en) Edge-based three-dimensional tracking and registration method and apparatus for augmented reality, and electronic device
CN111967297A (en) Semantic segmentation method and device for image, electronic equipment and medium
CN111415298A (en) Image splicing method and device, electronic equipment and computer readable storage medium
CN111353419A (en) Image comparison method and device, electronic equipment and storage medium
CN111966925B (en) Building interest point weight judging method and device, electronic equipment and storage medium
CN110633717A (en) Training method and device for target detection model
CN111601013B (en) Method and apparatus for processing video frames
CN111861991A (en) Method and device for calculating image definition
CN112085814A (en) Electronic map display method, device, equipment and readable storage medium
CN111191619A (en) Method, device and equipment for detecting virtual line segment of lane line and readable storage medium
CN111178137A (en) Method, device, electronic equipment and computer readable storage medium for detecting real human face
CN110532415A (en) Picture search processing method, device, equipment and storage medium
CN111782633B (en) Data processing method and device and electronic equipment
CN112541934B (en) Image processing method and device
CN111696134A (en) Target detection method and device and electronic equipment
JP7269979B2 (en) Method and apparatus, electronic device, computer readable storage medium and computer program for detecting pedestrians
CN111860226B (en) Method, apparatus, device and storage medium for image processing
US11488384B2 (en) Method and device for recognizing product
CN111753960B (en) Model training and image processing method and device, electronic equipment and storage medium
CN112150380A (en) Method and device for correcting image, electronic equipment and readable storage medium
CN113362438A (en) Panorama rendering method, device, electronic apparatus, medium, and program
CN112381877A (en) Positioning fusion and indoor positioning method, device, equipment and medium
CN112015767A (en) Data thinning method, data thinning apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant