CN111461965A - Picture processing method and device, electronic equipment and computer readable medium


Info

Publication number: CN111461965A (application CN202010251636.9A; granted publication CN111461965B)
Authority: CN (China)
Prior art keywords: cutting, picture, information, target picture, clipping
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 周杰, 李可, 许世坤, 王长虎
Current and original assignee: Beijing ByteDance Network Technology Co Ltd (the listed assignees may be inaccurate)
Application filed by Beijing ByteDance Network Technology Co Ltd, with priority to CN202010251636.9A

Classifications

    • G06T3/04
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/11 Region-based segmentation
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20081 Training; Learning
    • G06T2207/20132 Image cropping

Abstract

Embodiments of the present disclosure disclose a picture processing method and apparatus, an electronic device, and a computer readable medium. One embodiment of the method comprises: detecting an object displayed in a target picture to obtain object information, the object information comprising range information characterizing the display range of the object and object category information; constructing a score map according to the object information, in which the values at positions corresponding to the display ranges of objects of different categories differ; determining a cropping area according to the sum of the values at positions corresponding to the display ranges of the different objects represented by the different object category information; and cropping the target picture according to picture cropping information and the score map to generate a cropped picture. Cropping that distinguishes regions by their importance makes picture cropping more targeted and improves the user experience.

Description

Picture processing method and device, electronic equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a picture processing method, a picture processing apparatus, an electronic device, and a computer readable medium.
Background
Picture processing technology can include having a computer or terminal automatically crop a picture to the cropping proportions a user requires. Compared with traditional picture cropping, this saves the user the manual selection step and thereby simplifies cropping.
However, the importance of foreground objects may differ from picture to picture, and the result of automatic cropping typically does not vary with that importance, so it is difficult to meet user requirements.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a picture processing method, apparatus, electronic device and computer readable medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a picture processing method, comprising: detecting an object displayed in a target picture to obtain object information, the object information comprising range information characterizing the display range of the object and object category information; constructing a score map according to the object information, in which the values at positions corresponding to the display ranges of objects of different categories differ; determining a cropping area according to the sum of the values at positions corresponding to the display ranges of the different objects represented by the different object category information; and cropping the target picture according to picture cropping information and the score map to generate a cropped picture.
In a second aspect, some embodiments of the present disclosure provide a picture processing apparatus, comprising: a detection unit configured to detect an object displayed in a target picture to obtain object information, the object information comprising range information characterizing the display range of the object and object category information; a construction unit configured to construct, according to the object information, a score map in which the values at positions corresponding to the display ranges of objects of different categories differ; a determining unit configured to determine a cropping area according to the sum of the values at positions corresponding to the display ranges of the different objects represented by the different object category information; and a first generating unit configured to crop the target picture according to picture cropping information and the score map to generate a cropped picture.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method as in any one of the first aspects.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements the method as in any one of the first aspect.
The various embodiments of the present disclosure described above have the following advantageous effects. First, object information, which may include range information and object category information, is obtained by detecting an object displayed in a target picture. A score map is then constructed according to the object information. Finally, the processed picture the user requires is obtained according to the picture cropping information and the score map. In particular, the score map can represent the importance of each object displayed in the target picture, and the target picture is cropped according to that importance. Cropping that distinguishes regions by importance makes picture cropping more targeted and improves the user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Figs. 1-4 are schematic diagrams of one application scenario of a picture processing method according to some embodiments of the present disclosure;
Fig. 5 is a flow diagram of some embodiments of a picture processing method according to the present disclosure;
Fig. 6 is a flow diagram of further embodiments of a picture processing method according to the present disclosure;
Fig. 7 is a schematic block diagram of some embodiments of a picture processing apparatus according to the present disclosure;
Fig. 8 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1-4 are schematic diagrams of an application scenario of a picture processing method according to some embodiments of the present disclosure.
As shown in fig. 1, a user may first determine a target picture 103 on a page 102 displayed by a computing device 101.
As shown in fig. 2, the computing device 101 detects the objects displayed in the target picture 103 determined by the user to obtain object information. For example, by detecting the "face" object and the "text" object displayed in the target picture 103, range information 104 of the "face" object and range information 105 of the "text" object are obtained, along with two different object categories, a "face category" and a "text category".
As shown in fig. 3, the computing device 101 constructs a "face" score map 106 and a "text" score map 107 from the "face category", the "text category", the range information 104 of the "face" object, and the range information 105 of the "text" object. The size of the "face" score map 106 matches the display range of the "face" object, and likewise the size of the "text" score map 107 matches the display range of the "text" object. Further, the values within the display ranges of the "face" object and the "text" object may be "100" and "40", respectively. As an example, the number of such values may equal the number of pixels within the display range of the object.
As shown in fig. 4, the computing device 101 determines the sums of the values in the "face" score map 106 and the "text" score map 107 to be "1600" and "360", respectively. The display range 104 of the "face" object, represented by the "face" score map 106 with the largest sum of values, is then determined as the cropping region. Finally, the target picture 103 is cropped according to the picture cropping information and the cropping region, generating a cropped picture 108. As an example, after the user inputs the cropping length and width, the computing device 101 adjusts the cropping region upward, downward, leftward, and/or rightward so that the adjusted region matches the length and width the user entered. Finally, the adjusted cropping region is cropped out, generating the cropped picture 108.
It is understood that the computing device 101 executing the picture processing method may be a terminal device, a server, a device formed by integrating a terminal device and a server through a network, or various kinds of software. By way of example, computing device 101 may be any of a variety of electronic devices with information processing capabilities, including but not limited to smartphones, tablets, e-book readers, laptop computers, desktop computers, and the like. When the execution subject is software, it may be installed in the electronic devices listed above. It may be implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of terminal devices and servers in fig. 1 is merely illustrative. There may be any number of terminal devices and servers, as desired for implementation.
With continuing reference to fig. 5, a flow 500 of some embodiments of a picture processing method according to the present disclosure is shown. The picture processing method comprises the following steps:
step 501, detecting an object displayed in a target picture to obtain object information.
In some embodiments, the execution subject of the picture processing method (for example, the computing device 101 shown in fig. 1) may detect an object displayed in a target picture selected by the user through a deep learning detection method to obtain object information. The execution subject may also detect the object displayed in the user-selected target picture through a deep learning image segmentation network to obtain object information.
The deep learning detection method may include, but is not limited to, at least one of: the SSD algorithm (Single Shot MultiBox Detector), the R-CNN algorithm (Region-based Convolutional Neural Networks), the Fast R-CNN algorithm, the SPP-Net algorithm (Spatial Pyramid Pooling Network), the YOLO algorithm (You Only Look Once), the FPN algorithm (Feature Pyramid Network), the DCN algorithm (Deformable ConvNets), and the RetinaNet object detection algorithm.
Specifically, the object information may include range information characterizing the display range of the object and object category information. The object may be a person, an item, an animal, text, or the like. The object category information may be the category the person, item, or text represents, such as a face, a commodity, or text. It should be noted that the target picture may be a picture designated by the user or determined by the computing device's default settings. In addition, the target picture may be a picture the user selects locally or a picture downloaded from a network.
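As a minimal sketch, the object information described above can be represented as follows. The detector itself (e.g., an SSD or YOLO model) is assumed rather than implemented, and its output is mocked; the dict keys and the (x0, y0, x1, y1) box format are illustrative assumptions, not part of the patent.

```python
# Mocked detector output for a picture showing a face and a text block.
# Each entry pairs an object category with its display range (a box).
detections = [
    {"category": "face", "box": (40, 10, 80, 50)},   # (x0, y0, x1, y1)
    {"category": "text", "box": (5, 70, 65, 90)},
]

def object_info(dets):
    """Split detector output into range information and category information."""
    ranges = [d["box"] for d in dets]
    categories = [d["category"] for d in dets]
    return ranges, categories

ranges, categories = object_info(detections)
```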
Step 502, according to the object information, a score map is constructed.
In some embodiments, the execution subject may construct a score map with the same size as the target picture. Specifically, the positions of values in the score map may correspond one-to-one to the positions of pixels in the target picture, so the number of pixels of the target picture in the width or height direction equals the number of values in the score map along that direction. Different values can then be set at the positions corresponding to the display ranges of objects of different categories according to the object category information. As an example, the object category information may characterize "face" and "text". The execution subject may set a value, for example "100", at the positions corresponding to the display range of the "face", and a value, for example "40", at the positions corresponding to the display range of the "text". The value at positions not covered by any object may be "0". It should be noted that the values for the display ranges of the "face" and the "text" may be set by default by the execution subject or set by the user; a user-set value may be acquired by detecting the user's input operation on the display screen.
Further, the execution subject may construct an object class score map according to values included in the range information of different object classes. As an example, the description is made in conjunction with fig. 3. As shown in fig. 3, the object categories include a "face category" and a "text category". The execution subject constructs a "face" score map 106 from the values included in the display range of the "face" object. The execution body constructs a "text" score map 107 from the numerical values included in the display range of the "text" object. The size of the "face" score map 106 is adapted to the display range of the "face" object, and similarly, the size of the "text" score map 107 is adapted to the display range of the "text" object.
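The score-map construction described in this step can be sketched in Python as follows. The (x0, y0, x1, y1) box format is an illustrative assumption; the class weights (face = 100, text = 40) follow the example values in the description.

```python
import numpy as np

# Per-class weights, mirroring the "face" = 100 / "text" = 40 example.
CLASS_SCORES = {"face": 100, "text": 40}

def build_score_map(height, width, detections):
    """Score map with the target picture's size: positions inside an
    object's display range get that class's value; the rest stay 0."""
    score_map = np.zeros((height, width), dtype=np.int64)
    for det in detections:
        x0, y0, x1, y1 = det["box"]
        score_map[y0:y1, x0:x1] = CLASS_SCORES[det["category"]]
    return score_map

# A 10x10 picture with a 4x4 face and a 3x3 text block, mirroring the
# figures: the face range sums to 100 * 16 = 1600, the text to 40 * 9 = 360.
dets = [
    {"category": "face", "box": (1, 1, 5, 5)},
    {"category": "text", "box": (6, 6, 9, 9)},
]
score_map = build_score_map(10, 10, dets)
```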
Step 503, determining a cropping area according to the sum of the values at positions corresponding to the display ranges of the different objects represented by different object category information.
In some embodiments, the values in each object category score map are summed, and the display range of the object represented by the object category score map with the largest sum is determined as the cropping area. This is explained with reference to fig. 4. As shown in fig. 4, the sums of the values in the "face" score map 106 and the "text" score map 107 are "1600" and "360", respectively. The display range 104 of the "face" object, represented by the "face" score map 106 with the largest sum of values, is therefore determined as the cropping area.
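The selection of the cropping area in this step can be sketched as follows, reusing the 1600-vs-360 example from the figures; the boxes and weights are illustrative.

```python
import numpy as np

def pick_crop_region(score_map, boxes):
    """Sum the score-map values inside each object's display range and
    keep the range with the largest total."""
    def region_sum(box):
        x0, y0, x1, y1 = box
        return score_map[y0:y1, x0:x1].sum()
    return max(boxes, key=region_sum)

score_map = np.zeros((10, 10), dtype=np.int64)
score_map[1:5, 1:5] = 100   # "face" range, total 1600
score_map[6:9, 6:9] = 40    # "text" range, total 360
crop = pick_crop_region(score_map, [(1, 1, 5, 5), (6, 6, 9, 9)])
```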
Step 504, cropping the target picture according to the picture cropping information and the cropping area to generate a cropped picture.
In some embodiments, the picture cropping information may include, but is not limited to, a cropping ratio, a cropping width, and/or a cropping height. The cropping area is adjusted according to the determined cropping information, and the cropped picture is obtained by cropping the target picture at the cropping area. Alternatively, the cropping area may be adjusted so that the sum of the values included in the cropped picture is maximal. The picture cropping information may be set by default by the execution subject or set by the user; a user-set value may be acquired by detecting the user's input operation on the display screen.
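The adjustment of the cropping area to a requested size can be sketched as follows. Splitting the padding evenly between the two sides is an illustrative assumption; the description only requires the adjusted region to match the requested width and height while staying inside the picture.

```python
def adjust_crop(box, crop_w, crop_h, pic_w, pic_h):
    """Grow a cropping area (x0, y0, x1, y1) to crop_w x crop_h,
    clamped to the picture bounds."""
    x0, y0, x1, y1 = box
    pad_w = max(0, crop_w - (x1 - x0))
    pad_h = max(0, crop_h - (y1 - y0))
    x0 = max(0, x0 - pad_w // 2)
    y0 = max(0, y0 - pad_h // 2)
    x1 = min(pic_w, x0 + crop_w)
    y1 = min(pic_h, y0 + crop_h)
    # If clamping at the right/bottom shrank the region, shift it back.
    x0 = max(0, x1 - crop_w)
    y0 = max(0, y1 - crop_h)
    return x0, y0, x1, y1

# Grow the 4x4 "face" region to a requested 6x6 crop in a 10x10 picture.
crop = adjust_crop((1, 1, 5, 5), 6, 6, 10, 10)
```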
In the picture processing method of some embodiments of the present disclosure, object information, which may include range information and object category information, is first obtained by detecting an object displayed in a target picture. A score map is then constructed according to the object information, and a cropping area is determined according to the sum of the values at positions corresponding to the display ranges of the different objects represented by different object category information. Finally, the processed picture the user requires is obtained according to the picture cropping information and the cropping area. In particular, the score map can represent the importance of each object displayed in the target picture, and the target picture is cropped according to that importance. This makes picture cropping more targeted and improves the user experience.
With further reference to fig. 6, a flow 600 of further embodiments of a picture processing method is shown. The process 600 of the image processing method includes the following steps:
step 601, detecting an object displayed in the target picture to obtain object information.
Step 602, according to the object information, a score map is constructed.
Step 603, determining a cropping area according to the sum of the values at positions corresponding to the display ranges of the different objects represented by different object category information.
in some embodiments, the specific implementation and technical effects of steps 601-603 may refer to steps 501-503 in those embodiments corresponding to fig. 5, which are not described herein again.
At step 604, a direction score vector associated with the direction is generated.
In some embodiments, the direction may be a first direction or a second direction perpendicular to the first direction. The execution subject of the picture processing method (for example, the computing device 101 shown in fig. 1) may sum over the score map along the first or second direction to generate a direction score vector for that direction. Specifically, the number of elements of the direction score vector may equal the number of pixels of the target picture in that direction, and the element value of each element is generated by summing, starting from the corresponding position in the score map, along the direction perpendicular to the chosen direction. As an example, consider fig. 3: if the first direction is the horizontal direction and the values along the vertical direction of the "face" display range in the score map are four values of 100, the corresponding element of the direction score vector is 400. Since the "face" spans 4 pixels in the horizontal direction, it contributes 4 such elements.
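This summation can be sketched with NumPy: summing the score map over rows yields one element per pixel column (the horizontal direction score vector), matching the example in which four vertical values of 100 give an element of 400.

```python
import numpy as np

def direction_score_vector(score_map, horizontal=True):
    """Sum the score map along the axis perpendicular to the chosen
    direction, giving one element per pixel of that direction."""
    return score_map.sum(axis=0 if horizontal else 1)

score_map = np.zeros((10, 10), dtype=np.int64)
score_map[1:5, 1:5] = 100   # 4x4 "face" display range
vec = direction_score_vector(score_map, horizontal=True)
```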
Step 605, determining a direction cropping width related to the direction according to the picture cropping information.
In some embodiments, the picture cropping information may include, but is not limited to, a cropping width and a cropping ratio. The execution subject determines the direction cropping width according to the cropping width and the direction. The execution subject may also determine the cropping width according to the cropping ratio. As an example, when the picture cropping ratio is width:height = 1:2 and the original picture is 200 mm wide by 100 mm high, the picture cropping width is 100 mm / 2 × 1 = 50 mm. In this way, the cropping width in the direction is determined according to the direction.
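The worked example above can be expressed as a small helper; the function and parameter names are illustrative.

```python
def direction_crop_width(ratio_w, ratio_h, size_perpendicular):
    """Cropping width in a direction from a width:height ratio and the
    picture's size in the perpendicular direction."""
    return size_perpendicular / ratio_h * ratio_w

# Ratio 1:2 on a picture 100 mm high gives a 50 mm cropping width.
width = direction_crop_width(1, 2, 100)
```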
Step 606, determining a first clipping position and a second clipping position of the target picture in the direction.
In some embodiments, the execution subject may determine the section of the direction score vector in which the cropping area lies. The execution subject may determine the first and second clipping positions by extending that section along the direction and its opposite according to the direction cropping width. For example, consider fig. 3. As shown in fig. 3, the "face" score map 106 is determined as the cropping area. The section of the direction score vector in which the cropping area lies may be [400, 400, 400, 400]. If the width this section represents is smaller than the direction cropping width, the section is extended to the left and right until its width matches the direction cropping width, and the first and second clipping positions are then determined.
In an optional implementation of some embodiments, the execution subject may instead determine the first and second clipping positions according to the direction cropping width and the sum of the element values of the direction score vector enclosed between them. Specifically, the execution subject may determine a first and a second clipping position to be adjusted in the direction according to the direction cropping width, then adjust them so that the sum of the element values of the direction score vector they enclose is maximal, and finally fix the first and second clipping positions.
Step 607, cropping the target picture according to the first clipping position and the second clipping position to generate a cropped picture.
In some embodiments, the execution subject crops the target picture to the area determined by the first and second clipping positions, generating a cropped picture.
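The optional maximization described in step 606 — choosing the clipping positions whose enclosed direction scores sum to the maximum — can be sketched as a sliding window; the vector values below are illustrative.

```python
import numpy as np

def best_clip_positions(direction_scores, crop_width):
    """Slide a window of the direction cropping width along the direction
    score vector and keep the start whose enclosed sum is largest."""
    n = len(direction_scores)
    best_start, best_sum = 0, -1
    for start in range(n - crop_width + 1):
        window_sum = int(np.sum(direction_scores[start:start + crop_width]))
        if window_sum > best_sum:
            best_start, best_sum = start, window_sum
    return best_start, best_start + crop_width  # first and second positions

# A 10-element vector with a high-scoring "face" section near the left.
vec = np.array([0, 400, 400, 400, 400, 0, 0, 120, 120, 120])
first, second = best_clip_positions(vec, 6)
```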
In some embodiments of the present disclosure, the picture processing method first detects an object displayed in a target picture to obtain object information, then constructs a score map according to the object information and determines the cropping area. A direction score vector for the direction is then generated, a direction cropping width related to the direction is determined from the picture cropping information, and the first and second clipping positions of the target picture in the direction are determined. Finally, the target picture is cropped according to the first and second clipping positions to generate a cropped picture. Determining the clipping positions from the cropping information and the generated direction score vector allows the cropping extent in the direction to be located accurately according to the importance the different objects represent, improving cropping accuracy. This accurately satisfies the user's cropping requirements and thus improves the user experience.
With further reference to fig. 7, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a picture processing apparatus, which correspond to those shown in fig. 5, and which may be applied in various electronic devices in particular.
As shown in fig. 7, a picture processing apparatus 700 of some embodiments includes a detection unit 701, a construction unit 702, a determination unit 703, and a first generation unit 704. The detection unit 701 is configured to detect an object displayed in a target picture to obtain object information, the object information comprising range information characterizing the display range of the object and object category information; the construction unit 702 is configured to construct a score map according to the object information; the determination unit 703 is configured to determine a cropping area according to the sum of the values at positions corresponding to the display ranges of the different objects represented by different object category information; and the first generation unit 704 is configured to crop the target picture according to the picture cropping information and the score map to generate a cropped picture.
In an optional implementation of some embodiments, the picture processing apparatus 700 further includes a second generation unit configured to generate a direction score vector related to a direction, the direction being a first direction or a second direction perpendicular to the first direction, wherein the number of elements of the direction score vector equals the number of pixels of the target picture in the direction, and the element value of each element is generated by summing, starting from the corresponding position in the score map, along the direction perpendicular to the chosen direction.
In an optional implementation of some embodiments, the first generation unit 704 in the picture processing apparatus 700 is further configured to: determine a direction cropping width related to the direction according to the picture cropping information; extend the cropping area along the direction and its opposite according to the direction cropping width to determine a first clipping position and a second clipping position of the target picture in the direction; and crop the target picture according to the first and second clipping positions to generate a cropped picture.
In an optional implementation of some embodiments, the first generation unit 704 in the picture processing apparatus 700 is further configured to: determine a direction cropping width related to the direction according to the picture cropping information; determine the first and second clipping positions according to the direction cropping width and the sum of the element values of the direction score vector enclosed between a first and a second clipping position of the target picture in the direction; and crop the target picture according to the first and second clipping positions to generate a cropped picture.
In some embodiments, specific implementations of the detecting unit 701, the constructing unit 702, the determining unit 703 and the first generating unit 704 included in the image processing apparatus 700 and technical effects brought by the specific implementations may refer to the embodiment corresponding to fig. 5, and are not described herein again.
Referring now to fig. 8, a block diagram of an electronic device 800 (e.g., the computing device 101 of fig. 1) suitable for implementing some embodiments of the present disclosure is shown. The electronic device in some embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as stationary terminals such as digital TVs and desktop computers. The electronic device shown in fig. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 8, an electronic device 800 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 802 or a program loaded from a storage apparatus 808 into a random access memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic device 800 are also stored. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; output devices 807 including, for example, a liquid crystal display (LCD), speaker, vibrator, and the like; storage devices 808; and communication devices 809. The communication devices 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 8 illustrates the electronic device 800 as having various devices, it is to be understood that not all of the illustrated devices are required to be implemented or provided.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through communications device 809, or installed from storage device 808, or installed from ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communications networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: detect an object displayed in a target picture to obtain object information, wherein the object information comprises range information and object category information used for representing the display range of the object; construct, according to the object information, a score map in which the numerical values at the corresponding positions of the display ranges of objects of different categories are different; determine a cutting area according to the sum of numerical values at corresponding positions of display ranges of different objects represented by different object category information; and cut the target picture according to the picture cutting information and the score map to generate a cut picture.
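The detection, score-map, and cutting-area steps described above can be sketched in Python. This is only an illustrative reading: the per-category score values, the function names, and the bounding-box interpretation of "cutting area" are assumptions, as the disclosure does not fix any of them.

```python
import numpy as np

# Illustrative per-category score values; the disclosure only requires that
# different object categories map to different numerical values.
CATEGORY_SCORES = {"face": 3.0, "text": 2.0, "other": 1.0}

def build_score_map(height, width, detections):
    """Accumulate each detected object's category score over its display
    range (x0, y0, x1, y1) in a zero-initialized score map."""
    score_map = np.zeros((height, width), dtype=np.float32)
    for (x0, y0, x1, y1), category in detections:
        score_map[y0:y1, x0:x1] += CATEGORY_SCORES[category]
    return score_map

def cutting_area(score_map):
    """One plausible reading of 'determining a cutting area': the bounding
    box (x0, y0, x1, y1) of all positions with a nonzero score sum."""
    ys, xs = np.nonzero(score_map)
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1
```

Where object displays overlap, the scores add, so positions covered by several objects carry a larger sum, which is consistent with determining the cutting area from the sum of numerical values.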
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes a detection unit, a construction unit, and a first generation unit. The names of these units do not form a limitation on the units themselves in some cases, and for example, the detection unit may also be described as a "unit that detects an object displayed in a target picture to obtain object information".
For example, without limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
According to one or more embodiments of the present disclosure, there is provided a picture processing method including: detecting an object displayed in a target picture to obtain object information, wherein the object information comprises range information and object type information used for representing the display range of the object; according to the object information, constructing a score map, wherein the numerical values at the corresponding positions of the display ranges of the objects of different classes in the score map are different; determining a cutting area according to the sum of numerical values at corresponding positions of display ranges of different objects represented by different object category information; and cutting the target picture according to the picture cutting information and the score map to generate a cut picture.
In accordance with one or more embodiments of the present disclosure, the method further comprises: generating a direction score vector associated with a direction, the direction being a first direction or a second direction perpendicular to the first direction, wherein a number of elements of the direction score vector is equal to a number of pixels of the target picture in the direction, and wherein element values of the elements of the direction score vector are generated by summing up in a direction perpendicular to the direction from a position in the score map corresponding to the element.
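A minimal sketch of the direction score vector, assuming the score map is a NumPy array indexed as (row, column); the function name and the string labels for the two directions are illustrative only:

```python
import numpy as np

def direction_score_vector(score_map, direction):
    """Sum the score map perpendicular to `direction`, so the vector has
    one element per pixel of the target picture along that direction."""
    if direction == "horizontal":        # one element per pixel column
        return score_map.sum(axis=0)
    elif direction == "vertical":        # one element per pixel row
        return score_map.sum(axis=1)
    raise ValueError(direction)
```

Each element thus aggregates, starting from the position corresponding to that element, all score-map values along the perpendicular direction, as stated above.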
According to one or more embodiments of the present disclosure, clipping the target picture according to the picture clipping information and the score map to generate a clipped picture includes: determining a direction cutting width related to the direction according to the picture cutting information; extending the cutting area along the direction and the direction opposite to the direction according to the direction cutting width so as to determine a first cutting position and a second cutting position of the target picture in the direction; and cutting the target picture according to the first cutting position and the second cutting position to generate a cut picture.
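The symmetric extension of the cutting area can be sketched as follows. The clamping behavior at the picture edges and the function name are assumptions; the paragraph above only specifies extending along the direction and its opposite by the direction cutting width.

```python
def extend_cutting_area(area_start, area_end, crop_width, picture_extent):
    """Extend the cutting area equally along the direction and its opposite
    until it spans `crop_width` pixels, clamping to the picture bounds."""
    extra = max(0, crop_width - (area_end - area_start))
    first = area_start - extra // 2
    second = first + max(crop_width, area_end - area_start)
    # Shift the window back inside the picture if either end ran past an edge.
    if first < 0:
        second -= first
        first = 0
    if second > picture_extent:
        first = max(0, first - (second - picture_extent))
        second = picture_extent
    return first, second
```

The returned pair corresponds to the first and second cutting positions of the target picture in the chosen direction.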
According to one or more embodiments of the present disclosure, clipping the target picture according to the picture clipping information and the score map to generate a clipped picture includes: determining a direction cutting width related to the direction according to the picture cutting information; determining the first clipping position and the second clipping position according to the direction clipping width and the sum of element values of elements of the direction score vector included in a first clipping position and a second clipping position of the target picture in the direction; and cutting the target picture according to the first cutting position and the second cutting position to generate a cut picture.
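The alternative rule above, which picks the clipping positions from the sum of the direction score vector's element values between them, can be sketched as a sliding window. Reading "according to the sum" as maximizing that sum, and the function name, are assumptions.

```python
import numpy as np

def best_clipping_window(score_vector, crop_width):
    """Slide a window of `crop_width` elements over the direction score
    vector and return the (first, second) clipping positions whose
    enclosed element-value sum is largest."""
    sums = np.convolve(score_vector, np.ones(crop_width), mode="valid")
    first = int(np.argmax(sums))
    return first, first + crop_width
```

`np.convolve` with a ones kernel in "valid" mode yields every window sum in one pass, so the window with the largest total score is found directly.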
According to one or more embodiments of the present disclosure, there is provided a picture processing apparatus including: a detection unit configured to detect an object displayed in a target picture to obtain object information, wherein the object information comprises range information and object category information used for representing the display range of the object; a construction unit configured to construct a score map according to the object information; a determining unit configured to determine a cutting area according to the sum of numerical values at positions corresponding to display ranges of different objects represented by different object category information; and a first generating unit configured to cut the target picture according to the picture cutting information and the score map to generate a cut picture.
According to one or more embodiments of the present disclosure, a picture processing apparatus further includes: a second generation unit configured to generate a direction score vector relating to a direction, the direction being a first direction or a second direction perpendicular to the first direction, wherein the number of elements of the direction score vector is equal to the number of pixel points of the target picture in the direction, and element values of the elements of the direction score vector are generated by summing up in a direction perpendicular to the direction from a position corresponding to the element in the score map.
According to one or more embodiments of the present disclosure, the first generating unit in the picture processing apparatus is further configured to determine, according to the picture cropping information, a direction cropping width associated with the direction; extending the cutting area along the direction and the direction opposite to the direction according to the direction cutting width so as to determine a first cutting position and a second cutting position of the target picture in the direction; and cutting the target picture according to the first cutting position and the second cutting position to generate a cut picture.
According to one or more embodiments of the present disclosure, the first generating unit in the picture processing apparatus is further configured to: determine a direction cutting width related to the direction according to the picture cutting information; determine the first clipping position and the second clipping position according to the direction cutting width and the sum of the element values of the elements of the direction score vector included between the first clipping position and the second clipping position of the target picture in the direction; and cut the target picture according to the first clipping position and the second clipping position to generate a cut picture.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the embodiments above.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as described in any of the embodiments above.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the present disclosure is not limited to technical solutions formed by the specific combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept. For example, a technical solution may be formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.

Claims (10)

1. A picture processing method comprises the following steps:
detecting an object displayed in a target picture to obtain object information, wherein the object information comprises range information and object category information used for representing the display range of the object;
according to the object information, constructing a score map, wherein numerical values at corresponding positions of display ranges of objects of different classes in the score map are different;
determining a cutting area according to the sum of numerical values at corresponding positions of display ranges of different objects represented by different object category information;
and cutting the target picture according to the picture cutting information and the cutting area to generate a cut picture.
2. The method of claim 1, wherein the method further comprises:
generating a direction score vector related to a direction, the direction being a first direction or a second direction perpendicular to the first direction, wherein the number of elements of the direction score vector is equal to the number of pixel points of the target picture in the direction, and element values of elements of the direction score vector are generated by summing up in a direction perpendicular to the direction starting from a position corresponding to the element in the score map.
3. The method according to claim 2, wherein the cropping the target picture according to the picture cropping information and the cropping area to generate a cropped picture, comprises:
determining a direction cutting width related to the direction according to the picture cutting information;
extending the cutting area along the direction and the direction opposite to the direction according to the direction cutting width so as to determine a first cutting position and a second cutting position of the target picture in the direction;
and cutting the target picture according to the first cutting position and the second cutting position to generate a cut picture.
4. The method according to claim 2, wherein the cropping the target picture according to the picture cropping information and the cropping area to generate a cropped picture, comprises:
determining a direction cutting width related to the direction according to the picture cutting information;
determining a first clipping position and a second clipping position according to the direction clipping width and the sum of element values of elements of the direction score vector included in the first clipping position and the second clipping position of the target picture in the direction;
and cutting the target picture according to the first cutting position and the second cutting position to generate a cut picture.
5. A picture processing apparatus comprising:
the detection unit is configured to detect an object displayed in a target picture to obtain object information, wherein the object information comprises range information and object category information used for representing the display range of the object;
a construction unit configured to construct a score map in which numerical values at corresponding positions of display ranges of objects of different categories are different, according to the object information;
the determining unit is configured to determine a cutting area according to the sum of numerical values at positions corresponding to display ranges of different objects represented by different object category information;
and the first generating unit is configured to cut the target picture according to the picture cutting information and the cutting area, and generate a cut picture.
6. The apparatus of claim 5, wherein the apparatus further comprises:
a second generating unit configured to generate a direction score vector relating to a direction, the direction being a first direction or a second direction perpendicular to the first direction, wherein the number of elements of the direction score vector is equal to the number of pixel points of the target picture in the direction, and element values of the elements of the direction score vector are generated by summing up in a direction perpendicular to the direction from a position corresponding to the element in the score map.
7. The apparatus of claim 6, wherein the first generating unit is further configured to:
determining a direction cutting width related to the direction according to the picture cutting information;
extending the cutting area along the direction and the direction opposite to the direction according to the direction cutting width so as to determine a first cutting position and a second cutting position of the target picture in the direction;
and cutting the target picture according to the first cutting position and the second cutting position to generate a cut picture.
8. The apparatus of claim 6, wherein the first generating unit is further configured to:
determining a direction cutting width related to the direction according to the picture cutting information;
determining a first clipping position and a second clipping position according to the direction clipping width and the sum of element values of elements of the direction score vector included in the first clipping position and the second clipping position of the target picture in the direction;
and cutting the target picture according to the first cutting position and the second cutting position to generate a cut picture.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-4.
CN202010251636.9A 2020-04-01 2020-04-01 Picture processing method and device, electronic equipment and computer readable medium Active CN111461965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010251636.9A CN111461965B (en) 2020-04-01 2020-04-01 Picture processing method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010251636.9A CN111461965B (en) 2020-04-01 2020-04-01 Picture processing method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN111461965A true CN111461965A (en) 2020-07-28
CN111461965B CN111461965B (en) 2023-03-21

Family

ID=71684357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010251636.9A Active CN111461965B (en) 2020-04-01 2020-04-01 Picture processing method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111461965B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111916050A (en) * 2020-08-03 2020-11-10 北京字节跳动网络技术有限公司 Speech synthesis method, speech synthesis device, storage medium and electronic equipment
CN113256660A (en) * 2021-06-04 2021-08-13 北京有竹居网络技术有限公司 Picture processing method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504649A (en) * 2014-12-30 2015-04-08 百度在线网络技术(北京)有限公司 Picture cutting method and device
US20160104055A1 (en) * 2014-10-09 2016-04-14 Adobe Systems Incorporated Image Cropping Suggestion Using Multiple Saliency Maps
US20160189343A1 (en) * 2014-12-31 2016-06-30 Yahoo! Inc. Image cropping
WO2016101767A1 (en) * 2014-12-24 2016-06-30 北京奇虎科技有限公司 Picture cropping method and device and image detecting method and device
CN110298380A (en) * 2019-05-22 2019-10-01 北京达佳互联信息技术有限公司 Image processing method, device and electronic equipment


Also Published As

Publication number Publication date
CN111461965B (en) 2023-03-21

Similar Documents

Publication Publication Date Title
CN113191726B (en) Task detail interface display method, device, equipment and computer readable medium
CN111459364B (en) Icon updating method and device and electronic equipment
CN111461968B (en) Picture processing method, device, electronic equipment and computer readable medium
CN111461967B (en) Picture processing method, device, equipment and computer readable medium
CN115600629B (en) Vehicle information two-dimensional code generation method, electronic device and computer readable medium
CN111190520A (en) Menu item selection method and device, readable medium and electronic equipment
CN111784712A (en) Image processing method, device, equipment and computer readable medium
CN111461965B (en) Picture processing method and device, electronic equipment and computer readable medium
CN111782329A (en) Node dragging method, device, equipment and computer readable medium
CN110851032A (en) Display style adjustment method and device for target device
CN111461969B (en) Method, device, electronic equipment and computer readable medium for processing picture
CN112183388A (en) Image processing method, apparatus, device and medium
CN111461964B (en) Picture processing method, device, electronic equipment and computer readable medium
CN115272760A (en) Small sample smoke image fine classification method suitable for forest fire smoke detection
CN112764629B (en) Augmented reality interface display method, device, equipment and computer readable medium
CN115619904A (en) Image processing method, device and equipment
CN114419298A (en) Virtual object generation method, device, equipment and storage medium
CN111489286B (en) Picture processing method, device, equipment and medium
CN111835917A (en) Method, device and equipment for showing activity range and computer readable medium
CN111815654A (en) Method, apparatus, device and computer readable medium for processing image
CN110991312A (en) Method, apparatus, electronic device, and medium for generating detection information
CN110807164A (en) Automatic image area adjusting method and device, electronic equipment and computer readable storage medium
CN112346630B (en) State determination method, device, equipment and computer readable medium
CN111461966B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN112884787B (en) Image clipping method and device, readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

GR01 Patent grant