WO2021008456A1 - Image processing method, apparatus, electronic device and storage medium - Google Patents

Image processing method, apparatus, electronic device and storage medium

Info

Publication number
WO2021008456A1
WO2021008456A1 PCT/CN2020/101341 CN2020101341W WO2021008456A1 WO 2021008456 A1 WO2021008456 A1 WO 2021008456A1 CN 2020101341 W CN2020101341 W CN 2020101341W WO 2021008456 A1 WO2021008456 A1 WO 2021008456A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
area
target image
position point
image
Prior art date
Application number
PCT/CN2020/101341
Other languages
English (en)
French (fr)
Inventor
李马丁 (Li Mading)
郑云飞 (Zheng Yunfei)
于冰 (Yu Bing)
Original Assignee
北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Publication of WO2021008456A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/04 - Context-preserving transformations, e.g. by using an importance map
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10024 - Color image
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Definitions

  • The present disclosure relates to the field of information processing technology, and in particular to an image processing method, apparatus, electronic device, and storage medium.
  • In a user's daily life and work, it is often necessary to crop a target image to suit various display needs.
  • One such scenario is uploading the target image to an application (app), such as WeChat, and displaying it as the user's avatar.
  • In some cases, the shape of the target image does not meet the requirements of the application; for example, the target image is not a square.
  • When uploading the target image to the application as an avatar, the avatar is often required to be square, so the target image needs to be cropped into a square.
  • In related technologies, there are two cropping methods. The first is manual cropping by the user; this method is time-consuming, inefficient, and unsuitable for cropping a large number of target images. The second is directly cropping the periphery of the target image and retaining only its center area for display; this method easily crops away areas the user needs to keep, so the cropping effect is poor.
  • The present disclosure provides an image processing method, apparatus, electronic device, and storage medium, so as to at least solve the problems in related technologies that cropping a target image is time-consuming and inefficient and that the cropping effect is poor.
  • the technical solutions of the present disclosure are as follows:
  • an image processing method is provided, including:
  • detecting a target area in the target image;
  • determining a first position point in the target image according to the target area, where the first position point is used to indicate a position point in the target area that is related to the detection method of the target area;
  • cropping the target image according to the first position point and cropping range information, where the cropping range information is used to indicate the size of an area that needs to be reserved in the target image.
  • the step of cropping the target image according to the first position point and the cropping range information includes:
  • a second position point that satisfies the target condition is determined in the target image, so that the first position point and the second position point are separated by a preset distance, and the preset distance is greater than 0;
  • the target image is cropped according to the second position point and the cropping range information.
  • the step of cropping the target image according to the second position point and the cropping range information includes:
  • the step of cropping the target image according to the second position point and the cropping range information includes:
  • the step of detecting the target area in the target image includes:
  • a salient area in the target image is identified, and the salient area is determined as the target area.
  • the step of detecting the target area in the target image includes:
  • the step of detecting the target area in the target image includes:
  • the target object is filtered out from the at least one object according to a preset condition, and the area included in the detection frame corresponding to the target object is determined as the target area.
  • the target image includes a plurality of objects
  • the step of cropping the target image according to the first position point and the cropping range information includes:
  • according to the position of the detection frame corresponding to the target object and the positions of the detection frames corresponding to objects other than the target object, at least one of the first position point and the size indicated by the cropping range information is adjusted.
  • the step of determining a first position point in the target image according to the target area includes any one of the following:
  • an image processing device including:
  • the target area detection module is configured to detect the target area in the target image
  • the position point determination module is configured to determine a first position point in the target image according to the target area, where the first position point is used to indicate a position point in the target area that is related to the detection method of the target area;
  • the target image cropping module is configured to crop the target image according to the first position point and cropping range information, where the cropping range information is used to indicate the size of an area that needs to be reserved in the target image.
  • the target image cropping module includes:
  • An area ratio detection sub-module configured to detect the area ratio of the target area in the target image
  • a first determining sub-module configured to determine the first location point as the second location point when the area ratio of the target area in the target image is greater than or equal to a set threshold
  • the second determining submodule is configured to determine a second position point in the target image that satisfies the target condition when the area ratio of the target region in the target image is less than the set threshold, so that the first position point and the second position point are separated by a preset distance, and the preset distance is greater than 0;
  • the target image cropping submodule is configured to crop the target image according to the second position point and the cropping range information.
  • the target image cropping sub-module is configured to perform enlargement processing on the target image according to a target enlargement ratio, and the target enlargement ratio is determined by the area ratio of the target area in the target image; Using the second position point as a center, crop the enlarged target image according to the size indicated by the cropping range information.
  • the target image cropping submodule is configured to adjust the size indicated by the cropping range information according to a target reduction ratio, where the target reduction ratio is determined by the area ratio of the target area in the target image, and to crop the target image, with the second position point as the center, according to the adjusted size indicated by the cropping range information.
  • the target area detection module includes:
  • An object detection sub-module configured to perform object detection on the target image
  • the target object screening submodule is configured to, when at least one object is detected in the target image, filter out the target object from the at least one object according to a preset condition, and determine the area included in the detection frame corresponding to the target object as the target area;
  • the salient area recognition sub-module is configured to identify a salient area in the target image when no object is detected in the target image, and determine the salient area as a target area.
  • the target area detection module includes:
  • the salient area identification sub-module is configured to identify the salient area in the target image and determine the salient area as the target area.
  • the target area detection module includes:
  • the object detection sub-module is configured to perform object detection on the target image to obtain at least one object
  • the target object screening sub-module is configured to filter out a target object from the at least one object according to a preset condition, and determine an area included in a detection frame corresponding to the target object as a target area.
  • the target image includes a plurality of objects
  • the target image cropping module is configured to adjust at least one of the first position point and the size indicated by the cropping range information based on the position of the detection frame corresponding to the target object and the positions of the detection frames corresponding to objects other than the target object.
  • the position point determination module is configured to perform any one of the following steps:
  • an electronic device is provided, including:
  • a processor;
  • a memory for storing program code executable by the processor;
  • the processor is configured to execute:
  • the target image is cropped according to the first position point and cropping range information, where the cropping range information is used to indicate the size of an area that needs to be reserved in the target image.
  • the processor is further configured to execute:
  • a second position point that satisfies the target condition is determined in the target image, so that the first position point and the second position point are separated by a preset distance, and the preset distance is greater than 0;
  • the target image is cropped according to the second position point and the cropping range information.
  • the processor is further configured to execute:
  • the processor is further configured to execute:
  • the processor is further configured to execute:
  • a salient area in the target image is identified, and the salient area is determined as the target area.
  • the processor is further configured to execute:
  • the processor is further configured to execute:
  • the target object is filtered out from the at least one object according to a preset condition, and the area included in the detection frame corresponding to the target object is determined as the target area.
  • the processor is further configured to execute:
  • according to the position of the detection frame corresponding to the target object and the positions of the detection frames corresponding to objects other than the target object, at least one of the first position point and the size indicated by the cropping range information is adjusted.
  • the processor is further configured to execute any of the following steps:
  • a program code is provided which, when executed, implements the above-mentioned image processing method.
  • a storage medium is provided, where, when the program code in the storage medium is executed by a processor of an electronic device, the electronic device is enabled to execute the above-mentioned image processing method.
  • a computer program product is provided, where, when the program code in the computer program product is executed by a processor of an electronic device, the electronic device can execute the above-mentioned image processing method.
  • By detecting the target area in the target image and determining, according to the target area, a first position point related to the detection method of the target area, a point that the user may be interested in is obtained. The first position point is combined with the cropping range information indicating the size of the area that needs to be retained in the target image, and the target image is cropped automatically, so that the cropped image includes the area the user needs to retain; the user no longer needs to crop manually, and the cropping effect better meets the user's needs.
  • Fig. 1 is a flowchart showing an image processing method according to an exemplary embodiment
  • Fig. 2 is a flow chart showing another image processing method according to an exemplary embodiment
  • Fig. 3 is a schematic diagram showing cropping of a target image according to an exemplary embodiment
  • Fig. 4 is a block diagram showing an image processing device according to an exemplary embodiment
  • Fig. 5 is a block diagram showing another image processing device according to an exemplary embodiment
  • Fig. 6 is a block diagram showing a terminal according to an exemplary embodiment
  • Fig. 7 is a block diagram showing a server according to an exemplary embodiment.
  • the embodiments of the present disclosure are applied to various scenes where the target image needs to be cropped.
  • For example, the target image needs to be cropped into a square to be displayed as an avatar; or, a large number of target images are uploaded to a designated application for arrangement and display.
  • Each target image to be displayed needs to be cut into a desired shape.
  • For another example, a sharing link includes text and a target image, and the target image in the sharing link also needs to be cropped.
  • Fig. 1 is a flow chart showing an image processing method according to an exemplary embodiment. As shown in Fig. 1, the method may include the following steps:
  • step S101 a target area in the target image is detected.
  • the target image refers to the image to be cropped.
  • In practice, the target image may be a video frame or a static image, and the target image is detected to identify the target area in it.
  • the target area may be an area where the target object in the target image is located or a salient area in the target image.
  • For example, the Faster-RCNN algorithm (Faster Regions with CNN features), the SSD algorithm (Single Shot MultiBox Detector), etc. can be used.
  • The area where the target object in the target image is located, that is, the area included in the detection frame corresponding to the target object, is used as the target area.
  • Saliency detection is a method of calculating the saliency of an image by analyzing characteristics such as color, intensity, and orientation to generate a saliency map of the image. The saliency map is a grayscale image that is the same size as the original image (i.e., the target image) or proportionally scaled down; each pixel in the grayscale image is expressed by a specific gray value, and different gray values indicate different degrees of saliency.
  • the salient area and the insignificant area in the target image can be distinguished according to the gray value, and the salient area is determined as the target area.
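  • For illustration only (the disclosure does not prescribe a specific saliency algorithm), the following Python sketch uses OpenCV's spectral-residual saliency detector from opencv-contrib as a stand-in, thresholds the resulting grayscale saliency map with the 200-to-255 gray range used in a later example, and returns the bounding rectangle of the salient pixels as the target area; the function name and range choice are assumptions:

```python
import cv2
import numpy as np

def detect_salient_area(image_bgr):
    # Spectral-residual saliency (requires opencv-contrib-python); this is a
    # stand-in for the unspecified saliency detection of the disclosure.
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(image_bgr)  # float map in [0, 1]
    if not ok:
        return None
    gray = (saliency_map * 255).astype("uint8")  # grayscale saliency map
    # Pixels whose gray value falls inside the preset range count as salient.
    mask = cv2.inRange(gray, 200, 255)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    # Bounding rectangle (x, y, w, h) of the salient pixels = target area.
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```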
  • step S102 a first position point is determined in the target image according to the target area, and the first position point is used to indicate a position point in the target area that is related to the detection method of the target area.
  • the first position point related to the detection method of the target area can be determined in the target image.
  • In response to the target area being obtained based on object detection, the first position point may be the center of the target area; in response to the target area being obtained based on saliency detection, the first position point may be the center of gravity of the target area.
  • the first position point may also be any object feature point in the target area.
  • For example, when the target area is the area where a face is located, that is, the area included in the detection frame corresponding to the face, the object feature points can be the eyes or the nose of the face, and the eyes or the nose are taken as the first position point.
  • the first location point may be the center location point of the area that the user pays attention to, and the first location point may also be called the attention center point.
  • The first position point can be used as the center position during cropping, that is, the cropping center; the cropping center can also be determined according to the attention center point, that is, the first position point. In other words, the cropping center can be set flexibly according to the attention center point, and the cropping center may or may not coincide with the attention center point.
  • the area ratio of the target area in the target image can be used to determine whether the cropping center coincides with the focus center point.
  • The cropping range information is used to indicate the size of the area that needs to be reserved in the target image.
  • the area that needs to be reserved can be indicated by a cropping frame, which can be determined by the first position point and the cropping range information, and is used to show the user the area reserved after cropping.
  • the area within the cropping frame is the area that needs to be reserved in the target image, that is, the area reserved after cropping, and the area outside the cropping frame is the area that is cropped in the target image.
  • step S103 the target image is cropped according to the first position point and the cropping range information.
  • The specific position and size of the cropping frame can be determined according to the cropping center, that is, the first position point, and the cropping range information; the cropping frame is then used to crop the target image, so that the cropped target image includes the area the user needs to keep, and the cropping effect better meets the user's needs.
  • the shape of the cropping frame can be a square, a circle, a regular hexagon, etc.
  • Correspondingly, the shape of the target image cropped with such a frame is also a square, a circle, a regular hexagon, and so on.
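  • As a minimal sketch of this cropping step, assuming a square cropping frame and image data in a NumPy/OpenCV array, the frame is centered on the given position point and shifted just enough to stay inside the image; the function name and clamping policy are illustrative assumptions:

```python
def crop_around_point(image, center_xy, crop_size):
    """Crop a square of side crop_size centered on center_xy, shifting the
    frame as needed so it stays fully inside the image array."""
    h, w = image.shape[:2]
    cx, cy = center_xy
    x0 = min(max(cx - crop_size // 2, 0), max(w - crop_size, 0))
    y0 = min(max(cy - crop_size // 2, 0), max(h - crop_size, 0))
    return image[y0:y0 + crop_size, x0:x0 + crop_size]
```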
  • The first position point is combined with the cropping range information used to indicate the size of the area that needs to be reserved in the target image, and the target image is cropped automatically, so that the cropped target image includes the area the user needs to retain; the user no longer needs to crop manually, and the cropping effect better meets the user's needs.
  • Fig. 2 is a flow chart showing another image processing method according to an exemplary embodiment. As shown in Fig. 2, the method may include the following steps:
  • step S201 object detection is performed on the target image.
  • image detection technology can be used to detect objects in the target image and identify objects in the target image.
  • The image detection technology can use the Faster-RCNN algorithm (Faster Regions with CNN features), the SSD algorithm (Single Shot MultiBox Detector), etc., which is not limited in the embodiments of the present disclosure.
  • the objects in the target image can include multiple types, such as human faces, animals, plants, etc.
  • the detected objects can also be as fine as the feature points of the human face.
  • the target image includes Little Boy 01, Little Girl 02, and Seesaw 03.
  • the image detection technology is used to detect three objects in the target image, namely, the face of Little Boy 01, the face of Little Girl 02, and the seesaw.
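  • The disclosure names Faster-RCNN and SSD, but any detector that outputs rectangular detection frames fills the same role. As a lightweight, hedged stand-in, this sketch uses OpenCV's bundled Haar cascade face detector, which returns frames in the same (x, y, w, h) form:

```python
import cv2

# Stand-in detector: any model emitting rectangular detection frames works here.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_objects(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Each detection frame is an (x, y, w, h) rectangle framing one object.
    return list(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))
```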
  • step S202 when at least one object is detected in the target image, the target object is filtered out from the at least one object according to preset conditions, and the area where the target object is located, that is, the area included in the detection frame corresponding to the target object, is determined as the target area.
  • one or more detection frames are output.
  • the detection frames are usually rectangular, and may also be referred to as rectangular frames.
  • The rectangular frame is used to frame an object in the target image, and at least one object that the user most needs to keep is selected as the target object according to the preset condition; the area included in the rectangular frame corresponding to the selected target object, that is, the area where the target object is located, is the target area.
  • For example, in Fig. 3, the detected target image includes 3 objects, namely the face of little boy 01, the face of little girl 02, and the fulcrum of seesaw 03. The corresponding 3 rectangular frames are rectangular frame S1, rectangular frame S2, and rectangular frame S3: rectangular frame S1 frames the face of little boy 01, rectangular frame S2 frames the face of little girl 02, and rectangular frame S3 frames the fulcrum of seesaw 03. When the preset condition is to select the object whose corresponding rectangular frame has the largest area, the object corresponding to the largest rectangular frame S3, that is, the fulcrum of seesaw 03, is selected from the three objects as the target object, and the area included in rectangular frame S3 is determined as the target area.
  • In another example, the target image includes 5 objects, namely face 1, face 2, face 3, face 4, and face 5. Face 1, face 2, and face 3 are relatively close to each other; face 4 is far away from face 1, face 2, face 3, and face 5; and face 5 is also far from face 1, face 2, and face 3. Therefore, any object is selected from face 1, face 2, and face 3 as the target object. If face 2 is selected as the target object, the area where face 2 is located, that is, the area included in the rectangular frame corresponding to face 2, is used as the target area.
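  • A sketch of this screening step under the largest-area preset condition from the Fig. 3 example (other preset conditions, such as the proximity-based one above, would swap in a different selection rule):

```python
def select_target_area(detection_frames):
    """Preset condition from the Fig. 3 example: keep the object whose
    detection frame (x, y, w, h) has the largest area; that frame's
    enclosed area becomes the target area."""
    if not detection_frames:
        return None
    return max(detection_frames, key=lambda f: f[2] * f[3])
```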
  • step S203 when no object is detected in the target image, a salient area in the target image is identified, and the salient area is determined as the target area.
  • When object detection is performed on the target image, it is possible that no object is detected, that is, no target object exists in the target image. In this case, saliency detection needs to be performed on the target image to identify the salient area and the non-salient area in the image, and the salient area is determined as the target area.
  • the above step S203 may include the following steps A1 and A2:
  • step A1 the target image is converted into a grayscale image
  • step A2 the area in the grayscale image whose grayscale value is within the preset grayscale range is determined as a salient area.
  • The target image includes multiple pixels. The R (red), G (green), and B (blue) values of each pixel are obtained, and the gray value of the corresponding pixel is calculated from these values according to a preset conversion formula; displaying the pixels according to the calculated gray values converts the target image into a grayscale image. A gray value range can be preset: the area of the grayscale image whose gray values fall within the preset grayscale range is determined as the salient area, and the area whose gray values fall outside this range is determined as the non-salient area.
  • the salient area refers to the area in the target image that can attract more visual attention.
  • the preset grayscale range can be 200 to 255.
  • When the gray value of a pixel in the grayscale image is 230, it is determined as a pixel in the salient area; when the gray value of another pixel is 50, it is determined as a pixel in the non-salient area. Finally, all pixels located in the salient area are collected to obtain the salient area in the target image.
  • Alternatively, a grayscale threshold can be set: the gray value of each pixel whose gray value is greater than the threshold is set to 255, and the gray value of each pixel whose gray value is less than the threshold is set to 0, so that every pixel in the grayscale image has a gray value of either 0 or 255 and the whole image has only two visual effects, black and white. The area where all pixels with a gray value of 255 are located can then be determined as the salient area, and the area where all pixels with a gray value of 0 are located as the non-salient area.
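  • Steps A1 and A2, including the binarization variant, map directly onto OpenCV primitives; the threshold value 200 below is taken from the example range above and is otherwise an assumption:

```python
import cv2

def binarize_salient(image_bgr, gray_threshold=200):
    """Step A1: convert the target image to grayscale. Step A2 (binarized
    variant): pixels above the threshold become 255, the rest 0, so the
    255 region is the salient area and the 0 region is non-salient."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, gray_threshold, 255, cv2.THRESH_BINARY)
    return binary
```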
  • object detection may not be performed on the target image, but saliency detection is directly performed to identify the salient area in the target image, and determine the salient area as the target area.
  • Alternatively, to detect the target area in the target image, it is also possible to perform object detection without performing saliency detection: object detection is performed on the target image to obtain at least one object, the target object is filtered out from the at least one object according to the preset condition, and the area included in the detection frame corresponding to the target object is determined as the target area.
  • step S204 a first position point is determined in the target image according to the target area.
  • the first position point related to the detection method of the target area can be determined in the target image.
  • the first location point may be the center location point of the area that the user pays attention to, and the first location point may also be referred to as the attention center point.
  • the above step S204 may include the following steps A3 or A4:
  • step A3 the center or center of gravity of the target area is determined as the center of interest in the target image.
  • In response to the target area being obtained based on object detection, the first position point may be the center of the target area; in response to the target area being obtained based on saliency detection, the first position point may be the center of gravity of the target area.
  • When the target area is the area where the target object is located, the center of the area included in the detection frame corresponding to the target object is determined as the attention center point; when the target area is a salient area, the center of gravity of the salient area is the attention center point.
  • step A4 any object feature point in the target area is determined as the focus center point of the target image.
  • At least one object feature point of the target area may be detected, and any object feature point of all the object feature points in the target area is determined as the focus center point of the target image.
  • For example, in Fig. 3, the face of little boy 01 is determined as the target object, and the area where the face is located, that is, rectangular frame S1, is determined as the target area. The object feature points in target area S1 include the facial feature points of little boy 01, and the object feature point S11 in target area S1, that is, the nose of little boy 01, is determined as the attention center point.
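  • A sketch of the first branch of steps A3/A4: the detection-frame center when the target area comes from object detection, or the centroid (center of gravity) of the binary salient mask when it comes from saliency detection; the function name is illustrative:

```python
import cv2

def first_position_point(frame=None, salient_mask=None):
    """Center of the detection frame (object-detection case) or centroid of
    the salient mask (saliency-detection case)."""
    if frame is not None:
        x, y, w, h = frame
        return x + w // 2, y + h // 2
    m = cv2.moments(salient_mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```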
  • step S205 the area proportion of the target area in the target image is detected.
  • For example, in Fig. 3, the target area is the area S1 where the face of little boy 01 is located. If the detected area of S1 is 4 mm² and the area of the target image is 120 mm², the area ratio of the target area S1 in the target image is 1/30.
  • step S206 when the area ratio of the target area in the target image is greater than or equal to the set threshold, the first position point is determined as the second position point, and the second position point is used to indicate the center position of the area that needs to be reserved.
  • That is, the attention center point, i.e., the first position point, can be directly determined as the center position of the area to be retained when cropping, i.e., the second position point. Since the area reserved during cropping can be indicated by the cropping frame, the second position point can also be referred to as the cropping center of the cropping frame. Subsequently, a complete target area can be cropped directly based on the second position point.
  • The set threshold can be set manually, or it can be determined according to the ratio between the area of the cropping frame and the area of the target image. In this case, the attention center point and the cropping center coincide, that is, the first position point and the second position point coincide.
  • For example, if the threshold is set to 1/50 and, as in Fig. 3, the area ratio in the target image of the area S1 where the face of little boy 01 is located (that is, the target area) is 1/30, then the area ratio of the target area S1 in the target image is greater than the set threshold. The nose S11 of little boy 01 in target area S1 is the attention center point, so the nose S11 is determined as the second position point, that is, the center of the displayed cropping frame, and the entire face area of little boy 01 can subsequently be cropped directly based on the nose S11.
  • step S207 when the area ratio of the target area in the target image is less than the set threshold, a second position point that satisfies the target condition is determined in the target image, so that the first position point and the second position point are separated by a preset distance, and the preset distance is greater than 0.
  • For example, the attention center point is placed at a specified position of the displayed cropping frame, so that the cropping center of the cropping frame is separated from the attention center point by a preset distance greater than 0. The first position point and the second position point may both lie on a horizontal or vertical dividing line of the cropping frame, and the second position point may be determined based on that dividing line and the preset distance; alternatively, starting from the first position point, the second position point is determined by a vector whose length is the preset distance. The preset distance can be set manually and is greater than 0; in this case, the cropping center and the attention center point do not coincide, that is, the first position point and the second position point are separated by the preset distance.
  • For example, the target area is the area where a face in the target image is located. If the area ratio of the face in the target image is less than the set threshold, the attention center point is placed in the upper-middle part of the displayed cropping frame, so that the subsequent crop retains both the face and the upper body of the person within the frame.
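  • A sketch of step S207, assuming the offset direction is chosen per use case (the disclosure fixes only that the distance is greater than 0); the default downward shift reproduces the face-in-upper-middle example:

```python
def second_position_point(first_point, preset_distance, direction=(0.0, 1.0)):
    """Place the cropping center a preset distance (> 0) from the attention
    center point along a unit direction vector. Shifting the crop center
    downward puts the attention center in the upper-middle of the frame."""
    assert preset_distance > 0
    x, y = first_point
    dx, dy = direction
    return (int(round(x + preset_distance * dx)),
            int(round(y + preset_distance * dy)))
```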
  • step S208 when the area ratio of the target area in the target image is greater than or equal to the set threshold, the target image is cropped according to the second position point and the cropping range information, and the process ends.
  • When the area ratio of the target area in the target image is greater than or equal to the set threshold, it is determined that the target area occupies a sufficiently large part of the target image, so the image does not need further processing; it is only necessary to crop the target image directly.
  • the size indicated by the cropping range information can be fixed.
  • the size is the size of the displayed cropping frame.
  • The size of the cropping frame can be larger than the size of the target area, so that the complete target area is included in the cropping frame; the cropping frame may also include other areas around the target area, so that the area within the cropping frame is the largest area that fits the size of the cropping frame.
  • For example, in Fig. 3, the nose S11 of little boy 01 is the attention center point, and the attention center point serves as the cropping center of the cropping frame. The area ratio in the target image of the area S1 (that is, the target area) where the face of little boy 01 is located is 1/30; with the threshold set to 1/50, the area ratio of the target area S1 in the target image is greater than the set threshold, so the display position of the cropping frame is determined according to the second position point and the cropping range information. The displayed cropping frame is shown as N in Fig. 3; the area inside cropping frame N includes not only the area S1 where the face of little boy 01 is located, but also the body of little boy 01 and part of seesaw 03. After the image is cropped, the area inside cropping frame N is obtained.
  • step S209 when the area ratio of the target area in the target image is less than the set threshold, the target image is enlarged according to the target enlargement ratio, which is determined by the area ratio of the target area in the target image.
  • When the area ratio of the target area in the target image is less than the set threshold, the target image needs to be enlarged according to the target enlargement ratio. After enlargement, the short side of the target image will be larger than the side of the cropping frame, that is, the cropping frame lies entirely inside the target image. The area of the target area in the target image also increases, so that the subsequent crop can include more of the target area that the user needs to keep.
  • step A5 may also be included before the above step S209:
  • step A5 the target magnification ratio is determined according to the area ratio of the target area in the target image.
  • Before enlargement processing is performed on the target image according to the target enlargement ratio, the target enlargement ratio needs to be determined according to the area ratio of the target area in the target image. When an object is detected in the target image, the target enlargement ratio is determined according to the area ratio of the target object in the target image; when no object is detected, the target enlargement ratio is determined according to the area ratio of the salient area in the target image.
  • The area ratio of the target area in the target image is inversely related to the target enlargement ratio: the smaller the area ratio of the target area in the target image, the larger the corresponding target enlargement ratio; the larger the area ratio, the smaller the corresponding target enlargement ratio.
  • In addition, when the area ratio of the target area in the target image exceeds a preset upper limit, directly cropping the target image with the cropping frame may split the target area into two parts; for example, when the target area is the area where a face is located, part of the face could be cut off. In this case, a preset reduction ratio can be determined according to the area ratio of the target area in the target image, and the target image is reduced according to this ratio. Then, according to the target area in the reduced target image and the size of the cropping frame, the cropping range of the cropping frame is determined, so that when the cropping frame is used to crop the reduced target image, the complete target area is retained.
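  • The disclosure fixes only the monotonic relationship (smaller area ratio, larger enlargement ratio; above the upper limit, a reduction instead) and not a formula. One hedged choice, assumed here, scales the image so the target area would reach a reference fraction, using a square root because area grows with the square of the linear scale:

```python
import math

def target_scale_ratio(area_ratio, threshold, upper_limit):
    """Return a linear scale factor: > 1 enlarges when the target area is
    too small, < 1 reduces when it is too large, 1.0 means crop directly."""
    if area_ratio < threshold:
        return math.sqrt(threshold / area_ratio)    # enlarge
    if area_ratio > upper_limit:
        return math.sqrt(upper_limit / area_ratio)  # reduce
    return 1.0
```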
  • step S210 the enlarged target image is cropped according to the size indicated by the cropping range information with the second position point as the center.
  • After the target image is enlarged, the size of the cropped area can be determined. Since the target area is also enlarged, most of the area within the displayed cropping frame is the enlarged target area, that is, the target area occupies a relatively large portion of the cropping frame, and only a small part of the area outside the target area lies within the frame. In this way, the image cropped with the cropping frame includes more of the target area that the user needs to keep, and because the target area within the frame has been enlarged, the cropped target area is also clearer.
  • Optionally, the size indicated by the cropping range information may be adjusted according to a target reduction ratio, where the target reduction ratio is determined by the area ratio of the target area in the target image; with the second position point as the center, the target image is cropped according to the adjusted size indicated by the cropping range information.
  • the size indicated by the cropping range information is variable.
  • the size indicated by the cropping range information may be a fixed frame ratio, such as 1:1, 3:4, and 16:9.
  • In this case, the cropping size can be adjusted according to the frame ratio, that is, the size of the displayed cropping frame is adjusted so that the longest side of the cropping frame is not larger than the shortest side of the target image (the entire cropping frame lies within the target image) and the shortest side of the cropping frame is not smaller than the shortest side of the target area (the entire target area lies within the cropping frame). The cropping frame may also include other areas around the target area, so that the area within the cropping frame is the largest area satisfying the frame ratio of the cropping frame.
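  • A sketch of these two constraints under an assumed width/height frame ratio (1.0, 3/4, 16/9, ...): start from the largest frame whose longest side fits the shortest image side, then verify that the frame's shortest side still covers the target area's shortest side:

```python
def crop_frame_size(image_wh, target_wh, frame_ratio):
    """Largest cropping frame with the given width/height ratio such that
    its longest side fits the shortest image side; returns None when its
    shortest side would fall below the target area's shortest side."""
    img_w, img_h = image_wh
    tgt_w, tgt_h = target_wh
    long_side = min(img_w, img_h)
    if frame_ratio >= 1.0:
        w, h = long_side, long_side / frame_ratio
    else:
        w, h = long_side * frame_ratio, long_side
    if min(w, h) < min(tgt_w, tgt_h):
        return None  # resizing is needed before cropping
    return int(w), int(h)
```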
  • In addition, when step S202 is performed and it is determined that the target image includes multiple objects, the following step A6 may further be included after step S206 or step S207:
  • step A6 according to the position of the detection frame corresponding to the target object and the position of the detection frame corresponding to objects other than the target object, at least one of the first position point and the size indicated by the cropping range information is adjusted.
  • the size indicated by the cropping range information can be fixed.
  • In this case, the target image can be reduced according to the position of the detection frame corresponding to the target object and the positions of the detection frames corresponding to the other objects, and the reduced target image is then cropped based on the first position point and the cropping range information, so that no object in the reduced target image is divided, that is, the area within the cropping range of the cropping frame includes each object completely.
  • For example, the target image includes 3 faces; the area where the middle face is located is determined as the target area, and the first position point is the nose of the middle face. If the positions of the other two faces are not considered when the target image is cropped according to the first position point and the size indicated by the cropping range information, part of each of the other two faces may fall within the cropped area, that is, the crop would divide the other two faces. Such cropping does not meet the user's needs, so the positions of the other two faces must be considered so that the cropping frame does not divide them.
  • Accordingly, the target image can be reduced so that the areas where all three faces are located fall within the cropped area and the cropping frame retains all three faces; in this way, the cropped target image includes more objects. Alternatively, the size indicated by the cropping range information can be reduced so that the middle face lies within the cropped area while the other faces lie entirely outside it.
  • the size indicated by the cropping range information is variable.
  • the size indicated by the cropping range information may be a fixed frame ratio, such as 1:1, 3:4, and 16:9.
  • In this case, the cropping size can be reduced according to the frame ratio, that is, the size of the displayed cropping frame is adjusted, and the target image is cropped based on the first position point and the adjusted size indicated by the cropping range information, so that no object in the target image is divided and the area within the cropping range of the cropping frame includes each object completely.
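  • The splitting test that step A6 relies on can be sketched as a simple rectangle check; when it reports a split, the image is reduced or the frame size adjusted and the check repeated:

```python
def frame_splits_object(crop_xywh, object_frames):
    """True when some detection frame is neither fully inside nor fully
    outside the crop rectangle, i.e. the crop would divide that object.
    All rectangles are (x, y, w, h)."""
    cx, cy, cw, ch = crop_xywh
    for x, y, w, h in object_frames:
        fully_inside = (cx <= x and cy <= y and
                        x + w <= cx + cw and y + h <= cy + ch)
        fully_outside = (x + w <= cx or x >= cx + cw or
                         y + h <= cy or y >= cy + ch)
        if not (fully_inside or fully_outside):
            return True
    return False
```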
  • The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects. By detecting the target area in the target image and determining, according to the target area, a first position point related to the detection method of the target area, a point that the user may be interested in is obtained; combining the first position point with the cropping range information indicating the size of the area that needs to be retained, the target image is cropped automatically, so that the cropped target image includes the area the user needs to keep, the user no longer needs to crop manually, and the cropping effect better meets the user's needs. Moreover, the area ratio of the target area in the target image is taken into account: when the ratio is less than the set threshold, the target image is enlarged, so that the image cropped with the cropping frame includes more of the target area that the user needs to keep, and the cropped target area is clearer.
  • Fig. 4 is a block diagram showing an image processing device according to an exemplary embodiment. As shown in Fig. 4, the image processing device 400 includes a target area detection module 401, a position point determination module 402, and a target image cropping module 403.
  • the target area detection module 401 is configured to detect the target area in the target image
  • the location point determination module 402 is configured to determine a first location point in the target image according to the target area, and the first location point is used to indicate a location point in the target area that is related to the detection method of the target area;
  • the target image cropping module 403 is configured to crop the target image according to the first position point and cropping range information, where the cropping range information is used to indicate the size of an area that needs to be reserved in the target image.
  • the target image cropping module 403 includes:
  • the area ratio detection sub-module 4031 is configured to detect the area ratio of the target area in the target image
  • the first determining sub-module 4032 is configured to determine the first location point as the second location point when the area ratio of the target area in the target image is greater than or equal to a set threshold;
  • the second determining sub-module 4033 is configured to determine a second position point in the target image that satisfies the target condition when the area ratio of the target area in the target image is less than the set threshold, so that the first position The point is separated from the second position point by a preset distance, and the preset distance is greater than 0;
  • the target image cropping sub-module 4034 is configured to crop the target image according to the second position point and the cropping range information.
  • the target image cropping sub-module 4034 is configured to enlarge the target image according to a target enlargement ratio, and the target enlargement ratio is determined by the area ratio of the target area in the target image ; With the second position point as the center, crop the enlarged target image according to the size indicated by the cropping range information.
  • the target image cropping submodule 4034 is configured to adjust the size indicated by the cropping range information according to a target reduction ratio, and the target reduction ratio is determined by the area of the target area in the target image. The ratio is determined; with the second position as the center, the target image is cropped according to the size indicated by the adjusted cropping range information.
  • the target area detection module 401 includes:
  • the object detection sub-module 4011 is configured to perform object detection on the target image
  • the target object screening submodule 4012 is configured to, when at least one object is detected in the target image, filter out the target object from the at least one object according to a preset condition, and determine the area included in the detection frame corresponding to the target object Is the target area;
  • the salient area recognition sub-module 4013 is configured to identify a salient area in the target image when no object is detected in the target image, and determine the salient area as the target area.
  • the target area detection module 401 includes:
  • the salient area identification sub-module 4013 is configured to identify the salient area in the target image and determine the salient area as the target area.
  • the target area detection module 401 includes:
  • the object detection sub-module 4011 is configured to perform object detection on the target image to obtain at least one object
  • the target object screening sub-module 4012 is configured to filter out the target object from the at least one object according to a preset condition, and determine the area included in the detection frame corresponding to the target object as the target area.
  • the target image includes a plurality of objects
  • the target image cropping module 403 is configured to adjust at least one of the first position point and the size indicated by the cropping range information according to the position of the detection frame corresponding to the target object and the positions of the detection frames corresponding to objects other than the target object.
  • the position point determination module 402 is configured to perform any one of the following steps:
  • the center of gravity of the target area is determined as the first position point.
  • By detecting the target area in the target image and determining, according to the target area, the first position point related to the detection method of the target area, a point that the user may be interested in is obtained. The first position point is combined with the cropping range information indicating the size of the area that needs to be retained in the target image, and the target image is cropped automatically, so that the cropped target image includes the area the user needs to retain; the user no longer needs to crop manually, and the cropping effect better meets the user's needs.
  • an electronic device is provided, including: a processor; a memory for storing program code executable by the processor; wherein the processor is configured to execute:
  • the target area determine a first position point in the target image, where the first position point is used to indicate a position point in the target area that is related to the detection method of the target area;
  • the target image is cropped according to the first position point and the cropping range information, and the cropping range information is used to indicate the size of the area that needs to be reserved in the target image.
  • the processor is further configured to execute:
  • the first position point is determined as the second position point
  • a second position point that satisfies the target condition is determined in the target image, so that the first position point and the second position point are separated by a preset distance, and the preset distance is greater than 0;
  • the target image is cropped according to the second position point and the cropping range information.
  • the processor is further configured to execute:
  • the enlarged target image is cropped according to the size indicated by the cropping range information.
  • the processor is further configured to execute:
  • the target image is cropped according to the size indicated by the adjusted cropping range information.
  • the processor is further configured to execute:
  • the target object is selected from the at least one object according to a preset condition, and the area included in the detection frame corresponding to the target object is determined as the target area;
  • a salient area in the target image is identified, and the salient area is determined as the target area.
  • the processor is further configured to execute:
  • the processor is further configured to execute:
  • the target object is filtered out from the at least one object according to a preset condition, and the area included in the detection frame corresponding to the target object is determined as the target area.
  • the processor is further configured to execute:
  • the processor is further configured to execute any of the following steps:
  • the center of gravity of the target area is determined as the first position point.
  • the electronic device can be provided as a terminal or a server.
  • the terminal can implement the operations performed by the image processing method
  • The server can also implement the operations performed by the image processing method: the server receives a target image sent by the terminal and processes it; the server and the terminal can also interact to implement the operations of the image processing method; or the terminal sends an image processing request to the server, the server performs the image processing and feeds the result back to the terminal, and the terminal outputs the result.
  • FIG. 6 is a block diagram showing a terminal 600 according to an exemplary embodiment.
  • the terminal 600 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • the terminal 600 may also be called user equipment, portable terminal, laptop terminal, desktop terminal and other names.
  • the terminal 600 includes a processor 601 and a memory 602.
  • the processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 601 can be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 601 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 601 may be integrated with a GPU (Graphics Processing Unit), and the GPU is responsible for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 601 may further include an AI (Artificial Intelligence) processor, and the AI processor is used to process calculation operations related to machine learning.
  • the memory 602 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 602 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 601 to implement the image processing provided by the method embodiments of the present disclosure. method.
  • the terminal 600 may optionally further include: a peripheral device interface 603 and at least one peripheral device.
  • the processor 601, the memory 602, and the peripheral device interface 603 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 603 through a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 604, a display screen 605, a camera component 606, an audio circuit 607, a positioning component 608, and a power supply 609.
  • the peripheral device interface 603 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 601 and the memory 602.
  • the processor 601, the memory 602, and the peripheral device interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one of the processor 601, the memory 602, and the peripheral device interface 603 or The two can be implemented on separate chips or circuit boards, which are not limited in this embodiment.
  • the radio frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 604 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 604 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and so on.
  • the radio frequency circuit 604 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: metropolitan area network, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area network and/or WiFi (Wireless Fidelity, wireless fidelity) network.
  • the radio frequency circuit 604 may also include NFC (Near Field Communication) related circuits, which is not limited in the present disclosure.
  • the display screen 605 is used to display UI (User Interface).
  • the UI can include graphics, text, icons, videos, and any combination thereof.
  • when the display screen 605 is a touch display screen, the display screen 605 also has the ability to collect touch signals on or above the surface of the display screen 605.
  • the touch signal can be input to the processor 601 as a control signal for processing.
  • the display screen 605 may also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • there may be one display screen 605, provided on the front panel of the terminal 600; in other embodiments, there may be at least two display screens 605, respectively provided on different surfaces of the terminal 600 or in a folding design; in still other embodiments, the display screen 605 may be a flexible display screen, disposed on a curved or folding surface of the terminal 600. The display screen 605 can even be set as a non-rectangular irregular shape, that is, a special-shaped screen.
  • the display screen 605 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
  • the camera assembly 606 is used to capture images or videos.
  • the camera assembly 606 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
  • there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to fuse the main camera and the depth-of-field camera to realize the background blur function, fuse the main camera and the wide-angle camera to realize panoramic shooting and VR (Virtual Reality) shooting functions, or realize other fused shooting functions.
  • the camera assembly 606 may also include a flash.
  • the flash can be a single color temperature flash or a dual color temperature flash. A dual color temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
  • the audio circuit 607 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 601 for processing, or input to the radio frequency circuit 604 to implement voice communication. For the purpose of stereo collection or noise reduction, there may be multiple microphones, which are respectively set in different parts of the terminal 600.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert the electrical signal from the processor 601 or the radio frequency circuit 604 into sound waves.
  • the speaker can be a traditional membrane speaker or a piezoelectric ceramic speaker.
  • when the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 607 may also include a headphone jack.
  • the positioning component 608 is used to locate the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service, location-based service).
  • the positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 609 is used to supply power to various components in the terminal 600.
  • the power source 609 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 600 further includes one or more sensors 610.
  • the one or more sensors 610 include, but are not limited to: an acceleration sensor 611, a gyroscope sensor 612, a pressure sensor 613, a fingerprint sensor 614, an optical sensor 615, and a proximity sensor 616.
  • the acceleration sensor 611 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 600.
  • the acceleration sensor 611 can be used to detect the components of gravitational acceleration on three coordinate axes.
  • the processor 601 can control the display screen 605 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 611.
  • the acceleration sensor 611 may also be used for the collection of game or user motion data.
  • the gyroscope sensor 612 can detect the body direction and rotation angle of the terminal 600, and the gyroscope sensor 612 can cooperate with the acceleration sensor 611 to collect the user's 3D actions on the terminal 600.
  • the processor 601 can implement the following functions according to the data collected by the gyroscope sensor 612: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 613 may be disposed on the side frame of the terminal 600 and/or the lower layer of the display screen 605.
  • when the pressure sensor 613 is disposed on the side frame of the terminal 600, it can detect the user's holding signal on the terminal 600, and the processor 601 performs left/right hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 613.
  • when the pressure sensor 613 is disposed on the lower layer of the display screen 605, the processor 601 controls the operability controls on the UI according to the user's pressure operation on the display screen 605.
  • the operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 614 is used to collect the user's fingerprint, and the processor 601 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the user's identity according to the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 601 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 614 may be provided on the front, back or side of the terminal 600. When a physical button or a manufacturer logo is provided on the terminal 600, the fingerprint sensor 614 can be integrated with the physical button or the manufacturer logo.
  • the optical sensor 615 is used to collect the ambient light intensity.
  • the processor 601 may control the display brightness of the display screen 605 according to the ambient light intensity collected by the optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the display screen 605 is increased; when the ambient light intensity is low, the display brightness of the display screen 605 is decreased.
  • the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
  • the proximity sensor 616, also called a distance sensor, is usually arranged on the front panel of the terminal 600.
  • the proximity sensor 616 is used to collect the distance between the user and the front of the terminal 600.
  • when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 gradually decreases, the processor 601 controls the display screen 605 to switch from the bright-screen state to the off-screen state; when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 gradually increases, the processor 601 controls the display screen 605 to switch from the off-screen state to the bright-screen state.
  • the structure shown in FIG. 6 does not constitute a limitation on the terminal 600, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
  • FIG. 7 is a block diagram showing a server 700 according to an exemplary embodiment.
  • the server 700 may vary considerably in configuration or performance, and may include one or more processors (Central Processing Units, CPU) 701 and one or more memories 702.
  • the storage media included in the memory 702 may be ROM 703 and random access memory (RAM) 704.
  • the memory 702 stores at least one piece of program code, and the at least one piece of program code is loaded and executed by the processor 701 to implement the image processing methods provided by the foregoing method embodiments.
  • the server may also have components such as a wired or wireless network interface 705 and an input/output interface 706 for input and output.
  • the server 700 may also include a large-capacity storage device 707 and other components for implementing device functions, which are not described in detail here.
  • a storage medium including program code is also provided, such as a memory including program code.
  • the program code may be executed by a processor of an electronic device to complete the image processing method described above.
  • the storage medium may be a non-transitory computer-readable storage medium; for example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a computer program product is also provided.
  • when the program code in the computer program product is executed by the processor of the electronic device, the electronic device can execute the above-mentioned image processing method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An image processing method and apparatus, an electronic device, and a storage medium, relating to the field of information processing technology. The method includes: detecting a target region in a target image (101); determining, according to the target region, a first position point in the target image, the first position point representing a position point in the target region that is related to the detection manner of the target region (102); and cropping the target image according to the first position point and cropping range information (103), the cropping range information representing the size of the region of the target image that needs to be retained.

Description

Image processing method and apparatus, electronic device, and storage medium
The present disclosure claims priority to Chinese patent application No. 201910843971.5, filed on September 6, 2019 and entitled "Image processing method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference; that Chinese patent application in turn claims priority to Chinese patent application No. 201910632034.5, filed on July 12, 2019 and entitled "Image processing method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of information processing technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
In daily life and work, users often need to crop a target image to fit various display requirements. One such scenario: the target image is uploaded to an app (application), such as WeChat, to be displayed as the application's profile picture. In most cases, the shape of the target image does not meet the application's requirements; for example, the target image is not square, while an application usually requires a square profile picture when the target image is uploaded as one, so the target image needs to be cropped.
At present, there are two methods of cropping a target image. The first is manual cropping by the user, which is time-consuming, inefficient, and unsuitable for cropping target images in bulk. The second directly crops away the periphery of the target image and keeps only its central region for display; this tends to cut away the regions the user needs to retain, giving a poor cropping result.
Summary
The present disclosure provides an image processing method and apparatus, an electronic device, and a storage medium, to at least solve the problems in the related art that cropping a target image is time-consuming, inefficient, and yields poor cropping results. The technical solution of the present disclosure is as follows:
According to a first aspect of embodiments of the present disclosure, an image processing method is provided, including:
detecting a target region in a target image;
determining, according to the target region, a first position point in the target image, the first position point representing a position point in the target region related to the detection manner of the target region;
cropping the target image according to the first position point and cropping range information, the cropping range information representing the size of the region of the target image that needs to be retained.
Optionally, the step of cropping the target image according to the first position point and the cropping range information includes:
detecting the area proportion of the target region in the target image;
when the area proportion of the target region in the target image is greater than or equal to a set threshold, determining the first position point as the second position point;
when the area proportion of the target region in the target image is smaller than the set threshold, determining in the target image a second position point satisfying a target condition, such that the first position point and the second position point are separated by a preset distance, the preset distance being greater than 0;
cropping the target image according to the second position point and the cropping range information.
Optionally, the step of cropping the target image according to the second position point and the cropping range information includes:
enlarging the target image by a target enlargement ratio, the target enlargement ratio being determined by the area proportion of the target region in the target image;
cropping the enlarged target image centered on the second position point, according to the size indicated by the cropping range information.
Optionally, the step of cropping the target image according to the second position point and the cropping range information includes:
adjusting the size indicated by the cropping range information by a target reduction ratio, the target reduction ratio being determined by the area proportion of the target region in the target image;
cropping the target image centered on the second position point, according to the adjusted size indicated by the cropping range information.
Optionally, the step of detecting the target region in the target image includes:
performing object detection on the target image;
when at least one object is detected in the target image, selecting a target object from the at least one object according to a preset condition, and determining the region enclosed by the detection box corresponding to the target object as the target region;
when no object is detected in the target image, identifying the salient region in the target image, and determining the salient region as the target region.
Optionally, the step of detecting the target region in the target image includes:
identifying the salient region in the target image, and determining the salient region as the target region.
Optionally, the step of detecting the target region in the target image includes:
performing object detection on the target image to obtain at least one object;
selecting a target object from the at least one object according to a preset condition, and determining the region enclosed by the detection box corresponding to the target object as the target region.
Optionally, the target image includes multiple objects, and the step of cropping the target image according to the first position point and the cropping range information includes:
adjusting at least one of the first position point and the size indicated by the cropping range information according to the position of the detection box corresponding to the target object and the positions of the detection boxes corresponding to objects other than the target object.
Optionally, the step of determining, according to the target region, the first position point in the target image includes any one of the following:
in response to the target region being obtained by object detection, determining the center of the target region as the first position point;
in response to the target region being obtained by object detection, determining any object feature point in the target region as the first position point;
in response to the target region being obtained by saliency detection, determining the centroid of the target region as the first position point.
According to a second aspect of embodiments of the present disclosure, an image processing apparatus is provided, including:
a target region detection module, configured to detect a target region in a target image;
a position point determination module, configured to determine, according to the target region, a first position point in the target image, the first position point representing a position point in the target region related to the detection manner of the target region;
a target image cropping module, configured to crop the target image according to the first position point and cropping range information, the cropping range information representing the size of the region of the target image that needs to be retained.
Optionally, the target image cropping module includes:
an area proportion detection sub-module, configured to detect the area proportion of the target region in the target image;
a first determination sub-module, configured to determine the first position point as the second position point when the area proportion of the target region in the target image is greater than or equal to a set threshold;
a second determination sub-module, configured to determine, when the area proportion of the target region in the target image is smaller than the set threshold, a second position point in the target image satisfying a target condition, such that the first position point and the second position point are separated by a preset distance, the preset distance being greater than 0;
a target image cropping sub-module, configured to crop the target image according to the second position point and the cropping range information.
Optionally, the target image cropping sub-module is configured to enlarge the target image by a target enlargement ratio, the target enlargement ratio being determined by the area proportion of the target region in the target image, and to crop the enlarged target image centered on the second position point according to the size indicated by the cropping range information.
Optionally, the target image cropping sub-module is configured to adjust the size indicated by the cropping range information by a target reduction ratio, the target reduction ratio being determined by the area proportion of the target region in the target image, and to crop the target image centered on the second position point according to the adjusted size indicated by the cropping range information.
Optionally, the target region detection module includes:
an object detection sub-module, configured to perform object detection on the target image;
a target object selection sub-module, configured to select, when at least one object is detected in the target image, a target object from the at least one object according to a preset condition, and determine the region enclosed by the detection box corresponding to the target object as the target region;
a salient region identification sub-module, configured to identify, when no object is detected in the target image, the salient region in the target image, and determine the salient region as the target region.
Optionally, the target region detection module includes:
a salient region identification sub-module, configured to identify the salient region in the target image, and determine the salient region as the target region.
Optionally, the target region detection module includes:
an object detection sub-module, configured to perform object detection on the target image to obtain at least one object;
a target object selection sub-module, configured to select a target object from the at least one object according to a preset condition, and determine the region enclosed by the detection box corresponding to the target object as the target region.
Optionally, the target image includes multiple objects, and the target image cropping module is configured to adjust at least one of the first position point and the size indicated by the cropping range information according to the position of the detection box corresponding to the target object and the positions of the detection boxes corresponding to objects other than the target object.
Optionally, the position point determination module is configured to perform any one of the following steps:
in response to the target region being obtained by object detection, determining the center of the target region as the first position point;
in response to the target region being obtained by object detection, determining any object feature point in the target region as the first position point;
in response to the target region being obtained by saliency detection, determining the centroid of the target region as the first position point.
According to a third aspect of embodiments of the present disclosure, an electronic device is provided, including:
a processor;
a memory for storing program code executable by the processor;
wherein the processor is configured to execute:
detecting a target region in a target image;
determining, according to the target region, a first position point in the target image, the first position point representing a position point in the target region related to the detection manner of the target region;
cropping the target image according to the first position point and cropping range information, the cropping range information representing the size of the region of the target image that needs to be retained.
Optionally, the processor is further configured to execute:
detecting the area proportion of the target region in the target image;
when the area proportion of the target region in the target image is greater than or equal to a set threshold, determining the first position point as the second position point;
when the area proportion of the target region in the target image is smaller than the set threshold, determining in the target image a second position point satisfying a target condition, such that the first position point and the second position point are separated by a preset distance, the preset distance being greater than 0;
cropping the target image according to the second position point and the cropping range information.
Optionally, the processor is further configured to execute:
enlarging the target image by a target enlargement ratio, the target enlargement ratio being determined by the area proportion of the target region in the target image;
cropping the enlarged target image centered on the second position point, according to the size indicated by the cropping range information.
Optionally, the processor is further configured to execute:
adjusting the size indicated by the cropping range information by a target reduction ratio, the target reduction ratio being determined by the area proportion of the target region in the target image;
cropping the target image centered on the second position point, according to the adjusted size indicated by the cropping range information.
Optionally, the processor is further configured to execute:
performing object detection on the target image;
when at least one object is detected in the target image, selecting a target object from the at least one object according to a preset condition, and determining the region enclosed by the detection box corresponding to the target object as the target region;
when no object is detected in the target image, identifying the salient region in the target image, and determining the salient region as the target region.
Optionally, the processor is further configured to execute:
identifying the salient region in the target image, and determining the salient region as the target region.
Optionally, the processor is further configured to execute:
performing object detection on the target image to obtain at least one object;
selecting a target object from the at least one object according to a preset condition, and determining the region enclosed by the detection box corresponding to the target object as the target region.
Optionally, the processor is further configured to execute:
adjusting at least one of the first position point and the size indicated by the cropping range information according to the position of the detection box corresponding to the target object and the positions of the detection boxes corresponding to objects other than the target object.
Optionally, the processor is further configured to execute any one of the following steps:
in response to the target region being obtained by object detection, determining the center of the target region as the first position point;
in response to the target region being obtained by object detection, determining any object feature point in the target region as the first position point;
in response to the target region being obtained by saliency detection, determining the centroid of the target region as the first position point.
The memory stores the program code, which is executed to implement the image processing method described above.
According to a fourth aspect of embodiments of the present disclosure, a storage medium is provided; when the program code in the storage medium is executed by a processor of an electronic device, the electronic device is caused to perform the image processing method described above.
According to a fifth aspect of embodiments of the present disclosure, a computer program product is provided; when the program code in the computer program product is executed by a processor of an electronic device, the electronic device is caused to perform the image processing method described above.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
By detecting the target region in the target image and determining, according to the target region, a first position point in the target image related to the detection manner of the target region, a point the user is likely to be interested in can be obtained. Combining this first position point with the cropping range information representing the size of the region of the target image that needs to be retained, the target image is cropped automatically, so that the cropped target image includes the region the user needs to retain; the user no longer needs to crop it manually, and the cropping result better meets the user's needs.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure, without unduly limiting it.
FIG. 1 is a flowchart of an image processing method according to an exemplary embodiment;
FIG. 2 is a flowchart of another image processing method according to an exemplary embodiment;
FIG. 3 is a schematic diagram of cropping a target image according to an exemplary embodiment;
FIG. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment;
FIG. 5 is a block diagram of another image processing apparatus according to an exemplary embodiment;
FIG. 6 is a block diagram of a terminal according to an exemplary embodiment;
FIG. 7 is a block diagram of a server according to an exemplary embodiment.
Detailed Description
To enable those of ordinary skill in the art to better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the specification and claims of the present disclosure and in the above drawings are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The embodiments of the present disclosure apply to various scenarios in which a target image needs to be cropped. For example, when a target image is uploaded to a specified application as a profile picture, the target image needs to be cropped into a square for display; or a large number of target images are uploaded to a specified application for arranged display, and each displayed target image needs to be cropped into the required shape, for instance, when a user displays personal images on a personal homepage, the displayed images need to be cropped into squares; or, when a link is shared, the shared link includes text and a target image, and the target image in the shared link also needs to be cropped.
Of course, it can be understood that the embodiments of the present disclosure are not limited to the above target image cropping scenarios.
FIG. 1 is a flowchart of an image processing method according to an exemplary embodiment. As shown in FIG. 1, the method may include the following steps:
In step S101, a target region in a target image is detected.
In the embodiments of the present disclosure, the target image is the image to be cropped; in practice, it may be a video frame, a still image, or the like. The target image is detected to identify the target region in the target image.
The target region may be the region where a target object in the target image is located, or a salient region in the target image.
Optionally, algorithms such as Faster R-CNN (Faster Regions with CNN features) or SSD (Single Shot MultiBox Detector) may be used to detect the target object in the target image, and the region where the target object is located, that is, the region enclosed by the detection box corresponding to the target object, is taken as the target region.
Optionally, if no target object is detected in the target image, saliency detection is performed on the target image. Saliency detection is a technique that computes the saliency of an image by analyzing features such as its color, intensity, and orientation, and generates a saliency map of the image. The saliency map is a grayscale image of the same size as the original image (i.e., the target image), or proportionally reduced, in which each pixel is represented by a specific gray value; different gray values represent different degrees of saliency. The salient and non-salient regions of the target image can thus be distinguished according to the gray values, and the salient region is determined as the target region.
It should be noted that object detection may also be skipped, with saliency detection performed directly on the target image.
In step S102, a first position point is determined in the target image according to the target region, the first position point representing a position point in the target region related to the detection manner of the target region.
In the embodiments of the present disclosure, according to the target region in the target image, a first position point related to the detection manner of the target region can be determined in the target image. In response to the target region being obtained by object detection, the first position point may be the center of the target region; in response to the target region being obtained by saliency detection, the first position point may be the centroid of the target region. Optionally, when the target region is obtained by object detection, the first position point may also be any object feature point in the target region; for example, when the target region is the region where a face is located, that is, the region enclosed by the detection box corresponding to the face, the object feature point may be an eye or the nose of the face, and the eye or nose is taken as the first position point.
It should be noted that the first position point may be the center position point of the region the user pays attention to, in which case the first position point may also be called the attention center point.
When cropping the target image, the first position point may be taken as the center position for cropping, that is, the cropping center; the cropping center may also be determined from the attention center point, that is, the first position point. In other words, the cropping center can be set flexibly according to the attention center point, and may or may not coincide with it.
Whether the cropping center coincides with the attention center point can be determined according to the area proportion of the target region in the target image.
Optionally, cropping range information is obtained; the cropping range information identifies the size of the region of the target image that needs to be retained.
After the cropping center, that is, the first position point, is determined, it is still necessary to determine which regions of the target image should be retained and which should be cropped away, that is, to determine the cropping range information for cropping the target image; this information determines the size of the region of the target image that needs to be retained.
Optionally, the region to be retained may be indicated by a cropping box, determined by the first position point and the cropping range information and used to show the user the region retained after cropping. The region inside the cropping box is the region of the target image that needs to be retained, that is, the region kept after cropping; the region outside the cropping box is the region of the target image that is cropped away.
In step S103, the target image is cropped according to the first position point and the cropping range information.
In the embodiments of the present disclosure, the specific position and size of the cropping box can be determined from the cropping center, that is, the first position point, and the cropping range information; the cropping box is then used to crop the target image, so that the cropped target image includes the region the user needs to retain, and the cropping result better meets the user's needs.
The shape of the cropping box may be a square, a circle, a regular hexagon, and so on; the shape of the target image cropped with the cropping box is correspondingly a square, a circle, a regular hexagon, and so on.
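To make the three steps concrete, the following is a minimal Python sketch of the S101 to S103 flow, assuming detection boxes are already available from some object detector and assuming a square cropping box; the function names (detect_target_region, first_position_point, crop_around) and the (x0, y0, x1, y1) box format are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def detect_target_region(boxes):
    """S101 (simplified): pick the detection box with the largest area as the
    target region; `boxes` is a list of (x0, y0, x1, y1) tuples from any
    detector. Returns None when no object was detected (the saliency
    fallback described above would then apply)."""
    if not boxes:
        return None
    return max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))

def first_position_point(box):
    """S102 (simplified): for a region obtained by object detection, use the
    box center as the first position point (the attention center point)."""
    x0, y0, x1, y1 = box
    return (x0 + x1) / 2.0, (y0 + y1) / 2.0

def crop_around(image, center, crop_size):
    """S103 (simplified): crop a crop_size x crop_size square centered on
    `center`, shifting the window so it stays inside the image."""
    h, w = image.shape[:2]
    cx, cy = center
    half = crop_size // 2
    x0 = int(np.clip(cx - half, 0, w - crop_size))
    y0 = int(np.clip(cy - half, 0, h - crop_size))
    return image[y0:y0 + crop_size, x0:x0 + crop_size]

# Usage: a dummy image and one detection box.
image = np.zeros((120, 200, 3), dtype=np.uint8)
box = detect_target_region([(40, 30, 90, 80)])
if box is not None:
    crop = crop_around(image, first_position_point(box), crop_size=100)
    print(crop.shape)  # (100, 100, 3)
```

Clamping the window to the image bounds in crop_around mirrors the requirement, discussed later, that the cropping box lie entirely inside the target image.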
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
By detecting the target region in the target image and determining the attention center point of the target image according to the target region, that is, determining in the target image a first position point related to the detection manner of the target region, a point the user is likely to be interested in can be obtained. Combining this first position point with the cropping range information representing the size of the region of the target image that needs to be retained, the target image is cropped automatically, so that the cropped target image includes the region the user needs to retain; the user no longer needs to crop it manually, and the cropping result better meets the user's needs.
FIG. 2 is a flowchart of another image processing method according to an exemplary embodiment. As shown in FIG. 2, the method may include the following steps:
In step S201, object detection is performed on the target image.
In the embodiments of the present disclosure, an image detection technique may be used to perform object detection on the target image and identify the objects in it; the image detection technique may be the Faster R-CNN algorithm (Faster Regions with CNN features), the SSD algorithm (Single Shot MultiBox Detector), or the like, which is not limited in the embodiments of the present disclosure.
The objects in the target image may be of multiple types, such as faces, animals, and plants; in face detection, the detected object can be refined down to facial feature points.
As shown in FIG. 3, the target image includes a little boy 01, a little girl 02, and a seesaw 03; image detection finds three objects in the target image: the face of the little boy 01, the face of the little girl 02, and the pivot of the seesaw 03.
In step S202, when at least one object is detected in the target image, a target object is selected from the at least one object according to a preset condition, and the region where the target object is located, that is, the region enclosed by the detection box corresponding to the target object, is determined as the target region.
In the embodiments of the present disclosure, when at least one object is detected in the target image, one or more detection boxes are output. A detection box is usually rectangular and may also be called a rectangular box; it frames an object in the target image. The object the user most needs to retain is selected from the at least one object according to the preset condition as the target object; the region enclosed by the rectangular box corresponding to the selected target object is the region where the target object is located, that is, the target region.
Optionally, among all objects in the target image, the object whose rectangular box has the largest area proportion is selected as the target object; or the object whose rectangular box has the smallest area proportion is selected as the target object; or the distance between the rectangular boxes of every two objects in the target image is computed, and any one of the objects whose rectangular boxes are separated by less than a set distance is determined as the target object, that is, one object is selected as the target object from several objects in the target image that are close together.
As shown in FIG. 3, three objects are detected in the target image: the face of the little boy 01, the face of the little girl 02, and the pivot of the seesaw 03, with corresponding rectangular boxes S1, S2, and S3. The rectangular box S1 frames the face of the little boy 01, S2 frames the face of the little girl 02, and S3 frames the pivot of the seesaw 03. When the preset condition is to select the object whose rectangular box has the largest area proportion, the object corresponding to the rectangular box S3 with the largest area proportion, that is, the pivot of the seesaw 03, is selected from the three objects as the target object, and the region enclosed by S3 is determined as the target region.
For example, when the preset condition is to select one object from several objects that are close together as the target object, suppose the target image includes five objects: face 1, face 2, face 3, face 4, and face 5. Face 1, face 2, and face 3 are close to each other; face 4 is far from faces 1, 2, 3, and 5; and face 5 is also far from faces 1, 2, and 3. Therefore, any one of face 1, face 2, and face 3 is selected as the target object, for example face 2, and the region where face 2 is located, that is, the region enclosed by the rectangular box corresponding to face 2, is taken as the target region.
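The two preset conditions just illustrated can be sketched as follows, assuming boxes are given as (x0, y0, x1, y1) tuples; select_by_max_area and select_from_closest_group are hypothetical names, and the pairwise center-distance rule is one assumed reading of "boxes separated by less than a set distance".

```python
import itertools
import math

def box_area(box):
    x0, y0, x1, y1 = box
    return (x1 - x0) * (y1 - y0)

def box_center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def select_by_max_area(boxes):
    """Preset condition 1: the object whose detection box has the largest area."""
    return max(boxes, key=box_area)

def select_from_closest_group(boxes, set_distance):
    """Preset condition 2 (an assumed reading): if any two boxes have centers
    closer than `set_distance`, any member of such a close pair qualifies as
    the target object; otherwise fall back to the largest box."""
    for a, b in itertools.combinations(boxes, 2):
        (ax, ay), (bx, by) = box_center(a), box_center(b)
        if math.hypot(ax - bx, ay - by) < set_distance:
            return a
    return select_by_max_area(boxes)

# Three detected boxes: two close together, one far away.
boxes = [(10, 10, 40, 40), (45, 12, 75, 42), (300, 300, 330, 330)]
print(select_from_closest_group(boxes, set_distance=60))  # (10, 10, 40, 40)
```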
In step S203, when no object is detected in the target image, the salient region in the target image is identified and determined as the target region.
In the embodiments of the present disclosure, it is also possible that object detection on the target image finds no object, that is, no target object is detected in the target image. In this case, saliency detection needs to be performed on the target image to identify its salient and non-salient regions, and the salient region is determined as the target region.
The above step S203 may include the following steps A1 and A2:
In step A1, the target image is converted into a grayscale image;
In step A2, the region of the grayscale image whose gray values lie within a preset grayscale range is determined as the salient region.
The target image includes multiple pixels. The R (red), G (green), and B (blue) values of each pixel are obtained, and the gray value of the corresponding pixel is computed from its R, G, and B values according to a preset conversion formula; displaying the computed gray values converts the target image into a grayscale image. A grayscale range may be preset: the region of the grayscale image whose gray values lie within the preset range is determined as the salient region, and the region whose gray values lie outside it is determined as the non-salient region. The salient region is the region of the target image that attracts more visual attention.
For example, the preset grayscale range may be 200 to 255. When a pixel of the grayscale image has a gray value of 230, it is determined to be a pixel of the salient region; when another pixel has a gray value of 50, it is determined to be a pixel of the non-salient region. Finally, the pixels located in the salient region are collected, yielding the salient region of the target image.
In an exemplary embodiment of the present disclosure, after the target image is converted into a grayscale image, the grayscale image is binarized: a grayscale threshold may be set, the gray value of pixels above the threshold is set to 255, and the gray value of pixels below the threshold is set to 0, so that each pixel's gray value is either 0 or 255 and the whole grayscale image has only the two visual effects of black and white; the region formed by all pixels with gray value 0 or with gray value 255 may then be determined as the salient region.
For example, the region formed by all pixels with gray value 255 may be determined as the salient region, and the region formed by all pixels with gray value 0 as the non-salient region.
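A minimal sketch of steps A1 and A2 plus the binarization variant follows. The disclosure only refers to "a preset conversion formula", so the sketch assumes the common BT.601 luma weights (0.299R + 0.587G + 0.114B), and it treats the white (255) pixels as the salient region, as in the example above; the centroid helper anticipates step A3 below, where the centroid of a saliency-derived target region serves as the first position point.

```python
import numpy as np

def to_grayscale(rgb):
    """Step A1: convert an HxWx3 RGB array to gray values, using assumed
    BT.601 luma weights as the preset conversion formula."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)

def salient_mask(gray, lo=200, hi=255):
    """Step A2: pixels whose gray value lies in the preset range [lo, hi]
    belong to the salient region."""
    return (gray >= lo) & (gray <= hi)

def binarize(gray, threshold=128):
    """Variant: set pixels above the threshold to 255 and the rest to 0;
    the white pixels are then taken as the salient region."""
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

def salient_centroid(mask):
    """Centroid of the salient region, usable later as the first position
    point when the target region comes from saliency detection."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[1:3, 1:3] = 255                       # a bright 2x2 patch
mask = binarize(to_grayscale(rgb)) == 255
print(salient_centroid(mask))             # centroid of the patch: (1.5, 1.5)
```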
It should be noted that, when detecting the target region in the target image, object detection may also be skipped and saliency detection performed directly: the salient region in the target image is identified and determined as the target region.
It should also be noted that, when detecting the target region in the target image, object detection may be performed without subsequent saliency detection: object detection is performed on the target image to obtain at least one object, a target object is selected from the at least one object according to a preset condition, and the region enclosed by the detection box corresponding to the target object is determined as the target region.
In step S204, a first position point is determined in the target image according to the target region.
According to the target region in the target image, a first position point related to the detection manner of the target region can be determined in the target image. The first position point may be the center position point of the region the user pays attention to, in which case it may also be called the attention center point. The above step S204 may include the following step A3 or A4:
In step A3, the center or the centroid of the target region is determined as the attention center point of the target image.
In response to the target region being obtained by object detection, the first position point may be the center of the target region; in response to the target region being obtained by saliency detection, the first position point may be the centroid of the target region.
That is, when the target region is the region where the target object is located, the center of that region, namely the center of the region enclosed by the detection box corresponding to the target object, is determined as the attention center point; when the target region is a salient region, the centroid of the salient region is taken as the attention center point.
In step A4, any object feature point in the target region is determined as the attention center point of the target image.
When the target region is obtained by object detection, at least one object feature point of the target region can be detected, and any one of all the object feature points in the target region is determined as the attention center point of the target image.
As shown in FIG. 3, the face of the little boy 01 is determined as the target object, and the region where it is located, that is, the rectangular box S1, is determined as the target region. The object feature points in the target region S1 include the eyes, nose, and mouth of the little boy 01; the object feature point S11 in the target region S1, that is, the nose of the little boy 01, is determined as the attention center point.
In step S205, the area proportion of the target region in the target image is detected.
The ratio between the area of the target region and the area of the target image is computed, giving the area proportion of the target region in the target image.
For example, the target region is the region S1 where the face of the little boy 01 is located. The area of the region S1 is detected to be 4 mm² and the area of the target image is 120 mm², so the area proportion of the region S1 in the target image is 1/30.
In step S206, when the area proportion of the target region in the target image is greater than or equal to a set threshold, the first position point is determined as the second position point, the second position point representing the center position of the region to be retained.
When the area proportion of the target region in the target image is greater than or equal to the set threshold, the target region in the target image is determined to be relatively large; therefore, the attention center point, that is, the first position point, can be directly determined as the center position of the region retained during cropping, that is, the second position point. Since the region retained during cropping can be indicated by the cropping box, the second position point may also be called the cropping center of the cropping box. The complete target region can subsequently be cropped out directly based on this second position point.
The set threshold may be set manually, or determined from the ratio between the area of the region enclosed by the cropping box and the area of the target image. In this case, the attention center point and the cropping center coincide, that is, the first position point and the second position point coincide.
For example, the set threshold is 1/50. In FIG. 3, the area proportion of the region S1 where the face of the little boy 01 is located (that is, the target region) in the target image is 1/30, so the area proportion of the target region S1 in the target image is determined to be greater than the set threshold. The nose S11 of the little boy 01 in the target region S1 is the attention center point; accordingly, the nose S11 is determined as the second position point, that is, the cropping center of the displayed cropping box, so that the whole face region of the little boy 01 can subsequently be cropped out directly based on the nose S11.
In step S207, when the area proportion of the target region in the target image is smaller than the set threshold, a second position point satisfying a target condition is determined in the target image, such that the first position point and the second position point are separated by a preset distance, the preset distance being greater than 0.
That is, the attention center point is placed at a specified position of the displayed cropping box, so that the cropping center of the cropping box and the attention center point are separated by a preset distance greater than 0.
It should be noted that the first position point and the second position point may lie on a horizontal split line or a vertical split line of the cropping box, the second position point being determined from the horizontal or vertical split line together with the preset distance; alternatively, the second position point may be determined from the first position point through a vector whose length is the preset distance.
When the area proportion of the target region in the target image is smaller than the set threshold, the target region in the target image is determined to be relatively small. In this case, the attention center point can be set at a specified position of the displayed cropping box, for example in its middle-upper part, so that the target region enclosed by the cropping box and the parts related to the target region can subsequently both be cropped out based on the second position point.
The preset distance may be set manually; it is not 0 but greater than 0. In this case, the cropping center and the attention center point do not coincide: they are separated by the preset distance, that is, the first position point and the second position point are separated by the preset distance.
For example, the target region is the region where a face is located in the target image. If the area proportion of the face in the target image is smaller than the set threshold, the attention center point is set in the middle-upper part of the displayed cropping box, so that the face and the upper body of the person enclosed by the cropping box can subsequently both be cropped out.
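One assumed way to realize this middle-upper placement is to shift the cropping center (the second position point) downward from the attention center point along the vertical split line of the cropping box by a fixed fraction of the box height, so the face sits in the upper part of the retained region. The 0.25 fraction below is an illustrative value for the preset distance, not specified by the disclosure; image coordinates grow downward, hence the plus sign.

```python
def second_position_point(first_point, crop_h, fraction=0.25):
    """Place the attention center point (first position point) a preset
    distance above the crop center: the crop center (second position point)
    is shifted downward by `fraction * crop_h` along the vertical split line."""
    x, y = first_point
    return x, y + fraction * crop_h

face_nose = (120.0, 80.0)        # hypothetical first position point
print(second_position_point(face_nose, crop_h=200))  # (120.0, 130.0)
```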
In step S208, when the area proportion of the target region in the target image is greater than or equal to the set threshold, the target image is cropped according to the second position point and the cropping range information, and the process ends.
When the area proportion of the target region in the target image is greater than or equal to the set threshold, the target region in the target image is determined to be relatively large; the target image needs no further processing and only needs to be cropped according to the second position point and the cropping range information.
The size indicated by the cropping range information may be fixed; this size is the size of the displayed cropping box, which may be larger than the size of the target region, so that, at the cropping box size, the cropping box encloses the complete target region. In this case, the cropping box may also enclose other regions around the target region, so that the region inside the cropping box is the largest region satisfying the cropping box size.
As shown in FIG. 3, the nose S11 of the little boy 01 is the attention center point, which is the cropping center of the cropping box. The area proportion of the region S1 where the face of the little boy 01 is located (that is, the target region) in the target image is 1/30 and the set threshold is 1/50; when the area proportion of the target region S1 in the target image is determined to be greater than the set threshold, the display position of the cropping box is determined from the second position point and the cropping range information. The displayed cropping box is shown as N in FIG. 3; the region inside the cropping box N includes not only the region S1 where the face of the little boy 01 is located, but also the little boy's body and part of the seesaw 03. Cropping the target image then yields the region inside the cropping box N.
In step S209, when the area proportion of the target region in the target image is smaller than the set threshold, the target image is enlarged by a target enlargement ratio, the target enlargement ratio being determined by the area proportion of the target region in the target image.
When the area proportion of the target region in the target image is smaller than the set threshold, the target region in the target image is determined to be relatively small; therefore, the target image needs to be enlarged by the target enlargement ratio. The short side of the enlarged target image will be larger than the size of the cropping box, that is, the cropping box lies entirely inside the target image.
Enlarging the target image also increases the area of the target region in it, so that the subsequently cropped target image can include more of the target region the user needs to retain.
Before the above step S209, the following step A5 may also be included:
In step A5, the target enlargement ratio is determined according to the area proportion of the target region in the target image.
Before the target image is enlarged by the target enlargement ratio, the target enlargement ratio needs to be determined according to the area proportion of the target region in the target image.
When at least one object is detected in the target image, the target enlargement ratio is determined according to the area proportion of the target object in the target image; when no object is detected in the target image, the target enlargement ratio is determined according to the area proportion of the salient region in the target image.
The area proportion of the target region in the target image is inversely related to the target enlargement ratio: the smaller the area proportion of the target region in the target image, the larger the corresponding target enlargement ratio; the larger the area proportion, the smaller the corresponding target enlargement ratio.
In an exemplary embodiment of the present disclosure, when the area proportion of the target region in the target image exceeds a preset upper limit, directly cropping the target image with the cropping box might split the target region into two parts. For example, when the target region is the region where a face is located and the area proportion of the face in the target image exceeds the preset upper limit, the cropping box might crop out only most of the face, a result that does not meet the user's needs. Therefore, a preset reduction ratio can be determined according to the area proportion of the target region in the target image, the target image is reduced by the preset reduction ratio, and the cropping range of the cropping box is then determined from the target region in the reduced target image together with the size of the cropping box, so that the complete target region can be cropped out when the cropping box is used on the reduced target image.
In step S210, centered on the second position point, the enlarged target image is cropped according to the size indicated by the cropping range information.
After the target image is enlarged, the enlarged target image is cropped centered on the second position point, according to the size indicated by the cropping range information.
The size of the region to be cropped from the target image can be determined from the size indicated by the cropping range information. Since the target region has also been enlarged, most of the region inside the displayed cropping box is the enlarged target region; that is, the target region occupies a large share of the cropping box, and only a small part outside the target region may lie inside it. The target image subsequently cropped with the cropping box can thus include more of the target region the user needs to retain, and since the target region inside the cropping box has been enlarged, the cropped target region is clearer.
Optionally, the size indicated by the cropping range information may be adjusted by a target reduction ratio, the target reduction ratio being determined by the area proportion of the target region in the target image; the target image is then cropped centered on the second position point, according to the adjusted size indicated by the cropping range information.
Correspondingly, the size indicated by the cropping range information may be variable; for example, the cropping range information may indicate a fixed aspect ratio, such as 1:1, 3:4, or 16:9. The cropping size, that is, the size of the displayed cropping box, can be adjusted according to this aspect ratio, so that the longest side of the cropping box is not larger than the shortest side of the target image (the whole cropping box thus lies within the target image) and the shortest side of the cropping box is not smaller than the shortest side of the target region (the whole target region thus lies within the cropping box). In this case, the cropping box also encloses other regions around the target region, so that the region inside the cropping box is the largest region satisfying the cropping box's aspect ratio.
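A minimal sketch of S209/S210 under stated assumptions: the enlargement ratio is computed as sqrt(threshold / proportion), one possible instantiation of the inverse relation above (any rule that grows as the proportion shrinks would fit), and nearest-neighbor resizing keeps the sketch free of dependencies beyond NumPy; a real implementation would use a properly filtered resize from PIL or OpenCV.

```python
import numpy as np

def target_enlargement_ratio(area_proportion, set_threshold):
    """Assumed inverse rule: the smaller the target's share of the image,
    the larger the ratio; sqrt because area scales with ratio**2."""
    return float(np.sqrt(set_threshold / area_proportion))

def resize_nearest(image, ratio):
    """Nearest-neighbor resize, sufficient for a sketch."""
    h, w = image.shape[:2]
    ys = (np.arange(int(h * ratio)) / ratio).astype(int).clip(0, h - 1)
    xs = (np.arange(int(w * ratio)) / ratio).astype(int).clip(0, w - 1)
    return image[ys][:, xs]

def crop_enlarged(image, second_point, area_proportion, set_threshold, crop_size):
    """S209/S210: enlarge the image, rescale the second position point,
    then take a fixed-size crop clamped inside the enlarged image."""
    ratio = max(1.0, target_enlargement_ratio(area_proportion, set_threshold))
    scaled = resize_nearest(image, ratio)
    cx, cy = second_point[0] * ratio, second_point[1] * ratio
    h, w = scaled.shape[:2]
    x0 = int(np.clip(cx - crop_size / 2, 0, w - crop_size))
    y0 = int(np.clip(cy - crop_size / 2, 0, h - crop_size))
    return scaled[y0:y0 + crop_size, x0:x0 + crop_size]

image = np.zeros((120, 160, 3), dtype=np.uint8)
crop = crop_enlarged(image, second_point=(80, 60),
                     area_proportion=1 / 120, set_threshold=1 / 50,
                     crop_size=100)
print(crop.shape)  # (100, 100, 3)
```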
In an exemplary embodiment of the present disclosure, if it is determined when performing step S202 that the target image includes multiple objects, the following step A6 may also be included after step S206 or step S207:
In step A6, at least one of the first position point and the size indicated by the cropping range information is adjusted according to the position of the detection box corresponding to the target object and the positions of the detection boxes corresponding to objects other than the target object.
When multiple objects are detected in the target image, one object is selected from them as the target object, and the first position point is determined from the detection box corresponding to the target object, that is, the cropping center of the displayed cropping box is determined from the region where the target object is located. At this point, a further adjustment is needed according to the position of the detection box corresponding to the target object in the target image and the positions of the detection boxes corresponding to the other objects: adjusting the position of the first position point, adjusting the size indicated by the cropping range information, or adjusting both, so that no object is split apart during cropping.
Optionally, the size indicated by the cropping range information may be fixed. When it is determined from the first position point and the cropping range information that cropping would split some object, the target image is reduced according to the position of the detection box corresponding to the target object and the positions of the detection boxes corresponding to the other objects, and the reduced target image is cropped based on the first position point and the cropping range information, so that no object in the reduced target image is split during cropping, that is, the region within the cropping range of the cropping box includes complete objects.
For example, the target image includes three faces, the region where the middle face is located is determined as the target region, and the first position point is the nose of the middle face. If the positions of the other two faces were ignored, cropping the target image according to the first position point and the size indicated by the cropping range information might place parts of the other two faces inside the cropped region; that is, the other two faces would be split during cropping, which does not meet the user's needs. In this case, the positions of the other two faces need to be considered so that the cropping box does not split them. If the size of the region where the three faces are located is larger than the size of the cropping box, the target image can be reduced so that the regions of all three faces lie within the cropped region and the cropping box crops out all three faces, the cropped target image thus including more objects; alternatively, the size indicated by the cropping range information can be reduced so that the middle face lies within the cropped region while the other faces do not.
Optionally, the size indicated by the cropping range information may be variable; for example, the cropping range information may indicate a fixed aspect ratio, such as 1:1, 3:4, or 16:9. The cropping size, that is, the size of the displayed cropping box, can be reduced according to this aspect ratio, and the target image is cropped based on the first position point and the adjusted size indicated by the cropping range information, so that no object in the target image is split during cropping, that is, the region within the cropping range of the cropping box includes complete objects.
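Step A6's "do not split any object" constraint can be checked directly against the detection boxes. The sketch below implements only the branch that shrinks the indicated size until no other object's box is cut by the crop window; splits_box and shrink_until_clean are hypothetical names, and the disclosure equally allows shifting the first position point or reducing the whole image instead.

```python
def splits_box(window, box):
    """True when `window` (x0, y0, x1, y1) cuts through `box` rather than
    containing it entirely or excluding it entirely."""
    wx0, wy0, wx1, wy1 = window
    bx0, by0, bx1, by1 = box
    inside = bx0 >= wx0 and by0 >= wy0 and bx1 <= wx1 and by1 <= wy1
    outside = bx1 <= wx0 or bx0 >= wx1 or by1 <= wy0 or by0 >= wy1
    return not (inside or outside)

def shrink_until_clean(center, size, other_boxes, min_size, step=0.9):
    """Shrink the square crop window around `center` until it splits none of
    `other_boxes` (or gives up at `min_size`)."""
    cx, cy = center
    while size >= min_size:
        half = size / 2
        window = (cx - half, cy - half, cx + half, cy + half)
        if not any(splits_box(window, b) for b in other_boxes):
            return window
        size *= step
    return None  # no clean window; fall back to reducing the image instead

center = (100, 100)                          # first position point, middle face
others = [(0, 80, 60, 120), (140, 80, 210, 120)]
print(shrink_until_clean(center, size=160, other_boxes=others, min_size=40))
# a smaller window that excludes both side faces instead of cutting them
```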
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects: by detecting the target region in the target image and determining, according to the target region, a first position point in the target image related to the detection manner of the target region, a point the user is likely to be interested in can be obtained; combining this first position point with the cropping range information representing the size of the region of the target image that needs to be retained, the target image is cropped automatically, so that the cropped target image includes the region the user needs to retain, the user no longer needs to crop it manually, and the cropping result better meets the user's needs. Moreover, the area proportion of the target region in the target image is taken into account: when the proportion is smaller than the set threshold, the target image is enlarged, so that the target image cropped with the cropping box can include more of the target region the user needs to retain, and the cropped target region is clearer.
FIG. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment. Referring to FIG. 4, the image processing apparatus 400 includes: a target region detection module 401, a position point determination module 402, and a target image cropping module 403.
The target region detection module 401 is configured to detect a target region in a target image;
the position point determination module 402 is configured to determine, according to the target region, a first position point in the target image, the first position point representing a position point in the target region related to the detection manner of the target region;
the target image cropping module 403 is configured to crop the target image according to the first position point and cropping range information, the cropping range information representing the size of the region of the target image that needs to be retained.
On the basis of FIG. 4, the target image cropping module 403 includes:
an area proportion detection sub-module 4031, configured to detect the area proportion of the target region in the target image;
a first determination sub-module 4032, configured to determine the first position point as the second position point when the area proportion of the target region in the target image is greater than or equal to a set threshold;
a second determination sub-module 4033, configured to determine, when the area proportion of the target region in the target image is smaller than the set threshold, a second position point in the target image satisfying a target condition, such that the first position point and the second position point are separated by a preset distance, the preset distance being greater than 0;
a target image cropping sub-module 4034, configured to crop the target image according to the second position point and the cropping range information.
In an optional implementation, the target image cropping sub-module 4034 is configured to enlarge the target image by a target enlargement ratio, the target enlargement ratio being determined by the area proportion of the target region in the target image, and to crop the enlarged target image centered on the second position point according to the size indicated by the cropping range information.
In an optional implementation, the target image cropping sub-module 4034 is configured to adjust the size indicated by the cropping range information by a target reduction ratio, the target reduction ratio being determined by the area proportion of the target region in the target image, and to crop the target image centered on the second position point according to the adjusted size indicated by the cropping range information.
In an optional implementation, the target region detection module 401 includes:
an object detection sub-module 4011, configured to perform object detection on the target image;
a target object selection sub-module 4012, configured to select, when at least one object is detected in the target image, a target object from the at least one object according to a preset condition, and determine the region enclosed by the detection box corresponding to the target object as the target region;
a salient region identification sub-module 4013, configured to identify, when no object is detected in the target image, the salient region in the target image, and determine the salient region as the target region.
In an optional implementation, the target region detection module 401 includes:
a salient region identification sub-module 4013, configured to identify the salient region in the target image, and determine the salient region as the target region.
In an optional implementation, the target region detection module 401 includes:
an object detection sub-module 4011, configured to perform object detection on the target image to obtain at least one object;
a target object selection sub-module 4012, configured to select a target object from the at least one object according to a preset condition, and determine the region enclosed by the detection box corresponding to the target object as the target region.
In an optional implementation, the target image includes multiple objects, and the target image cropping module 403 is configured to adjust at least one of the first position point and the size indicated by the cropping range information according to the position of the detection box corresponding to the target object and the positions of the detection boxes corresponding to objects other than the target object.
In an optional implementation, the position point determination module 402 is configured to perform any one of the following steps:
in response to the target region being obtained by object detection, determining the center of the target region as the first position point;
in response to the target region being obtained by object detection, determining any object feature point in the target region as the first position point;
in response to the target region being obtained by saliency detection, determining the centroid of the target region as the first position point.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
By detecting the target region in the target image and determining in the target image a first position point related to the detection manner of the target region, a point the user is likely to be interested in can be obtained; combining this first position point with the cropping range information representing the size of the region of the target image that needs to be retained, the target image is cropped automatically, so that the cropped target image includes the region the user needs to retain, the user no longer needs to crop it manually, and the cropping result better meets the user's needs.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments related to the method, and will not be elaborated here.
In an exemplary embodiment, an electronic device is also provided, including: a processor; and a memory for storing program code executable by the processor; wherein the processor is configured to execute:
detecting a target region in a target image;
determining, according to the target region, a first position point in the target image, the first position point representing a position point in the target region related to the detection manner of the target region;
cropping the target image according to the first position point and cropping range information, the cropping range information representing the size of the region of the target image that needs to be retained.
In an optional implementation, the processor is further configured to execute:
detecting the area proportion of the target region in the target image;
when the area proportion of the target region in the target image is greater than or equal to a set threshold, determining the first position point as the second position point;
when the area proportion of the target region in the target image is smaller than the set threshold, determining in the target image a second position point satisfying a target condition, such that the first position point and the second position point are separated by a preset distance, the preset distance being greater than 0;
cropping the target image according to the second position point and the cropping range information.
In an optional implementation, the processor is further configured to execute:
enlarging the target image by a target enlargement ratio, the target enlargement ratio being determined by the area proportion of the target region in the target image;
cropping the enlarged target image centered on the second position point, according to the size indicated by the cropping range information.
In an optional implementation, the processor is further configured to execute:
adjusting the size indicated by the cropping range information by a target reduction ratio, the target reduction ratio being determined by the area proportion of the target region in the target image;
cropping the target image centered on the second position point, according to the adjusted size indicated by the cropping range information.
In an optional implementation, the processor is further configured to execute:
performing object detection on the target image;
when at least one object is detected in the target image, selecting a target object from the at least one object according to a preset condition, and determining the region enclosed by the detection box corresponding to the target object as the target region;
when no object is detected in the target image, identifying the salient region in the target image, and determining the salient region as the target region.
In an optional implementation, the processor is further configured to execute:
identifying the salient region in the target image, and determining the salient region as the target region.
In an optional implementation, the processor is further configured to execute:
performing object detection on the target image to obtain at least one object;
selecting a target object from the at least one object according to a preset condition, and determining the region enclosed by the detection box corresponding to the target object as the target region.
In an optional implementation, the processor is further configured to execute:
adjusting at least one of the first position point and the size indicated by the cropping range information according to the position of the detection box corresponding to the target object and the positions of the detection boxes corresponding to objects other than the target object.
In an optional implementation, the processor is further configured to execute any one of the following steps:
in response to the target region being obtained by object detection, determining the center of the target region as the first position point;
in response to the target region being obtained by object detection, determining any object feature point in the target region as the first position point;
in response to the target region being obtained by saliency detection, determining the centroid of the target region as the first position point.
In the embodiments of the present disclosure, the electronic device may be provided as a terminal or as a server. When provided as a terminal, the operations performed by the image processing method may be implemented by the terminal. When provided as a server, the operations performed by the image processing method may be implemented by the server: the server may receive a target image sent by a terminal and process the received target image; the operations may also be implemented through interaction between the server and a terminal; or the terminal may send an image processing request to the server, the server performs the image processing and feeds the result back to the terminal, and the terminal outputs the result of the image processing.
When the electronic device is provided as a terminal, FIG. 6 is a block diagram of a terminal 600 according to an exemplary embodiment. The terminal 600 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 600 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
Generally, the terminal 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 601 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the wake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 601 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction, which is executed by the processor 601 to implement the image processing method provided by the method embodiments of the present disclosure.
In some embodiments, the terminal 600 may optionally further include: a peripheral device interface 603 and at least one peripheral device. The processor 601, the memory 602, and the peripheral device interface 603 may be connected by a bus or signal lines. Each peripheral device may be connected to the peripheral device interface 603 through a bus, a signal line, or a circuit board. Specifically, the peripheral device includes at least one of: a radio frequency circuit 604, a display screen 605, a camera assembly 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral device interface 603 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 601 and the memory 602. In some embodiments, the processor 601, the memory 602, and the peripheral device interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral device interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 604 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 604 communicates with communication networks and other communication devices through electromagnetic signals. The radio frequency circuit 604 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 604 can communicate with other terminals through at least one wireless communication protocol. The wireless communication protocol includes but is not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 604 may also include NFC (Near Field Communication) related circuits, which is not limited in the present disclosure.
The display screen 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, videos, and any combination thereof. When the display screen 605 is a touch display screen, it also has the ability to collect touch signals on or above its surface. A touch signal may be input to the processor 601 as a control signal for processing. In this case, the display screen 605 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 605, provided on the front panel of the terminal 600; in other embodiments, there may be at least two display screens 605, respectively provided on different surfaces of the terminal 600 or in a folding design; in still other embodiments, the display screen 605 may be a flexible display screen, disposed on a curved or folding surface of the terminal 600. The display screen 605 can even be set as a non-rectangular irregular shape, that is, a special-shaped screen. The display screen 605 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to capture images or videos. Optionally, the camera assembly 606 includes a front camera and a rear camera. Generally, the front camera is provided on the front panel of the terminal, and the rear camera is provided on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to fuse the main camera and the depth-of-field camera to realize the background blur function, fuse the main camera and the wide-angle camera to realize panoramic shooting and VR (Virtual Reality) shooting functions, or realize other fused shooting functions. In some embodiments, the camera assembly 606 may also include a flash. The flash may be a single color temperature flash or a dual color temperature flash. A dual color temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 607 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 601 for processing, or input them to the radio frequency circuit 604 to implement voice communication. For stereo collection or noise reduction, there may be multiple microphones, respectively provided at different parts of the terminal 600. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 607 may also include a headphone jack.
The positioning component 608 is used to locate the current geographic position of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 609 is used to supply power to the components in the terminal 600. The power supply 609 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal 600 further includes one or more sensors 610, including but not limited to: an acceleration sensor 611, a gyroscope sensor 612, a pressure sensor 613, a fingerprint sensor 614, an optical sensor 615, and a proximity sensor 616.
The acceleration sensor 611 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 600. For example, the acceleration sensor 611 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 601 can control the display screen 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used to collect game or user motion data.
The gyroscope sensor 612 can detect the body direction and rotation angle of the terminal 600, and can cooperate with the acceleration sensor 611 to collect the user's 3D actions on the terminal 600. According to the data collected by the gyroscope sensor 612, the processor 601 can implement the following functions: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 613 may be disposed on the side frame of the terminal 600 and/or the lower layer of the display screen 605. When the pressure sensor 613 is disposed on the side frame of the terminal 600, it can detect the user's holding signal on the terminal 600, and the processor 601 performs left/right hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed on the lower layer of the display screen 605, the processor 601 controls the operability controls on the UI according to the user's pressure operation on the display screen 605. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used to collect the user's fingerprint; the processor 601 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the user's identity according to the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 601 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and so on. The fingerprint sensor 614 may be provided on the front, back, or side of the terminal 600. When a physical button or a manufacturer logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or the manufacturer logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, the processor 601 may control the display brightness of the display screen 605 according to the ambient light intensity collected by the optical sensor 615: when the ambient light intensity is high, the display brightness of the display screen 605 is increased; when the ambient light intensity is low, the display brightness is decreased. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
The proximity sensor 616, also called a distance sensor, is usually provided on the front panel of the terminal 600. The proximity sensor 616 is used to collect the distance between the user and the front of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 gradually decreases, the processor 601 controls the display screen 605 to switch from the bright-screen state to the off-screen state; when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 gradually increases, the processor 601 controls the display screen 605 to switch from the off-screen state to the bright-screen state.
Those skilled in the art can understand that the structure shown in FIG. 6 does not constitute a limitation on the terminal 600, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
When the electronic device is provided as a server, FIG. 7 is a block diagram of a server 700 according to an exemplary embodiment. The server 700 may vary considerably in configuration or performance, and may include one or more processors (Central Processing Units, CPU) 701 and one or more memories 702; the storage media included in the memory 702 may be a ROM 703 and a random access memory (RAM) 704. The memory 702 stores at least one piece of program code, which is loaded and executed by the processor 701 to implement the image processing methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface 705 and an input/output interface 706 for input and output; the server 700 may also include a large-capacity storage device 707 and other components for implementing device functions, which are not described in detail here.
In an exemplary embodiment, a storage medium including program code is also provided, for example a memory including program code; the program code can be executed by a processor of an electronic device to complete the above image processing method. Optionally, the storage medium may be a non-transitory computer-readable storage medium, for example a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided; when the program code in the computer program product is executed by a processor of an electronic device, the electronic device is caused to perform the image processing method described above.
Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (28)

  1. An image processing method, comprising:
    detecting a target region in a target image;
    determining, according to the target region, a first position point in the target image, wherein the first position point represents a position point in the target region related to the detection manner of the target region;
    cropping the target image according to the first position point and cropping range information, wherein the cropping range information represents the size of a region of the target image that needs to be retained.
  2. The method according to claim 1, wherein the step of cropping the target image according to the first position point and the cropping range information comprises:
    detecting an area proportion of the target region in the target image;
    when the area proportion of the target region in the target image is greater than or equal to a set threshold, determining the first position point as a second position point, wherein the second position point represents a center position of the region that needs to be retained;
    when the area proportion of the target region in the target image is smaller than the set threshold, determining in the target image a second position point satisfying a target condition, such that the first position point and the second position point are separated by a preset distance, the preset distance being greater than 0;
    cropping the target image according to the second position point and the cropping range information.
  3. The method according to claim 2, wherein the step of cropping the target image according to the second position point and the cropping range information comprises:
    enlarging the target image by a target enlargement ratio, wherein the target enlargement ratio is determined by the area proportion of the target region in the target image;
    cropping the enlarged target image centered on the second position point, according to the size indicated by the cropping range information.
  4. The method according to claim 2, wherein the step of cropping the target image according to the second position point and the cropping range information comprises:
    adjusting the size indicated by the cropping range information by a target reduction ratio, wherein the target reduction ratio is determined by the area proportion of the target region in the target image;
    cropping the target image centered on the second position point, according to the adjusted size indicated by the cropping range information.
  5. The method according to claim 1, wherein the step of detecting the target region in the target image comprises:
    performing object detection on the target image;
    when at least one object is detected in the target image, selecting a target object from the at least one object according to a preset condition, and determining a region enclosed by a detection box corresponding to the target object as the target region;
    when no object is detected in the target image, identifying a salient region in the target image, and determining the salient region as the target region.
  6. The method according to claim 1, wherein the step of detecting the target region in the target image comprises:
    identifying a salient region in the target image, and determining the salient region as the target region.
  7. The method according to claim 1, wherein the step of detecting the target region in the target image comprises:
    performing object detection on the target image to obtain at least one object;
    selecting a target object from the at least one object according to a preset condition, and determining a region enclosed by a detection box corresponding to the target object as the target region.
  8. The method according to claim 5 or 7, wherein the target image includes multiple objects, and the step of cropping the target image according to the first position point and the cropping range information comprises:
    adjusting at least one of the first position point and the size indicated by the cropping range information according to the position of the detection box corresponding to the target object and the positions of the detection boxes corresponding to objects other than the target object.
  9. The method according to claim 1, wherein the step of determining, according to the target region, the first position point in the target image comprises any one of the following:
    in response to the target region being obtained by object detection, determining the center of the target region as the first position point;
    in response to the target region being obtained by object detection, determining any object feature point in the target region as the first position point;
    in response to the target region being obtained by saliency detection, determining the centroid of the target region as the first position point.
  10. An image processing apparatus, comprising:
    a target region detection module, configured to detect a target region in a target image;
    a position point determination module, configured to determine, according to the target region, a first position point in the target image, wherein the first position point represents a position point in the target region related to the detection manner of the target region;
    a target image cropping module, configured to crop the target image according to the first position point and cropping range information, wherein the cropping range information represents the size of a region of the target image that needs to be retained.
  11. The apparatus according to claim 10, wherein the target image cropping module comprises:
    an area proportion detection sub-module, configured to detect an area proportion of the target region in the target image;
    a first determination sub-module, configured to determine the first position point as a second position point when the area proportion of the target region in the target image is greater than or equal to a set threshold;
    a second determination sub-module, configured to determine, when the area proportion of the target region in the target image is smaller than the set threshold, a second position point in the target image satisfying a target condition, such that the first position point and the second position point are separated by a preset distance, the preset distance being greater than 0;
    a target image cropping sub-module, configured to crop the target image according to the second position point and the cropping range information.
  12. The apparatus according to claim 11, wherein the target image cropping sub-module is configured to enlarge the target image by a target enlargement ratio, the target enlargement ratio being determined by the area proportion of the target region in the target image, and to crop the enlarged target image centered on the second position point according to the size indicated by the cropping range information.
  13. The apparatus according to claim 11, wherein the target image cropping sub-module is configured to adjust the size indicated by the cropping range information by a target reduction ratio, the target reduction ratio being determined by the area proportion of the target region in the target image, and to crop the target image centered on the second position point according to the adjusted size indicated by the cropping range information.
  14. The apparatus according to claim 10, wherein the target region detection module comprises:
    an object detection sub-module, configured to perform object detection on the target image;
    a target object selection sub-module, configured to select, when at least one object is detected in the target image, a target object from the at least one object according to a preset condition, and determine a region enclosed by a detection box corresponding to the target object as the target region;
    a salient region identification sub-module, configured to identify, when no object is detected in the target image, a salient region in the target image, and determine the salient region as the target region.
  15. The apparatus according to claim 10, wherein the target region detection module comprises:
    a salient region identification sub-module, configured to identify a salient region in the target image, and determine the salient region as the target region.
  16. The apparatus according to claim 10, wherein the target region detection module comprises:
    an object detection sub-module, configured to perform object detection on the target image to obtain at least one object;
    a target object selection sub-module, configured to select a target object from the at least one object according to a preset condition, and determine a region enclosed by a detection box corresponding to the target object as the target region.
  17. The apparatus according to claim 14 or 16, wherein the target image includes multiple objects, and the target image cropping module is configured to adjust at least one of the first position point and the size indicated by the cropping range information according to the position of the detection box corresponding to the target object and the positions of the detection boxes corresponding to objects other than the target object.
  18. The apparatus according to claim 10, wherein the position point determination module is configured to perform any one of the following steps:
    in response to the target region being obtained by object detection, determining the center of the target region as the first position point;
    in response to the target region being obtained by object detection, determining any object feature point in the target region as the first position point;
    in response to the target region being obtained by saliency detection, determining the centroid of the target region as the first position point.
  19. An electronic device, comprising:
    a processor;
    a memory for storing program code executable by the processor;
    wherein the processor is configured to execute:
    detecting a target region in a target image;
    determining, according to the target region, a first position point in the target image, wherein the first position point represents a position point in the target region related to the detection manner of the target region;
    cropping the target image according to the first position point and cropping range information, wherein the cropping range information represents the size of a region of the target image that needs to be retained.
  20. The electronic device according to claim 19, wherein the processor is further configured to execute:
    detecting an area proportion of the target region in the target image;
    when the area proportion of the target region in the target image is greater than or equal to a set threshold, determining the first position point as a second position point;
    when the area proportion of the target region in the target image is smaller than the set threshold, determining in the target image a second position point satisfying a target condition, such that the first position point and the second position point are separated by a preset distance, the preset distance being greater than 0;
    cropping the target image according to the second position point and the cropping range information.
  21. The electronic device according to claim 20, wherein the processor is further configured to execute:
    enlarging the target image by a target enlargement ratio, wherein the target enlargement ratio is determined by the area proportion of the target region in the target image;
    cropping the enlarged target image centered on the second position point, according to the size indicated by the cropping range information.
  22. The electronic device according to claim 20, wherein the processor is further configured to execute:
    adjusting the size indicated by the cropping range information by a target reduction ratio, wherein the target reduction ratio is determined by the area proportion of the target region in the target image;
    cropping the target image centered on the second position point, according to the adjusted size indicated by the cropping range information.
  23. The electronic device according to claim 19, wherein the processor is further configured to execute:
    performing object detection on the target image;
    when at least one object is detected in the target image, selecting a target object from the at least one object according to a preset condition, and determining a region enclosed by a detection box corresponding to the target object as the target region;
    when no object is detected in the target image, identifying a salient region in the target image, and determining the salient region as the target region.
  24. The electronic device according to claim 19, wherein the processor is further configured to execute:
    identifying a salient region in the target image, and determining the salient region as the target region.
  25. The electronic device according to claim 19, wherein the processor is further configured to execute:
    performing object detection on the target image to obtain at least one object;
    selecting a target object from the at least one object according to a preset condition, and determining a region enclosed by a detection box corresponding to the target object as the target region.
  26. The electronic device according to claim 23 or 25, wherein the processor is further configured to execute:
    adjusting at least one of the first position point and the size indicated by the cropping range information according to the position of the detection box corresponding to the target object and the positions of the detection boxes corresponding to objects other than the target object.
  27. The electronic device according to claim 19, wherein the processor is further configured to execute any one of the following steps:
    in response to the target region being obtained by object detection, determining the center of the target region as the first position point;
    in response to the target region being obtained by object detection, determining any object feature point in the target region as the first position point;
    in response to the target region being obtained by saliency detection, determining the centroid of the target region as the first position point.
  28. A storage medium, wherein, when at least one piece of program code in the storage medium is executed by a processor of an electronic device, the electronic device is caused to perform the image processing method according to any one of claims 1 to 9.
PCT/CN2020/101341 2019-07-12 2020-07-10 Image processing method and apparatus, electronic device, and storage medium WO2021008456A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910632034.5 2019-07-12
CN201910632034 2019-07-12
CN201910843971.5 2019-09-06
CN201910843971.5A CN110706150A (zh) 2019-07-12 2019-09-06 Image processing method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021008456A1 true WO2021008456A1 (zh) 2021-01-21

Family

ID=69194637

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/101341 WO2021008456A1 (zh) Image processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN110706150A (zh)
WO (1) WO2021008456A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378617A (zh) * 2021-03-05 2021-09-10 睿魔智能科技(深圳)有限公司 Image processing and stand-up/sit-down behavior recognition method and system
CN114580631A (zh) * 2022-03-04 2022-06-03 北京百度网讯科技有限公司 Model training method, smoke and fire detection method, apparatus, electronic device, and medium
CN114972369A (zh) * 2021-02-26 2022-08-30 北京小米移动软件有限公司 Image processing method and apparatus, and storage medium

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706150A (zh) * 2019-07-12 2020-01-17 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN111275726B (zh) 2020-02-24 2021-02-05 北京字节跳动网络技术有限公司 图像裁剪方法、装置、设备及存储介质
CN111327841A (zh) * 2020-02-25 2020-06-23 四川新视创伟超高清科技有限公司 基于x86架构的超高清视频切画方法及其系统
CN111461969B (zh) * 2020-04-01 2023-04-07 抖音视界有限公司 用于处理图片的方法、装置、电子设备和计算机可读介质
CN111489286B (zh) * 2020-04-01 2023-04-25 抖音视界有限公司 图片处理方法、装置、设备和介质
CN111462221A (zh) * 2020-04-03 2020-07-28 深圳前海微众银行股份有限公司 待侦测物体阴影面积提取方法、装置、设备及存储介质
CN112132836A (zh) * 2020-08-14 2020-12-25 咪咕文化科技有限公司 视频图像裁剪方法、装置、电子设备及存储介质
CN112700454B (zh) * 2020-12-28 2024-05-14 北京达佳互联信息技术有限公司 图像裁剪方法、装置、电子设备及存储介质
CN112954195A (zh) * 2021-01-27 2021-06-11 维沃移动通信有限公司 对焦方法、装置、电子设备及介质
CN112949401B (zh) * 2021-02-01 2024-03-26 浙江大华技术股份有限公司 图像分析方法、装置、设备及计算机存储介质
CN112927241A (zh) * 2021-03-08 2021-06-08 携程旅游网络技术(上海)有限公司 图片截取和缩略图生成方法、系统、设备及储存介质
CN113570626B (zh) * 2021-09-27 2022-01-07 腾讯科技(深圳)有限公司 图像裁剪方法、装置、计算机设备及存储介质
CN114170667A (zh) * 2021-12-15 2022-03-11 深圳市酷开软件技术有限公司 一种海报设计元素确定方法、装置、设备和存储介质
CN114067370B (zh) * 2022-01-17 2022-06-21 北京新氧科技有限公司 一种脖子遮挡检测方法、装置、电子设备及存储介质
CN114742791A (zh) * 2022-04-02 2022-07-12 深圳市国电科技通信有限公司 印刷电路板组装的辅助缺陷检测方法、装置及计算机设备
CN115146805A (zh) * 2022-05-19 2022-10-04 新瑞鹏宠物医疗集团有限公司 基于宠物鼻纹的宠物游乐园入园的方法以及相关装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101132480A (zh) * 2006-07-25 2008-02-27 富士胶片株式会社 Image trimming apparatus
US20140306999A1 (en) * 2013-04-11 2014-10-16 Samsung Electronics Co., Ltd. Objects in screen images
CN107545576A (zh) * 2017-07-31 2018-01-05 华南农业大学 Image editing method based on composition rules
CN108776970A (zh) * 2018-06-12 2018-11-09 北京字节跳动网络技术有限公司 Image processing method and apparatus
CN110136142A (zh) * 2019-04-26 2019-08-16 微梦创科网络科技(中国)有限公司 Image cropping method and apparatus, and electronic device
CN110706150A (zh) * 2019-07-12 2020-01-17 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4837602B2 (ja) * 2007-03-12 2011-12-14 富士フイルム株式会社 Image trimming apparatus, method, and program
US8660351B2 (en) * 2011-10-24 2014-02-25 Hewlett-Packard Development Company, L.P. Auto-cropping images using saliency maps
CN103914689B (zh) * 2014-04-09 2017-03-15 百度在线网络技术(北京)有限公司 Picture cropping method and apparatus based on face recognition
CN104361329B (zh) * 2014-11-25 2018-08-24 成都品果科技有限公司 Photo cropping method and system based on face recognition
CN105989572B (zh) * 2015-02-10 2020-04-24 腾讯科技(深圳)有限公司 Picture processing method and apparatus
CN107610131B (zh) * 2017-08-25 2020-05-12 百度在线网络技术(北京)有限公司 Image cropping method and image cropping apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972369A (zh) * 2021-02-26 2022-08-30 北京小米移动软件有限公司 Image processing method and apparatus, and storage medium
CN113378617A (zh) * 2021-03-05 2021-09-10 睿魔智能科技(深圳)有限公司 Image processing and stand-up/sit-down behavior recognition method and system
CN114580631A (zh) * 2022-03-04 2022-06-03 北京百度网讯科技有限公司 Model training method, smoke and fire detection method, apparatus, electronic device, and medium
CN114580631B (zh) * 2022-03-04 2023-09-08 北京百度网讯科技有限公司 Model training method, smoke and fire detection method, apparatus, electronic device, and medium

Also Published As

Publication number Publication date
CN110706150A (zh) 2020-01-17

Similar Documents

Publication Publication Date Title
WO2021008456A1 (zh) Image processing method and apparatus, electronic device, and storage medium
CN110502954B (zh) Video analysis method and apparatus
WO2020221012A1 (zh) Method for determining motion information of image feature points, task execution method, and device
WO2020019873A1 (zh) Image processing method and apparatus, terminal, and computer-readable storage medium
CN109829864B (zh) Image processing method, apparatus, device, and storage medium
CN109859102B (zh) Special effect display method and apparatus, terminal, and storage medium
WO2022134632A1 (zh) Work processing method and apparatus
CN109302632B (zh) Method and apparatus for acquiring live-streaming video frames, terminal, and storage medium
CN110839128B (zh) Photographing behavior detection method and apparatus, and storage medium
WO2021114592A1 (zh) Video noise reduction method and apparatus, terminal, and storage medium
CN109285178A (zh) Image segmentation method and apparatus, and storage medium
CN111753784A (zh) Video special effect processing method and apparatus, terminal, and storage medium
US11386586B2 (en) Method and electronic device for adding virtual item
CN109886208B (zh) Object detection method and apparatus, computer device, and storage medium
CN110290426B (zh) Method and apparatus for displaying resources, device, and storage medium
WO2021238564A1 (zh) Display device and distortion parameter determination method, apparatus, system, and storage medium therefor
CN111754386A (zh) Image region masking method and apparatus, device, and storage medium
CN111083513B (zh) Live-streaming image processing method and apparatus, terminal, and computer-readable storage medium
CN110807769B (zh) Image display control method and apparatus
CN112565806A (zh) Virtual gift giving method and apparatus, computer device, and medium
CN110189348B (zh) Avatar processing method and apparatus, computer device, and storage medium
CN111586279B (zh) Method and apparatus for determining shooting state, device, and storage medium
WO2022033272A1 (zh) Image processing method and electronic device
CN110992268B (zh) Background setting method and apparatus, terminal, and storage medium
CN112381729A (zh) Image processing method and apparatus, terminal, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20841193

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20841193

Country of ref document: EP

Kind code of ref document: A1