CN109448001B - Automatic picture clipping method - Google Patents

Automatic picture clipping method

Info

Publication number
CN109448001B
CN109448001B
Authority
CN
China
Prior art keywords
picture
processed
proportion
cutting
output
Prior art date
Legal status
Active
Application number
CN201811255476.4A
Other languages
Chinese (zh)
Other versions
CN109448001A (en)
Inventor
郭志强
闫晓葳
赵振
展丽萍
王猛
Current Assignee
Century Kaiyuan Zhiyin Internet Technology Group Co ltd
Original Assignee
Century Kaiyuan Zhiyin Internet Technology Group Co ltd
Priority date
Filing date
Publication date
Application filed by Century Kaiyuan Zhiyin Internet Technology Group Co ltd filed Critical Century Kaiyuan Zhiyin Internet Technology Group Co ltd
Priority to CN201811255476.4A priority Critical patent/CN109448001B/en
Publication of CN109448001A publication Critical patent/CN109448001A/en
Application granted granted Critical
Publication of CN109448001B publication Critical patent/CN109448001B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The invention discloses an automatic picture clipping method, which comprises the following steps: calculating the picture ratio from the height and width of the picture to be processed; setting the output size of the picture and determining the output ratio from that size; determining the size of the crop box from the picture ratio and the output ratio; judging whether the picture ratio of the picture to be processed matches the required output ratio; predicting the category of the picture, detecting its salient region, and extracting its saliency feature map; separating more-important and less-important regions according to the saliency feature map and obtaining the minimum bounding rectangle of the more-important region; and judging whether the minimum bounding rectangle is covered by the crop box, performing cropping or white-margin processing accordingly, and outputting the result. The invention crops pictures automatically, is suitable for outputting cropping results of any size, automatically judges whether a picture should be cropped or given white margins, and applies white-margin processing to pictures that cannot be cropped.

Description

Automatic picture clipping method
Technical Field
The invention relates to the technical field of deep learning and computer vision, in particular to an automatic picture clipping method.
Background
In the field of photo printing, when the aspect ratio of a picture differs from that of the actual print, the picture must be preprocessed. The preprocessing principle is as follows: first attempt to crop the picture to the print size, then judge whether the cropped picture preserves the integrity of the original (complete figures, landmark buildings, special characters, composition, and so on). If the integrity of the original can be preserved, the picture may be cropped; otherwise white-margin processing is required, in which the original picture is not cropped and the vacant areas of the photographic paper are filled with white pixels during printing. Cropping is preferred for all pictures; a picture is given white margins only if cropping would damage its integrity.
Manual cropping consumes a great deal of labor, and its cost rises with the number of pictures to be printed. Because picture cropping is a highly subjective task and hand-written rules can hardly account for every influencing factor, an effective automatic cropping method both reduces manual work and increases cropping speed.
Existing automatic picture cropping methods fall mainly into three classes. The first is automatic cropping based on picture recognition (CN104392202A), which first performs face recognition, falls back to background recognition if the picture contains no face, and then locates the main subject of the picture that must be preserved. The second (CN106650737A) extracts an aesthetic response map and a gradient energy map of the image to be cropped, densely samples candidate crops, screens the candidates with the aesthetic response map, scores the composition of the remaining candidates using both maps, and selects the highest-scoring candidate as the result. The third is automatic cropping based on reinforcement learning (CN108154464A): a reinforcement learning model extracts local features from the current cropping window, concatenates them with global features of the picture into a new feature vector used as the current observation, combines historical observations with the current observation into the current state representation, and applies cropping actions to the picture sequentially according to a cropping policy and that state representation.
These three methods suit only their own application domains, not photo printing. Pictures to be printed have complex and varied content: a picture containing a face may show only part of it, or the head may carry hair ornaments or a hat; text, dates, human gestures, landmark buildings, and the like may also appear, and none of these regions may be cut away during printing, so face detection or background recognition alone cannot determine the correct crop position. The second and third methods place no constraint on the size of the final crop box, which can be adjusted freely according to the detection result, and so do not meet the cropping requirements of the photo-developing field.
Disclosure of Invention
The invention aims to solve the above problems by providing an automatic picture clipping method that crops pictures automatically, is suitable for outputting cropping results of any size, automatically judges whether a picture should be cropped or given white margins, and applies white-margin processing to pictures that cannot be cropped.
To achieve this purpose, the invention adopts the following technical scheme:
An automatic picture clipping method comprises the following specific steps:
calculating the picture ratio from the height and width of the picture to be processed;
setting the output size of the picture to be processed and determining the output ratio from that size;
determining the size of the crop box from the picture ratio and the output ratio;
judging whether the picture ratio of the picture to be processed matches the required output ratio; if not, reducing or enlarging the picture to be processed, and if so, performing no operation;
predicting the category of the picture to be processed with the trained picture classification model;
detecting the salient region of the picture to be processed with the trained saliency prediction model according to its category, and extracting its saliency feature map;
separating more-important and less-important regions according to the saliency feature map of the picture to be processed and obtaining the minimum bounding rectangle of the more-important region;
judging whether the minimum bounding rectangle is covered by the crop box and performing cropping or white-margin processing according to the result; if cropping is performed, scanning the saliency feature map with the crop box to fine-tune the position of the crop box within the picture;
and outputting the cropping result.
The specific method for judging whether the picture ratio of the picture to be processed matches the required output ratio is as follows:
set a fluctuation range around the output ratio and judge whether the picture ratio of the picture to be processed falls within it; if it does not, reduce or enlarge the picture to be processed according to its picture ratio, uniformly scaling its longer side to a fixed size; otherwise perform no scaling.
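By way of illustration only, the following Python sketch shows one possible implementation of this ratio check and pre-scaling, together with one reading of the crop-box-size step above; the tolerance value, the fixed long-side length, and all function names are assumptions of the example, not values specified by the invention.

```python
# Illustrative sketch only; tolerance and long_side are assumed values.
import cv2

def crop_box_size(w, h, out_w, out_h):
    """Largest box with the output ratio that fits a w x h picture
    (one reading of 'determining the crop box size from the picture
    ratio and the output ratio')."""
    target = out_w / out_h
    if w / h >= target:                  # picture wider than the output ratio
        return int(round(h * target)), h
    return w, int(round(w / target))

def prescale(img, out_w, out_h, tolerance=0.05, long_side=1000):
    """If the picture ratio falls outside the fluctuation range of the
    output ratio, scale the picture so its longer side has a fixed size."""
    h, w = img.shape[:2]
    picture_ratio = w / h                # picture proportion
    output_ratio = out_w / out_h         # output proportion
    if abs(picture_ratio - output_ratio) > tolerance:
        s = long_side / max(w, h)
        img = cv2.resize(img, (int(w * s), int(h * s)))
    return img
```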
The specific method for separating more-important and less-important regions according to the saliency feature map of the picture to be processed is as follows:
set an image binarization threshold; regions above the set threshold are more important, and the remaining regions are less important.
The principle for judging whether the minimum bounding rectangle is covered by the crop box and performing cropping or white-margin processing is as follows:
if the minimum bounding rectangle is not covered by the crop box, the picture to be processed is not cropped; instead, by comparing the picture ratio with the output (printing) ratio, white areas are filled in on the left and right or the top and bottom of the picture to reach the output size (the white-margin operation); if the minimum bounding rectangle is covered by the crop box, cropping is performed.
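A minimal sketch of the white-margin operation under the same assumptions: white pixels are padded onto the left/right or top/bottom of the picture until it matches the output ratio. OpenCV's copyMakeBorder performs the fill; the function name is illustrative.

```python
import cv2

def pad_to_output_ratio(img, out_w, out_h):
    """White-margin operation: pad left/right or top/bottom with white
    pixels so the picture reaches the output ratio without cropping."""
    h, w = img.shape[:2]
    target = out_w / out_h
    if w / h < target:                   # too narrow: pad left and right
        pad = int(round(h * target)) - w
        t, b, l, r = 0, 0, pad // 2, pad - pad // 2
    else:                                # too wide: pad top and bottom
        pad = int(round(w / target)) - h
        t, b, l, r = pad // 2, pad - pad // 2, 0, 0
    return cv2.copyMakeBorder(img, t, b, l, r,
                              cv2.BORDER_CONSTANT, value=(255, 255, 255))
```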
When cropping is performed, the method for fine-tuning the position of the crop box within the picture is as follows:
scanning the saliency feature map of the picture to be processed with the crop box;
at every scanned position, keeping the more-important region fully contained in the crop box;
calculating the saliency Score of the crop box at each position using formula (1):
$$\mathrm{Score} = \sum_{i=w_1}^{w_2} \sum_{j=h_1}^{h_2} S(i, j) \qquad (1)$$
where, under the constraint that the crop box contains the more-important region, w1 and w2 denote the horizontal extent of the crop box, h1 and h2 its vertical extent, i and j the horizontal and vertical pixel coordinates of the picture to be processed, and S(i, j) the value of the saliency feature map at pixel (i, j);
and determining the position with the highest Score as the final position of the crop box.
The beneficial effects of the invention are:
the method crops pictures without manual intervention and is suitable for outputting cropping results of any size;
it automatically judges whether a picture should be cropped or given white margins, and applies white-margin processing to pictures that cannot be cropped;
it adopts different cropping strategies for picture contents of different styles, meeting varied cropping requirements;
it effectively combines a deep-learning picture classification model with a saliency detection model, solving the difficulty of formulating uniform cropping rules for pictures with rich content.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a modeling flow diagram of the present invention;
FIG. 3 is a schematic diagram of cropping a picture to be processed;
FIG. 4 is a schematic diagram of white-margin processing of a picture to be processed;
FIG. 5 is original picture (1) in the embodiment;
FIG. 6 is the saliency feature map of original picture (1);
FIG. 7 is original picture (2) in the embodiment;
FIG. 8 is the saliency feature map of original picture (2);
FIG. 9 is original picture (3) in the embodiment;
FIG. 10 is the saliency feature map of original picture (3).
Detailed Description
The invention is further described with reference to the following figures and examples.
Picture classification distinguishes targets of different categories according to the different features reflected in image information. Saliency detection uses an algorithm to simulate human visual characteristics and extract the salient regions of an image: when facing a scene, humans automatically process regions of interest and selectively ignore the rest, and these regions of interest are called salient regions. Both models are built with supervised learning: a training data set is collected and labeled for each task, a network structure is designed to learn model parameters from the prepared training set, and the model then predicts a result when new data arrives.
As shown in fig. 2, the specific method for constructing the picture classification model is:
S1, selecting picture data, manually screening and classifying it to construct a data set, and manually determining the number of categories and the picture types contained in each category;
S2, constructing the picture classification model with the VGG16 model as the basic network framework; the VGG16 network is a classical classification network well known in the art;
S3, training the constructed picture classification model with the constructed data set and calculating the probability of each category with the softmax function, which can be expressed as:
$$p(i) = \frac{e^{z_i}}{\sum_{k=1}^{K} e^{z_k}}$$
where z_i denotes the output of the i-th neuron of the last layer of the picture classification model, K is the number of prediction categories, p(i) denotes the probability that the picture to be processed is predicted as the i-th category, and z_k denotes the output of the k-th neuron of the last layer of the picture classification model;
and S4, when the softmax loss of the picture classification model falls to a set threshold, stopping training and outputting the picture classification model.
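For concreteness, a minimal sketch of the softmax computation of step S3; the logits and the assumed K = 4 categories are illustrative only.

```python
import numpy as np

def softmax(z):
    """p(i) = e^{z_i} / sum_k e^{z_k}, computed in a numerically stable way."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Example: last-layer outputs for K = 4 assumed picture categories.
logits = np.array([2.1, 0.3, -1.0, 0.8])
probs = softmax(logits)                   # probability of each category
predicted_category = int(np.argmax(probs))
```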
The specific method for constructing the saliency prediction model is:
S1, selecting picture data, manually screening and classifying it to construct a data set, and manually defining the salient region of each picture according to the category it belongs to;
S2, manually labeling the pictures at the pixel level according to their categories;
S3, constructing the saliency prediction model with the VGG16 model as the basic network framework; the VGG16 network is a classical classification network well known in the art;
S4, training the designed saliency detection network with the constructed data set and determining the salient region of a picture with the softmax function, which can be expressed as:
$$p(i) = \frac{e^{z_i}}{\sum_{k=1}^{K} e^{z_k}}$$
where z_i denotes the output of the neuron at the i-th position of the last layer of the saliency prediction model, K is the number of image pixels, and p(i) denotes the probability that the pixel at the i-th position of the picture to be processed is predicted as part of the salient region;
and S5, when the softmax loss of the saliency prediction model falls to a set threshold, stopping training and outputting the saliency prediction model.
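The patent specifies only that VGG16 serves as the base network; as one hedged reading, a per-pixel saliency predictor could be assembled as below, where the 1x1 prediction head and the bilinear upsampling are assumptions of this sketch, not details of the invention.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SaliencyNet(nn.Module):
    """Sketch of a VGG16-based saliency predictor: the VGG16 convolutional
    stack extracts features, a 1x1 convolution scores each location as
    salient or not, and the map is upsampled to the input size."""
    def __init__(self):
        super().__init__()
        self.backbone = models.vgg16().features       # conv layers only
        self.head = nn.Conv2d(512, 2, kernel_size=1)  # salient vs. not (assumed head)

    def forward(self, x):
        h, w = x.shape[2:]
        logits = self.head(self.backbone(x))          # (N, 2, h/32, w/32)
        logits = F.interpolate(logits, size=(h, w), mode='bilinear',
                               align_corners=False)
        return logits.softmax(dim=1)[:, 1]            # per-pixel salient probability
```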
As shown in fig. 1, the automatic picture clipping method specifically comprises the steps of:
calculating the picture ratio from the height and width of the picture to be processed;
setting the output size of the picture to be processed and determining the output ratio from that size;
determining the size of the crop box from the picture ratio and the output ratio;
judging whether the picture ratio of the picture to be processed matches the required output ratio: setting a fluctuation range around the output ratio and judging whether the picture ratio falls within it; if it does not, reducing or enlarging the picture to be processed according to its picture ratio, uniformly scaling its longer side to a fixed size; otherwise performing no scaling;
predicting the category of the picture to be processed with the trained picture classification model;
detecting the salient region of the picture to be processed with the trained saliency prediction model according to its category, and extracting its saliency feature map, as shown in figs. 5 to 10;
separating more-important and less-important regions according to the saliency feature map of the picture to be processed: using image binarization, a threshold is set; regions above it are more important and the rest less important; the minimum bounding rectangle of the more-important region is then obtained (a code sketch of this step follows the step list below). The minimum bounding rectangle expresses the maximum extent of a two-dimensional shape: the rectangle whose boundaries are determined by the maximum abscissa, minimum abscissa, maximum ordinate and minimum ordinate among the vertices of the given shape; setting the binarization threshold by an image binarization method is prior art;
judging whether the minimum bounding rectangle is covered by the crop box and performing cropping or white-margin processing according to the result: if cropping is performed, the saliency feature map is scanned with the crop box to fine-tune the position of the crop box within the picture; if the minimum bounding rectangle is not covered by the crop box, the picture to be processed is not cropped, and, by comparing the picture ratio with the output (printing) ratio, white areas are filled in on the left and right or the top and bottom of the picture to reach the output size (the white-margin operation); if the minimum bounding rectangle is covered by the crop box, cropping is performed, as shown in figs. 3 and 4;
and outputting the cropping result.
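Referring to the binarization and minimum-bounding-rectangle step above, the following sketch shows the threshold-and-bounding-rectangle computation; the threshold value is an assumed example.

```python
import cv2
import numpy as np

def important_region_bbox(saliency_map, threshold=0.5):
    """Binarize the saliency feature map (values above the set threshold
    form the more-important region) and return the minimum bounding
    rectangle (x, y, w, h) of that region. threshold is illustrative."""
    mask = (saliency_map > threshold).astype(np.uint8)
    return cv2.boundingRect(mask)         # tight box around nonzero pixels
```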
When cropping is performed, the method for fine-tuning the position of the crop box within the picture is as follows:
scanning the saliency feature map of the picture to be processed with the crop box;
at every scanned position, keeping the more-important region fully contained in the crop box;
calculating the saliency Score of the crop box at each position using formula (1):
$$\mathrm{Score} = \sum_{i=w_1}^{w_2} \sum_{j=h_1}^{h_2} S(i, j) \qquad (1)$$
where, under the constraint that the crop box contains the more-important region, w1 and w2 denote the horizontal extent of the crop box, h1 and h2 its vertical extent, i and j the horizontal and vertical pixel coordinates of the picture to be processed, and S(i, j) the value of the saliency feature map at pixel (i, j);
and determining the position with the highest Score as the final position of the crop box.
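A minimal sketch of this scan, under the reading of formula (1) given above: the crop box is slid over its admissible positions, the saliency inside the box is summed at each position, and the highest-scoring position is kept. The integral image is an implementation shortcut of this sketch, and the mapping of w1, w2, h1, h2 to admissible top-left corners is an assumption.

```python
import numpy as np

def best_crop_position(saliency, crop_w, crop_h, w1, w2, h1, h2):
    """Scan the crop box over top-left positions x in [w1, w2] and
    y in [h1, h2] (assumed to be the positions that keep the
    more-important region inside the box; the caller must also ensure
    the box stays inside the map) and return the highest-Score position."""
    ii = np.pad(saliency.astype(np.float64), ((1, 0), (1, 0)))
    ii = ii.cumsum(axis=0).cumsum(axis=1)           # integral image
    best_score, best_xy = -np.inf, (w1, h1)
    for x in range(w1, w2 + 1):
        for y in range(h1, h2 + 1):
            score = (ii[y + crop_h, x + crop_w] - ii[y, x + crop_w]
                     - ii[y + crop_h, x] + ii[y, x])
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```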
Although the embodiments of the invention have been described with reference to the accompanying drawings, they do not limit the scope of the invention. Those skilled in the art should understand that modifications and variations made, without inventive effort, on the basis of the technical solution of the invention remain within its scope.

Claims (3)

1. An automatic picture clipping method, characterized by comprising the following specific steps:
calculating the picture ratio from the height and width of the picture to be processed;
setting the output size of the picture to be processed and determining the output ratio from that size;
determining the size of the crop box from the picture ratio and the output ratio;
judging whether the picture ratio of the picture to be processed matches the required output ratio; if not, reducing or enlarging the picture to be processed, and if so, performing no operation;
predicting the category of the picture to be processed with the trained picture classification model;
detecting the salient region of the picture to be processed with the trained saliency prediction model according to its category, and extracting its saliency feature map;
separating more-important and less-important regions according to the saliency feature map of the picture to be processed and obtaining the minimum bounding rectangle of the more-important region;
judging whether the minimum bounding rectangle is covered by the crop box and performing cropping or white-margin processing according to the result; if cropping is performed, scanning the saliency feature map with the crop box to fine-tune the position of the crop box within the picture;
outputting the cropping result;
wherein the specific method for separating more-important and less-important regions according to the saliency feature map of the picture to be processed is:
setting an image binarization threshold, regions above the set threshold being more important and the remaining regions less important;
and the principle for judging whether the minimum bounding rectangle is covered by the crop box and performing cropping or white-margin processing is:
if the minimum bounding rectangle is not covered by the crop box, the picture to be processed is not cropped; instead, by comparing the picture ratio with the output (printing) ratio, white areas are filled in on the left and right or the top and bottom of the picture to reach the output size (the white-margin operation); if the minimum bounding rectangle is covered by the crop box, cropping is performed;
the specific method for constructing the picture classification model is:
S1, selecting picture data, manually screening and classifying it to construct a data set, and manually determining the number of categories and the picture types contained in each category;
S2, constructing the picture classification model with the VGG16 model as the basic network framework, the VGG16 network being a classical classification network well known in the art;
S3, training the constructed picture classification model with the constructed data set and calculating the probability of each category with the softmax function, which can be expressed as:
$$p(i) = \frac{e^{z_i}}{\sum_{k=1}^{K} e^{z_k}}$$
where z_i denotes the output of the i-th neuron of the last layer of the picture classification model, K is the number of prediction categories, p(i) denotes the probability that the picture to be processed is predicted as the i-th category, and z_k denotes the output of the k-th neuron of the last layer of the picture classification model;
S4, when the softmax loss of the picture classification model falls to a set threshold, stopping training and outputting the picture classification model;
the specific method for constructing the saliency prediction model is:
step one, selecting picture data, manually screening and classifying it to construct a data set, and manually defining the salient region of each picture according to the category it belongs to;
step two, manually labeling the pictures at the pixel level according to their categories;
step three, constructing the saliency prediction model with the VGG16 model as the basic network framework, the VGG16 network being a classical classification network well known in the art;
step four, training the designed saliency detection network with the constructed data set and determining the salient region of a picture with the softmax function, which can be expressed as:
$$p(i) = \frac{e^{z_i}}{\sum_{k=1}^{K} e^{z_k}}$$
where z_i denotes the output of the neuron at the i-th position of the last layer of the saliency prediction model, K is the number of image pixels, and p(i) denotes the probability that the pixel at the i-th position of the picture to be processed is predicted as part of the salient region;
and step five, when the softmax loss of the saliency prediction model falls to a set threshold, stopping training and outputting the saliency prediction model.
2. The automatic picture clipping method according to claim 1, characterized in that the specific method for judging whether the picture ratio of the picture to be processed matches the required output ratio is:
setting a fluctuation range around the output ratio and judging whether the picture ratio of the picture to be processed falls within it; if it does not, reducing or enlarging the picture to be processed according to its picture ratio, uniformly scaling its longer side to a fixed size; otherwise performing no scaling.
3. The automatic picture clipping method according to claim 1, characterized in that, when cropping is performed, the method for fine-tuning the position of the crop box within the picture is:
scanning the saliency feature map of the picture to be processed with the crop box;
at every scanned position, keeping the more-important region fully contained in the crop box;
calculating the saliency Score of the crop box at each position using formula (1):
$$\mathrm{Score} = \sum_{i=w_1}^{w_2} \sum_{j=h_1}^{h_2} S(i, j) \qquad (1)$$
where, under the constraint that the crop box contains the more-important region, w1 and w2 denote the horizontal extent of the crop box, h1 and h2 its vertical extent, i and j the horizontal and vertical pixel coordinates of the picture to be processed, and S(i, j) the value of the saliency feature map at pixel (i, j);
and determining the position with the highest Score as the final position of the crop box.
CN201811255476.4A 2018-10-26 2018-10-26 Automatic picture clipping method Active CN109448001B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811255476.4A CN109448001B (en) 2018-10-26 2018-10-26 Automatic picture clipping method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811255476.4A CN109448001B (en) 2018-10-26 2018-10-26 Automatic picture clipping method

Publications (2)

Publication Number Publication Date
CN109448001A CN109448001A (en) 2019-03-08
CN109448001B (en) 2021-08-27

Family

ID=65548462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811255476.4A Active CN109448001B (en) 2018-10-26 2018-10-26 Automatic picture clipping method

Country Status (1)

Country Link
CN (1) CN109448001B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919156B (en) * 2019-03-13 2022-07-19 网易传媒科技(北京)有限公司 Training method, medium and device of image cropping prediction model and computing equipment
CN110456960B (en) * 2019-05-09 2021-10-01 华为技术有限公司 Image processing method, device and equipment
WO2020232672A1 (en) * 2019-05-22 2020-11-26 深圳市大疆创新科技有限公司 Image cropping method and apparatus, and photographing apparatus
CN110782392B (en) * 2019-07-12 2023-11-14 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN110580678B (en) * 2019-09-10 2023-06-20 北京百度网讯科技有限公司 Image processing method and device
CN110708606A (en) * 2019-09-29 2020-01-17 新华智云科技有限公司 Method for intelligently editing video
CN110853068B (en) * 2019-09-30 2022-06-17 荣耀终端有限公司 Picture processing method and device, electronic equipment and readable storage medium
CN111311617A (en) * 2020-03-26 2020-06-19 北京奇艺世纪科技有限公司 Method, device and equipment for cutting dynamic graph and storage medium
CN111461968B (en) * 2020-04-01 2023-05-23 抖音视界有限公司 Picture processing method, device, electronic equipment and computer readable medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914689A (en) * 2014-04-09 2014-07-09 百度在线网络技术(北京)有限公司 Picture cropping method and device based on face recognition
CN103996186A (en) * 2014-04-29 2014-08-20 小米科技有限责任公司 Image cutting method and image cutting device
CN104063444A (en) * 2014-06-13 2014-09-24 百度在线网络技术(北京)有限公司 Method and device for generating thumbnail
CN106778757A (en) * 2016-12-12 2017-05-31 哈尔滨工业大学 Scene text detection method based on text conspicuousness
CN107240105A (en) * 2017-06-05 2017-10-10 深圳市茁壮网络股份有限公司 A kind of image cropping method and device
CN108108669A (en) * 2017-12-01 2018-06-01 中国科学院重庆绿色智能技术研究院 A kind of facial characteristics analytic method based on notable subregion

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11172005B2 (en) * 2016-09-09 2021-11-09 Nokia Technologies Oy Method and apparatus for controlled observation point and orientation selection audiovisual content
CN108154515A (en) * 2017-12-27 2018-06-12 三星电子(中国)研发中心 Picture shows method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Salient Object Detection for Searched Web Images via Global Saliency; Peng Wang et al.; IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 2012-06-30; 3194-3201 *
Image scaling algorithm based on DCT-domain visual saliency detection (基于DCT域视觉显著性检测的图像缩放算法); 罗雅丹 et al.; Application Research of Computers (《计算机应用研究》); 2015-09-06; vol. 33, no. 1; 296-299+320 *

Also Published As

Publication number Publication date
CN109448001A (en) 2019-03-08

Similar Documents

Publication Publication Date Title
CN109448001B (en) Automatic picture clipping method
CN108960245B (en) Tire mold character detection and recognition method, device, equipment and storage medium
CN111640125B (en) Aerial photography graph building detection and segmentation method and device based on Mask R-CNN
CN104050471B (en) Natural scene character detection method and system
EP3819859B1 (en) Sky filter method for panoramic images and portable terminal
CN103049763B (en) Context-constraint-based target identification method
CN111914698B (en) Human body segmentation method, segmentation system, electronic equipment and storage medium in image
WO2022012110A1 (en) Method and system for recognizing cells in embryo light microscope image, and device and storage medium
CN113673338B (en) Automatic labeling method, system and medium for weak supervision of natural scene text image character pixels
CN108181316B (en) Bamboo strip defect detection method based on machine vision
CN109840483B (en) Landslide crack detection and identification method and device
CN110533583B (en) Self-adaptive image augmentation system based on cervical fluid-based cells
CN110472628B (en) Improved Faster R-CNN network floater detection method based on video characteristics
CN107945200A (en) Image binaryzation dividing method
CN110598698B (en) Natural scene text detection method and system based on adaptive regional suggestion network
CN103198479A (en) SAR image segmentation method based on semantic information classification
CN111696079A (en) Surface defect detection method based on multi-task learning
CN111540203B (en) Method for adjusting green light passing time based on fast-RCNN
CN111612802A (en) Re-optimization training method based on existing image semantic segmentation model and application
CN111414855A (en) Telegraph pole sign target detection and identification method based on end-to-end regression model
CN115588208A (en) Full-line table structure identification method based on digital image processing technology
CN113807173A (en) Construction and labeling method and application system of lane line data set
CN111627033B (en) Method, equipment and computer readable storage medium for dividing difficult sample instance
CN113887381A (en) Lightweight satellite cloud chart neural network training method and rainfall detection method
CN112926694A (en) Method for automatically identifying pigs in image based on improved neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Floor 23, building 1, Hisense Chuangzhi Valley, 2116 Fenghuang Road, hi tech Zone, Jinan City, Shandong Province
Applicant after: SHANDONG SHIJI KAIYUAN ELECTRONIC COMMERCE GROUP Co.,Ltd.
Address before: 250101 Shandong Province Jinan High-tech Zone Tianchen Road 1251 Building 1 Floor
Applicant before: SHANDONG SHIJI KAIYUAN ELECTRONIC COMMERCE GROUP Co.,Ltd.

CB02 Change of applicant information

Address after: 250101 23 / F, building 1, Hisense Chuangzhi Valley, 2116 Fenghuang Road, high tech Zone, Jinan City, Shandong Province
Applicant after: Century Kaiyuan Zhiyin Internet Technology Group Co.,Ltd.
Address before: 250101 23 / F, building 1, Hisense Chuangzhi Valley, 2116 Fenghuang Road, high tech Zone, Jinan City, Shandong Province
Applicant before: SHANDONG SHIJI KAIYUAN ELECTRONIC COMMERCE GROUP Co.,Ltd.

GR01 Patent grant