CN108764371A - Image processing method, device, computer readable storage medium and electronic equipment - Google Patents
- Publication number
- CN108764371A CN108764371A CN201810587034.3A CN201810587034A CN108764371A CN 108764371 A CN108764371 A CN 108764371A CN 201810587034 A CN201810587034 A CN 201810587034A CN 108764371 A CN108764371 A CN 108764371A
- Authority
- CN
- China
- Prior art keywords
- image
- target
- classification label
- image to be processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
Abstract
The present application relates to an image processing method, an apparatus, a computer-readable storage medium, and an electronic device. The method includes: obtaining an image set comprising at least one image to be processed; traversing the images to be processed in the image set, and recognizing each image to be processed to obtain an image classification label; counting the number of images to be processed corresponding to each image classification label, and obtaining a target image classification label from the image classification labels according to the image counts; and obtaining a target user attribute according to the target image classification label. The above image processing method, apparatus, computer-readable storage medium, and electronic device can improve the accuracy of image processing.
Description
Technical field
The present application relates to the field of computer technology, and in particular to an image processing method, an apparatus, a computer-readable storage medium, and an electronic device.
Background art
A smart device can capture images through a camera, and can also obtain images through transmission from other smart devices. Images may be shot in many different scenes, such as a beach, a snow scene, or a night scene, and a captured image may contain many target objects, such as cars, people, or animals. In general, images shot in different scenes have different color features, and different target objects exhibit different appearance features.
Summary of the invention
Embodiments of the present application provide an image processing method, an apparatus, a computer-readable storage medium, and an electronic device, which can improve the accuracy of image processing.
An image processing method, the method including:
Obtaining an image set comprising at least one image to be processed;
Traversing the images to be processed in the image set, and recognizing each image to be processed to obtain an image classification label;
Counting the number of images to be processed corresponding to each image classification label, and obtaining a target image classification label from the image classification labels according to the image counts;
Obtaining a target user attribute according to the target image classification label.
An image processing apparatus, the apparatus including:
An image collection module, configured to obtain an image set comprising at least one image to be processed;
A scene recognition module, configured to traverse the images to be processed in the image set and recognize each image to be processed to obtain an image classification label;
A quantity statistics module, configured to count the number of images to be processed corresponding to each image classification label, and obtain a target image classification label from the image classification labels according to the image counts;
An attribute acquisition module, configured to obtain a target user attribute according to the target image classification label.
A computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the following steps:
Obtaining an image set comprising at least one image to be processed;
Traversing the images to be processed in the image set, and recognizing each image to be processed to obtain an image classification label;
Counting the number of images to be processed corresponding to each image classification label, and obtaining a target image classification label from the image classification labels according to the image counts;
Obtaining a target user attribute according to the target image classification label.
An electronic device including a memory and a processor, where the memory stores computer-readable instructions which, when executed by the processor, cause the processor to execute the following steps:
Obtaining an image set comprising at least one image to be processed;
Traversing the images to be processed in the image set, and recognizing each image to be processed to obtain an image classification label;
Counting the number of images to be processed corresponding to each image classification label, and obtaining a target image classification label from the image classification labels according to the image counts;
Obtaining a target user attribute according to the target image classification label.
With the above image processing method, apparatus, computer-readable storage medium, and electronic device, the images to be processed can be recognized to obtain image classification labels that reflect the shooting scenes of the images. A target image classification label is obtained according to the number of images to be processed corresponding to each image classification label, and related attributes of the user can be predicted according to the target image classification label. Because the target image classification label allows the user's shooting habits to be inferred accurately, the related attributes of the user can be predicted, improving the accuracy of image processing.
Description of the drawings
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a diagram of the application environment of an image processing method in one embodiment;
Fig. 2 is a flowchart of an image processing method in one embodiment;
Fig. 3 is a flowchart of an image processing method in another embodiment;
Fig. 4 is a schematic diagram showing the background area and foreground targets in an image to be processed in one embodiment;
Fig. 5 is a schematic diagram of models for recognizing the foreground and background of an image in one embodiment;
Fig. 6 is a schematic diagram of a model for recognizing the foreground and background of an image in another embodiment;
Fig. 7 is a schematic diagram of generating an image classification label in one embodiment;
Fig. 8 is a flowchart of an image processing method in yet another embodiment;
Fig. 9 is a flowchart of an image processing method in yet another embodiment;
Fig. 10 is a schematic structural diagram of an image processing apparatus in one embodiment;
Fig. 11 is a schematic diagram of an image processing circuit in one embodiment.
Detailed description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in the present application may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client. The first client and the second client are both clients, but they are not the same client.
Fig. 1 is a diagram of the application environment of an image processing method in one embodiment. As shown in Fig. 1, the application environment includes a terminal 102 and a server 104. Images to be processed can be transmitted between the terminal 102 and the server 104 and subjected to recognition processing. In one embodiment, the terminal 102 may store several images to be processed and then send an image set generated from those images to the server 104. The server 104 stores the algorithm model for processing the images; it traverses the images to be processed in the image set and recognizes each image through the algorithm model to obtain an image classification label, counts the number of images to be processed corresponding to each image classification label, obtains a target image classification label from the image classification labels according to the image counts, and obtains a target user attribute according to the target image classification label. Finally, the obtained target user attribute is sent to the terminal 102, and the terminal 102 can perform subsequent processing on the images to be processed according to the received target user attribute. Here, the terminal 102 is an electronic device at the outermost edge of the computer network, mainly used for inputting user information and outputting processing results, and may be, for example, a personal computer, a mobile terminal, a personal digital assistant, or a wearable electronic device. The server 104 is a device that responds to service requests and provides computing services, and may be, for example, one or more computers. In other embodiments provided by the present application, the application environment may also include only the terminal 102 or only the server 104, which is not limited herein.
Fig. 2 is a flowchart of an image processing method in one embodiment. As shown in Fig. 2, the image processing method includes steps 202 to 208. Specifically:
Step 202: obtain an image set comprising at least one image to be processed.
An image set is a set composed of one or more images to be processed. An image to be processed may be captured by the camera of the electronic device, obtained from another electronic device, or downloaded over a network, which is not limited herein. For example, a camera may be installed on the electronic device, and when a shooting instruction is detected, the electronic device controls the camera through the shooting instruction to capture an image to be processed. The electronic device may process an image immediately after obtaining it, or it may store images together in a folder and process the stored images together once their number reaches a certain amount. For instance, the electronic device may store captured images in an album, and when the number of images stored in the album exceeds a certain amount, processing of the images in the album is triggered.
Step 204: traverse the images to be processed in the image set, and recognize each image to be processed to obtain an image classification label.
In one embodiment, after the electronic device obtains the image set, it can traverse the images to be processed in the image set and process them one by one. An image classification label can be used to mark the category of the shooting scene of an image. Recognizing an image to be processed to obtain an image classification label may specifically mean recognizing the background area of the image and obtaining the image classification label from that recognition.
It can be understood that the scene of a captured image generally contains multiple objects. For example, when an outdoor scene is shot, the image generally includes pedestrians, blue sky, a beach, buildings, and the like; when an indoor scene is shot, the image generally includes objects such as household appliances and office supplies. An image to be processed generally includes a foreground target and a background area. The foreground target is the more prominent subject in the image, the object the user pays more attention to, while the background area is the region of the image other than the foreground target.
The electronic device can detect the background area in the image to be processed, and then identify which scene category the detected background area belongs to. The electronic device may predefine the scene categories of background areas, and then use a preset classification algorithm to identify which preset scene category a background area belongs to. For example, background areas may be divided into scenes such as beach, snow, night, blue sky, and indoor; after the background area is recognized, the scene category corresponding to the background area can be obtained. From the recognition result of the background area, the image classification label of the image to be processed can be obtained.
Each image to be processed in the image set can be uniquely marked by an image identifier. After the image classification label of an image to be processed is obtained, a correspondence between the image identifier and the image classification label can be established, so that the scene category of each image to be processed can be determined through this correspondence. For example, if the image identifier of an image to be processed is "Pic_01" and the corresponding image classification label is "snow", it can be determined that image "Pic_01" was shot in a snow scene.
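The correspondence between image identifiers and image classification labels described above can be sketched as a plain mapping. This is an illustrative sketch only: the embodiments do not prescribe any particular data structure, and the stub classifier and label names here are assumptions.

```python
def label_images(image_ids, classify):
    """Traverse the images to be processed and record the image
    classification label obtained for each image identifier."""
    labels = {}
    for image_id in image_ids:
        labels[image_id] = classify(image_id)
    return labels

# A stub classifier standing in for the trained recognition model.
scene_of = {"Pic_01": "snow", "Pic_02": "beach"}
labels = label_images(["Pic_01", "Pic_02"], scene_of.__getitem__)
```

With such a correspondence in place, the scene category of any image, such as "Pic_01" above, can be looked up from its identifier.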
Step 206: count the number of images to be processed corresponding to each image classification label, and obtain a target image classification label from the image classification labels according to the image counts.
After each image to be processed in the image set has been recognized, the number of images to be processed corresponding to each image classification label can be counted. An image classification label with a larger image count indicates that the user tends to shoot in the scene corresponding to that label. A target image classification label is obtained from the image classification labels according to the image counts; the target image classification label can reflect the user's shooting habits, so the related attributes of the user can be determined according to it. For example, image classification labels whose image counts exceed a first threshold may be taken as first target image classification labels, and image classification labels whose image counts fall below a second threshold may be taken as second target image classification labels, where the first threshold is greater than the second threshold. The first target image classification labels then reflect the shooting scenes the user prefers, while the second target image classification labels reflect the scenes the user rarely shoots.
Step 208: obtain a target user attribute according to the target image classification label.
In the embodiments provided by the present application, a user attribute may be, but is not limited to, a parameter that indicates the user's habits and inherent attributes, such as age, occupation, gender, or hobbies. The target image classification label can reflect the user's shooting habits, so the user's attributes can be predicted according to it. For example, the image classification label with the largest image count may be obtained as the target image classification label; the scenes the user often shoots can be identified according to this label, so as to predict which age group and gender the user belongs to.
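As an illustrative sketch of this step: the mapping from scene labels to user attributes below is entirely hypothetical, since the embodiments leave the concrete prediction method open.

```python
from collections import Counter

def predict_user_attribute(image_labels, attribute_of_scene):
    """Take the image classification label with the largest image count
    as the target label, then look up a predicted user attribute."""
    counts = Counter(image_labels.values())
    target_label, _ = counts.most_common(1)[0]
    return attribute_of_scene.get(target_label)

attribute = predict_user_attribute(
    {"a": "snow", "b": "snow", "c": "food"},
    {"snow": "winter-sports enthusiast", "food": "food lover"},
)
```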
With the image processing method provided by the above embodiment, the images to be processed can be recognized to obtain image classification labels that reflect the shooting scenes of the images. A target image classification label is obtained according to the number of images to be processed corresponding to each image classification label, and related attributes of the user can be predicted according to the target image classification label. Because the target image classification label allows the user's shooting habits to be inferred accurately, the related attributes of the user can be predicted, improving the accuracy of image processing.
Fig. 3 is a flowchart of an image processing method in another embodiment. As shown in Fig. 3, the image processing method includes steps 302 to 314. Specifically:
Step 302: obtain an image set comprising at least one image to be processed.
In one embodiment, when user attributes are predicted from the images to be processed in the image set, the more images to be processed there are, the higher the accuracy of the predicted user attributes generally is. However, the more images to be processed the image set contains, the more memory is consumed during image processing and the longer the processing takes. Processing may be triggered automatically by the electronic device or manually by the user. An automatic trigger condition may be preset, and step 302 is executed when the condition is met. For example, when the number of newly added images on the electronic device reaches a preset amount, the device starts to collect the stored images into an image set and begins processing; alternatively, when a specified time is reached, the device starts to obtain and process the image set.
Since the images stored on the electronic device may be obtained in different ways, for example shot by the user through the camera, downloaded from the network, or sent by a friend, and since images generated by screenshots and the like may not reflect the scene in which they were generated, such images make it difficult to reflect the user's shooting habits and to predict user attributes accurately. Therefore, when the image set is generated, it is not necessary to obtain all the images stored on the electronic device; only a portion of the images may be processed. Step 302 may specifically include: obtaining at least one image to be processed from a preset file path, and generating the image set from the obtained images. The preset file path can be used to store images suitable for accurately identifying user attributes; for example, the preset file path may store only the images shot by the user through the camera.
Step 304: traverse the images to be processed in the image set, detect the foreground target in each image to be processed, and obtain the target area corresponding to the foreground target.
After the foreground target in the image to be processed is detected, the region occupied by the foreground target in the image can be marked with a rectangular box, or marked directly along the edge of the foreground target. An image to be processed is a two-dimensional pixel matrix composed of several pixels; the electronic device can detect the foreground target in the image, and the detected foreground target is composed of some or all of the pixels in the image. After the foreground target is detected, the number of pixels contained in it can be counted. The target area can be expressed by the number of pixels contained in the foreground target, or by the ratio of that number to the number of pixels contained in the whole image to be processed. Generally, the more pixels the foreground target contains, the larger the corresponding region area.
In the embodiments provided by the present application, multiple foreground targets may be detected when target detection is performed on an image to be processed. When two or more foreground targets are detected, the total area occupied by the multiple foreground targets may be taken as the target area, or the largest area among the foreground targets may be taken as the target area, which is not limited herein. The larger the target area, the larger the area occupied by the foreground targets, and correspondingly the smaller the area occupied by the background area, so the accuracy of identifying the image type through the background area is lower.
Since the image to be processed is a two-dimensional pixel matrix, the electronic device can establish a two-dimensional coordinate system for it, and the specific position of a pixel in the image can be expressed by two-dimensional coordinates. For example, a coordinate system may be established with the pixel at the bottom-left corner of the image as the origin: moving up by one pixel increases the vertical coordinate by one, and moving right by one pixel increases the horizontal coordinate by one. When the electronic device detects the foreground target in the image to be processed, the region occupied by the foreground target can be marked by a rectangular box, and the position of the foreground target can be located through the coordinates of the four vertices of the rectangular box.
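Given the coordinate system and rectangular box above, the target area expressed as a ratio of the image area can be computed directly. A minimal sketch; the corner ordering of the box tuple is an assumption.

```python
def target_area_ratio(box, image_width, image_height):
    """Fraction of the image covered by the rectangular box marking a
    foreground target; box is (x0, y0, x1, y1) with x0 < x1, y0 < y1."""
    x0, y0, x1, y1 = box
    return ((x1 - x0) * (y1 - y0)) / (image_width * image_height)

# A 50x50 box in a 100x100 image covers a quarter of the image.
ratio = target_area_ratio((10, 10, 60, 60), 100, 100)
```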
Step 306: if the target area is less than or equal to an area threshold, recognize the background area of the image to be processed other than the foreground target, and obtain the image classification label.
Specifically, when the target area is less than or equal to the area threshold, the area of the foreground target can be considered small, and the image classification label can be obtained from the background area. For example, when the target area of the foreground target is less than or equal to half the area of the image to be processed, the background area of the image can be recognized. When the target area is less than or equal to the area threshold, the background sharpness of the background area may further be obtained: if the background sharpness is less than or equal to a first sharpness threshold, the image classification label can be obtained by recognizing the foreground target; if the background sharpness is greater than the first sharpness threshold, the image classification label can be obtained by recognizing the background area. In this way, if the background of a captured image is blurred and its recognition accuracy would therefore be low, the image classification label can still be generated from the foreground target.
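The branching between background-based and foreground-based recognition described above can be sketched as follows; the threshold values here are illustrative only.

```python
def choose_recognition_path(area_ratio, background_sharpness,
                            area_threshold=0.5,
                            first_sharpness_threshold=10.0):
    """Decide whether the image classification label should come from
    the foreground target or the background area, mirroring the
    branches above: a large foreground, or a small foreground with a
    blurred background, is classified by the foreground."""
    if area_ratio > area_threshold:
        return "foreground"
    if background_sharpness <= first_sharpness_threshold:
        return "foreground"
    return "background"

# Small foreground and a sharp background: classify by the background.
path = choose_recognition_path(0.3, 25.0)
```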
Step 308: if the target area is greater than the area threshold, recognize the foreground target, and obtain the image classification label.
When the target area is greater than the area threshold, the accuracy of recognizing the foreground target is higher, and since the foreground target occupies most of the area of the image to be processed, classifying the image according to the foreground target is more accurate. Usually, the electronic device predefines the category types of foreground targets, and then uses a preset classification algorithm to identify which preset category type a detected foreground target belongs to. For example, the electronic device may classify foreground targets into types such as person, dog, cat, food, and other, and then identify which of these types a detected foreground target belongs to. In the present application, foreground targets may be detected and recognized by, but not limited to, algorithms such as RCNN (Regions with CNN Features), SSD (Single Shot MultiBox Detector), and YOLO (You Only Look Once).
In one embodiment, an image to be processed may contain one or more foreground targets. When the image contains two or more foreground targets, one foreground target may be selected from them for recognition, or all foreground targets may be recognized. When a single foreground target is recognized, a foreground target to be recognized can be selected from the foreground targets contained in the image, and the image classification label can then be generated from the recognition result of that target; for example, the foreground target with the largest area may be chosen as the target to be recognized. If multiple foreground targets are recognized, each foreground target is recognized to obtain a foreground type.
Specifically, if the foreground classification result indicates that the image contains targets of only one foreground type, the image classification label can be generated directly from that foreground type; if the foreground classification result indicates that the image contains foreground targets of two or more foreground types, the total area of the foreground targets corresponding to each foreground type can be calculated, and the image classification label is generated from the foreground type with the largest total area. For example, if the image contains only foreground targets of the type "person", the image classification label "Pic-person" can be generated directly from the foreground type "person". If the image contains targets A, B, and C with corresponding foreground types "person", "cat", and "person", then the total area S1 occupied in the image by the "person" targets A and C and the total area S2 occupied by the "cat" target B can be calculated separately. If S1 > S2, the image classification label is generated from the foreground type "person"; if S1 < S2, it is generated from the foreground type "cat".
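The total-area comparison in the person/cat example above reduces to a small aggregation; the type names and area values below are illustrative.

```python
from collections import defaultdict

def dominant_foreground_type(targets):
    """targets is a list of (foreground_type, area) pairs, one per
    detected foreground target. Sum the area per type and return the
    type with the largest total area."""
    totals = defaultdict(float)
    for foreground_type, area in targets:
        totals[foreground_type] += area
    return max(totals, key=totals.get)

# Targets A and C are "person", target B is "cat": S1 = 1600 > S2 = 900.
label_type = dominant_foreground_type(
    [("person", 1200.0), ("cat", 900.0), ("person", 400.0)]
)
```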
In one embodiment, after the electronic device detects the foreground targets in the image to be processed, it can calculate the target sharpness corresponding to each foreground target. The target sharpness reflects how clear textures such as the edge details of a foreground target are, and to some extent it reflects the importance of each foreground object; therefore the foreground target to be recognized can be selected according to the target sharpness. For example, when shooting, a user may place the focus on the object of interest and blur the other objects. When foreground targets are recognized, only the foreground targets with higher sharpness may be recognized, while the foreground targets with lower sharpness are not.
A foreground target may contain several pixels, and its sharpness can be calculated from the grayscale differences between those pixels. In general, the higher the sharpness, the larger the grayscale differences between pixels; the lower the sharpness, the smaller the grayscale differences. In one embodiment, the target sharpness may specifically be calculated according to, but not limited to, algorithms such as the Brenner gradient method, the Tenengrad gradient method, the Laplace gradient method, the variance method, and the energy gradient method.
Specifically, the foreground targets whose target sharpness is greater than a second sharpness threshold are recognized, and the image classification label is obtained. The first sharpness threshold and the second sharpness threshold may be the same or different, which is not limited herein. The first sharpness threshold and the second sharpness threshold may be preset fixed values or dynamically changing values, which is also not limited herein. For example, a threshold may be a fixed value prestored in the electronic device, a value entered by the user and adjusted dynamically as needed, or a value calculated from each obtained target sharpness.
Fig. 4 is a schematic diagram showing the background area and foreground targets in an image to be processed in one embodiment. As shown in Fig. 4, recognizing the image to be processed in Fig. 4 can detect that the image contains a background area 402, a foreground target 404, and a foreground target 406. The detected foreground target 404 and foreground target 406 can be marked in the image to be processed by rectangular boxes.
Specifically, the background area can be recognized by a classification model, and the foreground targets can be recognized by a detection model. Before recognizing the background area and foreground targets through the classification model and the detection model, the electronic device can train the classification model and the detection model, each of which outputs a corresponding loss function. A loss function is a function that can evaluate the confidence of the classification result; when the background area and foreground targets are recognized, the confidence corresponding to each preset category can be output through the loss function. A category with a higher confidence indicates a higher probability that the image belongs to that category, so the background type and foreground type corresponding to the image can be determined through the confidences.
For example, suppose the background of an image is predefined as one of the types beach, night, fireworks, and indoor. The electronic device can train the classification model in advance, and the trained classification model can output a loss function. When an image to be processed is input into the trained classification model, the background area can be detected by the classification model and its type identified. Specifically, the confidence corresponding to each preset background type can be calculated through the loss function, and the background classification result of the background area is determined by the confidences. For instance, if the calculated confidences for the four types beach, night, fireworks, and indoor are 0.01, 0.06, 0.89, and 0.04 respectively, the background area of the image to be processed is determined to be the background type with the highest confidence.
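Selecting the background type with the highest confidence, as in the example above, reduces to an argmax over the per-type confidences; a trivial sketch whose type names follow the example.

```python
def classify_background(confidences):
    """Return the preset background type whose confidence is highest."""
    return max(confidences, key=confidences.get)

scene = classify_background(
    {"beach": 0.01, "night": 0.06, "fireworks": 0.89, "indoor": 0.04}
)
```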
Fig. 5 is a schematic diagram of a model for identifying the foreground and background of an image in one embodiment. As shown in Fig. 5, the electronic device can train a classification model. Before training, each image is given a category label, and the classification model is trained on the images together with their category labels. After the classification model is trained, a first loss function can be obtained. During recognition, the classification model can detect the background area in an image, and the obtained first loss function can compute a first confidence level for each preset background type. The background classification result of the background area can then be determined from the obtained first confidence levels. The electronic device can also train a detection model. Before training, the foreground targets contained in each image are marked with rectangular boxes and the category of each foreground target is labeled; the detection model is then trained on these images. After the detection model is trained, a second loss function can be obtained. During recognition, the detection model can detect the foreground targets in an image and output the position of each foreground target. The second loss function can compute a second confidence level for each preset foreground type, and the foreground classification result of each foreground target can be determined from the obtained second confidence levels. It can be understood that the classification model and the detection model can be two independent algorithm models; for example, the classification model can be a MobileNet model and the detection model can be an SSD model, without limitation here. The classification model and the detection model can be run serially or in parallel.
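The serial combination of the two models can be sketched as follows. The two model functions are stand-in stubs returning fixed values, assumed for demonstration only; in the embodiment they would be trained MobileNet-style and SSD-style networks.

```python
# Stand-in for the classification model: per-background-type confidences.
def classification_model(image):
    return {"beach": 0.10, "indoor": 0.90}

# Stand-in for the detection model: (foreground type, confidence, box).
def detection_model(image):
    return [("portrait", 0.95, (40, 30, 180, 220))]

def identify(image):
    """Run both models over one image: background type plus foregrounds."""
    confidences = classification_model(image)
    background = max(confidences, key=confidences.get)
    foregrounds = detection_model(image)
    return background, foregrounds

bg, fg = identify("example.jpg")
print(bg)        # indoor
print(fg[0][0])  # portrait
```

Because the two models are independent, the two calls inside `identify` could equally run in parallel, as the embodiment notes.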
Fig. 6 is a schematic diagram of a model for identifying the foreground and background of an image in another embodiment. As shown in Fig. 6, the recognition model is a neural network model. The input layer of the neural network receives training images with image category labels; a base network (such as a CNN) performs feature extraction and outputs the extracted image features to a feature layer. The feature layer classifies the background training targets to obtain a first loss function, classifies the foreground training targets according to the image features to obtain a second loss function, and detects the positions of the foreground training targets according to the foreground areas to obtain a position loss function. The first loss function, the second loss function, and the position loss function are combined by weighted summation into a target loss function. The neural network can be a convolutional neural network. A convolutional neural network includes a data input layer, convolutional layers, activation layers, pooling layers, and a fully connected layer.
The data input layer preprocesses the raw image data. The preprocessing may include mean subtraction, normalization, dimensionality reduction, and whitening. Mean subtraction centers every dimension of the input data at 0, so that the center of the samples is moved to the origin of the coordinate system. Normalization scales the amplitudes to the same range. Whitening normalizes the amplitude on each feature axis of the data.
The convolutional layers perform local association and window sliding. The weights with which each filter connects to a data window are fixed; each filter attends to one image feature, such as a vertical edge, horizontal edge, color, or texture, and together these filters form the feature-extractor set of the whole image. A filter is a weight matrix, and convolution is performed by sliding this weight matrix over different windows of the data. The activation layer applies a nonlinear mapping to the output of a convolutional layer; the activation function used can be ReLU (Rectified Linear Unit). Pooling layers can be sandwiched between consecutive convolutional layers to compress the amount of data and parameters and to reduce overfitting; a pooling layer can reduce the dimensionality of the data by max pooling or mean pooling. The fully connected layer is located at the tail of the convolutional neural network, with all neurons between the two layers connected by weights. Some convolutional layers of the network are cascaded to the first confidence output node, some to the second confidence output node, and some to the position output node: the background type of the image can be detected from the first confidence output node, the category of the foreground target from the second confidence output node, and the position of the foreground target from the position output node.
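The weighted summation of the three losses into the target loss function can be sketched as follows. The weight values are assumptions for demonstration; the embodiment only states that a weighted summation is used, without specifying the weights.

```python
# Target loss = weighted sum of the first (background classification),
# second (foreground classification), and position losses.
def target_loss(first_loss, second_loss, position_loss,
                w1=1.0, w2=1.0, w3=1.0):
    """Combine the three losses by weighted summation (weights illustrative)."""
    return w1 * first_loss + w2 * second_loss + w3 * position_loss

print(target_loss(0.25, 0.5, 0.25))  # 1.0
```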
Specifically, the above classification model and detection model can be stored in the electronic device in advance, and when a pending image is obtained, it is identified by the classification model and the detection model. It can be understood that the classification model and the detection model generally occupy storage space on the electronic device, and processing a large number of images also places relatively high demands on the storage capacity of the electronic device. When processing the pending images on a terminal, the processing can be performed by the classification model and detection model stored locally on the terminal, or the pending images can be sent to a server and processed by the classification model and detection model stored on the server.
Since the storage capacity of a terminal is generally limited, the server can train the classification model and the detection model and then send the trained models to the terminal, so that the terminal no longer needs to train them. Meanwhile, the classification model and detection model stored on the terminal can be compressed models; a compressed model occupies fewer resources, but its recognition accuracy is correspondingly lower. The terminal can decide, according to the number of pending images to be processed, whether recognition is performed locally on the terminal or on the server. After obtaining the pending images, the terminal counts the number of pending images; if the number exceeds a preset upload threshold, the pending images are uploaded to the server and processed there. After the server finishes processing, it sends the processing result to the terminal.
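The local-versus-server decision described above can be sketched as follows. The threshold value is an assumption for demonstration; the embodiment only requires that uploading happens when the count exceeds a preset upload quantity.

```python
# Preset upload quantity (illustrative value, not from the patent).
UPLOAD_THRESHOLD = 50

def choose_processing_site(pending_images):
    """Process locally for small batches; upload to the server otherwise."""
    if len(pending_images) > UPLOAD_THRESHOLD:
        return "server"
    return "local"

print(choose_processing_site(["img"] * 10))   # local
print(choose_processing_site(["img"] * 120))  # server
```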
Fig. 7 is a schematic diagram of generating image classification labels in one embodiment. As shown in Fig. 7, when the background area of an image is identified, the available image classification labels include landscape, beach, snow scene, blue sky, green field, night scene, darkness, backlight, sunrise/sunset, indoor, fireworks, spotlight, and so on. When the foreground target of an image is identified, the available image classification labels include portrait, baby, cat, dog, food, and so on.
Step 310: count the number of pending images corresponding to each image classification label, and obtain target image classification labels from the image classification labels according to the image quantities.
In one embodiment, the image classification labels whose image quantity exceeds a quantity threshold can be taken as target image classification labels; or the image classification labels can be sorted according to their image quantities, and the target image classification labels obtained from the sorted image classification labels. Specifically, a specified number of image classification labels can be taken from the sorted image classification labels as target image classification labels. For example, suppose the image collection contains 100 pending images, of which 52 are labeled "beach", 34 are labeled "indoor", and 14 are labeled "night scene". The image classification label with the largest image quantity, "beach", can then be taken as the target image classification label.
It can be understood that identifying one pending image can yield one or more image classification labels, without limitation here. For example, if a pending image contains multiple foreground targets, multiple image classification labels can be obtained from those foreground targets. When the background of a pending image is detected, multiple background types can also be output according to the confidence levels, so that multiple image classification labels are obtained from those background types. Therefore, when counting image quantities, a pending image with multiple image classification labels can be counted multiple times.
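Step 310 can be sketched as follows, using the 52/34/14 beach/indoor/night-scene example from the text. `Counter` is from the Python standard library; the selection rules (the single most frequent label, or every label above a quantity threshold) follow the two alternatives the embodiment describes, and the threshold value is illustrative.

```python
from collections import Counter

# One label per pending image, matching the example counts in the text.
labels = ["beach"] * 52 + ["indoor"] * 34 + ["night scene"] * 14

# Rule 1: sort by quantity and take the top label(s).
counts = Counter(labels)
print(counts.most_common(1))  # [('beach', 52)]

# Rule 2: every label whose quantity exceeds a threshold (illustrative value).
threshold = 30
targets = [label for label, n in counts.items() if n > threshold]
print(sorted(targets))        # ['beach', 'indoor']
```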
Step 312: obtain the reference user attributes corresponding to each target image classification label, and calculate the attribute weight of each reference user attribute according to the image quantity corresponding to each target image classification label.
In one embodiment, the electronic device can preset multiple reference user attributes and establish a correspondence between image classification labels and reference user attributes. After the target image classification labels are obtained, the corresponding reference user attributes are obtained according to this pre-established correspondence, and the attribute weight of each reference user attribute is calculated according to the counted image quantity corresponding to each target image classification label. The user attribute can then be determined from the attribute weights.
Step 314: take the reference user attribute with the largest attribute weight as the target user attribute.
After the attribute weight of each reference user attribute is calculated according to the image quantities, the target user attribute can be determined from the attribute weights. The size of an attribute weight indicates the probability that the user corresponds to that reference user attribute: the larger the attribute weight, the more likely the user corresponds to that reference user attribute. Therefore, the reference user attribute with the largest attribute weight can be taken as the target user attribute.
In other embodiments provided by the present application, after the target user attribute of a user is recognized, a processing algorithm can be obtained according to the target user attribute, and the pending images can be processed by that algorithm. For example, if the user is recognized as female, portrait beautification can be applied to the images; if the user is recognized as elderly, the saturation of the images can be increased, and so on.
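Selecting a processing algorithm from the target user attribute can be sketched as follows. The attribute names and the two processing stubs are assumptions for demonstration; real implementations would apply actual image filters.

```python
# Stand-in processing algorithms (illustrative, not real filters).
def beautify_portrait(image):
    return image + " (portrait beautified)"

def boost_saturation(image):
    return image + " (saturation increased)"

# Correspondence between target user attributes and processing algorithms.
PROCESSING_BY_ATTRIBUTE = {
    "female": beautify_portrait,
    "elderly": boost_saturation,
}

def process(image, target_user_attribute):
    """Apply the processing algorithm chosen by the target user attribute."""
    algorithm = PROCESSING_BY_ATTRIBUTE.get(target_user_attribute,
                                            lambda img: img)
    return algorithm(image)

print(process("photo.jpg", "elderly"))  # photo.jpg (saturation increased)
```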
In one embodiment, the method for computing the attribute weights may specifically include:
Step 802: accumulate the image quantities corresponding to the target image classification labels to obtain a total image quantity.
Step 804: count the image quantity corresponding to each reference user attribute as its reference quantity.
Step 806: calculate the ratio of the reference quantity to the total image quantity as the attribute weight of the reference user attribute.
The image quantity corresponding to each target image classification label is counted, and the image quantities of all target image classification labels are summed to obtain the total image quantity. Then the reference quantity corresponding to each reference user attribute is counted; the larger the share of a reference quantity, the more likely the user is considered to correspond to that reference user attribute. Therefore, the attribute weight can be obtained as the ratio of the reference quantity to the total image quantity. Assuming the counted total image quantity is T and a reference quantity is Tn, the attribute weight is Mn = Tn / T.
For example, suppose label 1 corresponds to reference user attribute 1, label 2 to reference user attribute 2, and label 3 to reference user attribute 3, and the counted image quantities are 36 for label 1, 51 for label 2, and 13 for label 3. The attribute weight of each reference user attribute can then be calculated from the ratios of the image quantities: the attribute weights of reference user attributes 1, 2, and 3 are 0.36, 0.51, and 0.13 respectively.
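Steps 802 to 806 can be sketched as follows, reproducing the 36/51/13 example from the text.

```python
# Step 804: reference quantity per reference user attribute (from the text).
reference_quantities = {"attribute 1": 36, "attribute 2": 51, "attribute 3": 13}

# Step 802: total image quantity T.
total = sum(reference_quantities.values())  # 100

# Step 806: attribute weight Mn = Tn / T.
weights = {attr: n / total
           for attr, n in reference_quantities.items()}

print(weights["attribute 1"])  # 0.36
print(weights["attribute 2"])  # 0.51
print(weights["attribute 3"])  # 0.13
```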
In other embodiments provided by the present application, the method for computing the attribute weights may specifically include:
Step 902: traverse the target image classification labels, and obtain the reference user attributes corresponding to each target image classification label and the reference ratio corresponding to each reference user attribute, where the reference ratio indicates the probability that the user corresponds to the reference user attribute when the image classification label is recognized.
Specifically, one image classification label can be assigned multiple reference user attributes, and each reference user attribute is given a reference ratio. The reference ratio indicates the probability that the user corresponds to that reference user attribute when the image classification label is recognized. For example, if label A corresponds to reference user attributes 1 and 2 with reference ratios 0.8 and 0.2 respectively, then when the image classification label of an image is recognized as label A, there is a probability of 0.8 that the user corresponds to reference user attribute 1 and a probability of 0.2 that the user corresponds to reference user attribute 2.
Step 904: calculate the attribute weight of each reference user attribute according to the image quantities and the reference ratios.
After the reference ratios are obtained, the attribute weight of each reference user attribute can be calculated from the reference ratios and the image quantities. For example, suppose label A corresponds to reference user attributes 1 and 2 with reference ratios S1a = 0.80 and S2a = 0.20, and label B corresponds to reference user attributes 2 and 3 with reference ratios S2b = 0.75 and S3b = 0.25. The counted image quantity of label A is a1 = 45 and that of label B is a2 = 55. Then, according to the image quantities and reference ratios, the attribute weight of attribute 1 is M1 = S1a·a1/(a1+a2) = 0.36, the attribute weight of attribute 2 is M2 = S2a·a1/(a1+a2) + S2b·a2/(a1+a2) ≈ 0.50, and the attribute weight of attribute 3 is M3 = S3b·a2/(a1+a2) ≈ 0.14.
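Steps 902 and 904 can be sketched as follows, reproducing the label A / label B example from the text; the final values are rounded to two decimals, as in the worked example.

```python
# Step 902: reference ratios per label (from the text).
reference_ratios = {
    "label A": {"attribute 1": 0.80, "attribute 2": 0.20},
    "label B": {"attribute 2": 0.75, "attribute 3": 0.25},
}
image_quantities = {"label A": 45, "label B": 55}

# Step 904: weight each ratio by the label's share of the image quantity.
total = sum(image_quantities.values())
weights = {}
for label, ratios in reference_ratios.items():
    share = image_quantities[label] / total
    for attribute, ratio in ratios.items():
        weights[attribute] = weights.get(attribute, 0.0) + ratio * share

print({a: round(w, 2) for a, w in sorted(weights.items())})
# {'attribute 1': 0.36, 'attribute 2': 0.5, 'attribute 3': 0.14}
```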
The image processing method provided by the above embodiments can identify pending images to obtain image classification labels that reflect the scenes in which the images were shot. Target image classification labels are obtained according to the number of pending images corresponding to each image classification label, and the related attributes of the user can be predicted from the target image classification labels. The user's shooting habits can thus be inferred accurately from the target image classification labels, so that the user's related attributes are predicted and the accuracy of image processing is improved.
It should be understood that although the steps in the flowcharts of Fig. 2, Fig. 3, Fig. 8, and Fig. 9 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in Fig. 2, Fig. 3, Fig. 8, and Fig. 9 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same moment but can be executed at different times, and their execution order does not necessarily proceed sequentially, as they can be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Figure 10 is a structural schematic diagram of an image processing apparatus in one embodiment. As shown in Figure 10, the image processing apparatus 1000 includes an image acquisition module 1002, a scene recognition module 1004, a target label determining module 1006, and an attribute acquisition module 1008, where:
The image acquisition module 1002 is configured to obtain an image collection containing at least one pending image.
The scene recognition module 1004 is configured to traverse the pending images in the image collection and identify the pending images to obtain image classification labels.
The target label determining module 1006 is configured to count the number of pending images corresponding to each image classification label, and obtain target image classification labels from the image classification labels according to the image quantities.
The attribute acquisition module 1008 is configured to obtain the target user attribute according to the target image classification labels.
The image processing apparatus provided by the above embodiment can identify pending images to obtain image classification labels that reflect the scenes in which the images were shot. Target image classification labels are obtained according to the number of pending images corresponding to each image classification label, and the related attributes of the user can be predicted from the target image classification labels. The user's shooting habits can thus be inferred accurately, so that the user's related attributes are predicted and the accuracy of image processing is improved.
In one embodiment, the scene recognition module 1004 is further configured to detect the foreground target in the pending image and obtain the target area corresponding to the foreground target; if the target area is less than or equal to an area threshold, identify the background area of the pending image other than the foreground target to obtain an image classification label; and if the target area is greater than the area threshold, identify the foreground target to obtain an image classification label.
In one embodiment, the target label determining module 1006 is further configured to obtain the image classification labels whose image quantity exceeds a quantity threshold as target image classification labels; or sort the image classification labels according to their image quantities, and obtain the target image classification labels from the sorted image classification labels.
In one embodiment, the target label determining module 1006 is further configured to obtain a specified number of image classification labels from the sorted image classification labels as target image classification labels.
In one embodiment, the attribute acquisition module 1008 is configured to obtain the reference user attributes corresponding to each target image classification label, calculate the attribute weight of each reference user attribute according to the image quantity corresponding to each target image classification label, and take the reference user attribute with the largest attribute weight as the target user attribute.
In one embodiment, the attribute acquisition module 1008 is configured to accumulate the image quantities corresponding to the target image classification labels to obtain a total image quantity; count the image quantity corresponding to each reference user attribute as its reference quantity; and calculate the ratio of the reference quantity to the total image quantity as the attribute weight of the reference user attribute.
In one embodiment, the attribute acquisition module 1008 is configured to traverse the target image classification labels, and obtain the reference user attributes corresponding to each target image classification label and the reference ratio corresponding to each reference user attribute, where the reference ratio indicates the probability that the user corresponds to the reference user attribute when the image classification label is recognized; and calculate the attribute weight of each reference user attribute according to the image quantities and the reference ratios.
The division into the above modules is only for illustration; in other embodiments, the image processing apparatus can be divided into different modules as required to complete all or part of the functions of the image processing apparatus.
An embodiment of the present application also provides a computer readable storage medium: one or more non-volatile computer readable storage media containing computer executable instructions which, when executed by one or more processors, cause the processors to execute the image processing method provided by the above embodiments.
Also provided is a computer program product containing instructions which, when run on a computer, cause the computer to execute the image processing method provided by the above embodiments.
An embodiment of the present application also provides an electronic device. The electronic device includes an image processing circuit, which can be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Figure 11 is a schematic diagram of the image processing circuit in one embodiment. As shown in Figure 11, for ease of illustration, only the aspects of the image processing technique relevant to the embodiments of the present application are shown.
As shown in Figure 11, the image processing circuit includes an ISP processor 1140 and a control logic 1150. The image data captured by an imaging device 1110 is first processed by the ISP processor 1140, which analyzes the image data to capture image statistics usable for determining one or more control parameters of the imaging device 1110. The imaging device 1110 may include a camera with one or more lenses 1112 and an image sensor 1114. The image sensor 1114 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 1140. A sensor 1120 (such as a gyroscope) can supply parameters for image processing (such as stabilization parameters) to the ISP processor 1140 based on the interface type of the sensor 1120. The sensor 1120 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, the image sensor 1114 can also send raw image data to the sensor 1120; based on its interface type, the sensor 1120 can supply the raw image data to the ISP processor 1140 or store the raw image data into an image memory 1130.
The ISP processor 1140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel can have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 1140 can perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations can be performed with the same or different bit depth precision.
The ISP processor 1140 can also receive image data from the image memory 1130. For example, the sensor 1120 interface sends raw image data to the image memory 1130, and the raw image data in the image memory 1130 is then provided to the ISP processor 1140 for processing. The image memory 1130 can be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include DMA (Direct Memory Access) features.
When receiving raw image data from the image sensor 1114 interface, from the sensor 1120 interface, or from the image memory 1130, the ISP processor 1140 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 1130 for further processing before being displayed. The ISP processor 1140 receives the processed data from the image memory 1130 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 1140 may be output to a display 1170 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1140 can also be sent to the image memory 1130, and the display 1170 can read image data from the image memory 1130. In one embodiment, the image memory 1130 can be configured to implement one or more frame buffers. The output of the ISP processor 1140 can also be sent to an encoder/decoder 1160 for encoding/decoding the image data; the encoded image data can be saved and decompressed before being shown on the display 1170. The encoder/decoder 1160 can be implemented by a CPU, GPU, or coprocessor.
The statistical data determined by the ISP processor 1140 can be sent to the control logic 1150. For example, the statistical data may include image sensor 1114 statistics such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black level compensation, and lens 1112 shading correction. The control logic 1150 may include a processor and/or microcontroller that executes one or more routines (such as firmware), which can determine the control parameters of the imaging device 1110 and of the ISP processor 1140 according to the received statistical data. For example, the control parameters of the imaging device 1110 may include sensor 1120 control parameters (such as gain, integration time for exposure control, and stabilization parameters), camera flash control parameters, lens 1112 control parameters (such as focal length for focusing or zooming), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 1112 shading correction parameters.
The image processing method provided by the above embodiments can be implemented using the image processing technique of Figure 11.
Any reference to memory, storage, a database, or other media used in this application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which is used as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the claims. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the application, and these all fall within the protection scope of the application. Therefore, the protection scope of this patent application shall be determined by the appended claims.
Claims (10)
1. a kind of image processing method, which is characterized in that the method includes:
Obtain the image collection for including at least one pending image;
The pending image in described image set is traversed, the pending image is identified to obtain image classification label;
The amount of images for counting the corresponding pending image of each image classification label, according to described image quantity from described image
Target image tag along sort is obtained in tag along sort;
Target user's attribute is obtained according to the target image tag along sort.
2. according to the method described in claim 1, it is characterized in that, described be identified to obtain image to the pending image
Tag along sort, including:
The foreground target in the pending image is detected, and obtains the corresponding target area of the foreground target;
If the target area is less than or equal to area threshold, to the background in pending image in addition to the foreground target
Region is identified, and obtains image classification label;
If the target area is more than area threshold, the foreground target is identified, image classification label is obtained.
3. according to the method described in claim 1, it is characterized in that, it is described according to described image quantity from described image contingency table
Target image tag along sort is obtained in label, including:
The image classification label that described image quantity is more than amount threshold is obtained, as target image tag along sort;Or
Described image tag along sort is ranked up according to described image quantity, and is obtained from the image classification label after sequence
Target image tag along sort.
4. according to the method described in claim 3, it is characterized in that, obtaining target in the image classification label from after sequence
Image classification label, including:
The image classification label for specifying digit is obtained from the image classification label after sequence, as target image tag along sort.
5. method according to claim 1 to 4, which is characterized in that described to be classified according to the target image
Label obtains target user's attribute, including:
It is corresponding with reference to user property to obtain each target image tag along sort, and is corresponded to according to the target image tag along sort
Amount of images calculate each attribute weight with reference to user property;
Using the maximum reference user property of the attribute weight as target user's attribute.
6. according to the method described in claim 5, it is characterized in that, described according to the corresponding picture number of target image tag along sort
Amount calculates each attribute weight with reference to user property, including:
The corresponding amount of images of each target image tag along sort is subjected to accumulation calculating and obtains image total amount;
Count each with reference to the corresponding amount of images of user property, as with reference to quantity;
It calculates the ratio with reference to quantity and image total amount and is worth to the corresponding attribute weight of the reference user property.
7. The method according to claim 5, characterized in that obtaining the reference user attribute corresponding to each target image classification label and calculating the attribute weight of each reference user attribute according to the image quantity corresponding to the target image classification label comprises:
traversing the target image classification labels to obtain each reference user attribute corresponding to the target image classification label and the reference ratio corresponding to each reference user attribute, where the reference ratio indicates the probability of the corresponding reference user attribute when the image classification label is recognized;
calculating the attribute weight of each reference user attribute according to the image quantity and the reference ratio.
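Claim 7 replaces the hard label-to-attribute mapping with per-label probabilities. A minimal sketch, assuming the weight is the quantity-weighted sum of reference ratios (the claim does not fix the exact combination rule):

```python
def attribute_weights_prob(label_counts, label_to_ratios):
    """Claim-7-style weights: each classification label carries reference
    ratios (probabilities) over user attributes; an attribute's weight is
    the sum over labels of image_quantity * reference_ratio.

    label_counts: {classification_label: image_quantity}.
    label_to_ratios: {classification_label: {reference_attribute: ratio}}.
    Both mappings and the summation rule are illustrative assumptions.
    """
    weights = {}
    for label, qty in label_counts.items():
        for attr, ratio in label_to_ratios[label].items():
            weights[attr] = weights.get(attr, 0.0) + qty * ratio
    return weights
```

For example, 10 "beach" images with ratios {outdoor: 0.8, indoor: 0.2} and 5 "food" images with {indoor: 1.0} give outdoor 8.0 and indoor 7.0.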
8. An image processing apparatus, characterized in that the apparatus comprises:
an image collection module, configured to obtain an image set comprising at least one image to be processed;
a scene recognition module, configured to traverse the images to be processed in the image set and recognize each image to be processed to obtain an image classification label;
a target label determining module, configured to count the image quantity of the images to be processed corresponding to each image classification label, and obtain the target image classification label from the image classification labels according to the image quantity;
an attribute acquisition module, configured to obtain the target user attribute according to the target image classification label.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
10. An electronic device, comprising a memory and a processor, the memory storing computer-readable instructions, characterized in that the instructions, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 7.
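The overall pipeline of claims 1–5 (classify each image in the set, count labels, keep the most frequent labels as target labels, then take the highest-weight reference attribute) can be sketched end to end. The `classify` callable and the `label_to_attr` mapping stand in for the recognition model and label-to-attribute table, which the claims leave abstract, and `top_n` is an assumed value for the "specified number" of claim 4:

```python
from collections import Counter

def target_user_attribute(images, classify, label_to_attr, top_n=3):
    """End-to-end sketch of the claimed method.

    images: iterable of images to be processed.
    classify: image -> classification label (hypothetical recognition model).
    label_to_attr: {classification_label: reference_user_attribute}.
    """
    counts = Counter(classify(img) for img in images)   # label -> image quantity
    targets = dict(counts.most_common(top_n))           # sorted, top-N target labels
    total = sum(targets.values())                       # total image quantity
    weights = Counter()
    for label, qty in targets.items():                  # ratio-based weights (claim 6)
        weights[label_to_attr[label]] += qty / total
    return weights.most_common(1)[0][0]                 # max-weight attribute (claim 5)
```

This is only a sketch under the stated assumptions; the patent does not prescribe the data structures or the recognition model.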
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810587034.3A CN108764371A (en) | 2018-06-08 | 2018-06-08 | Image processing method, device, computer readable storage medium and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810587034.3A CN108764371A (en) | 2018-06-08 | 2018-06-08 | Image processing method, device, computer readable storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108764371A true CN108764371A (en) | 2018-11-06 |
Family
ID=64000620
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810587034.3A Pending CN108764371A (en) | 2018-06-08 | 2018-06-08 | Image processing method, device, computer readable storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108764371A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103425685A (en) * | 2012-05-18 | 2013-12-04 | 京华时报社 | Method and device for having access to paper media |
CN103617432A (en) * | 2013-11-12 | 2014-03-05 | 华为技术有限公司 | Method and device for recognizing scenes |
CN104750848A (en) * | 2015-04-10 | 2015-07-01 | 腾讯科技(北京)有限公司 | Image file treating method, server and image display device |
CN105005593A (en) * | 2015-06-30 | 2015-10-28 | 北京奇艺世纪科技有限公司 | Scenario identification method and apparatus for multi-user shared device |
CN105809146A (en) * | 2016-03-28 | 2016-07-27 | 北京奇艺世纪科技有限公司 | Image scene recognition method and device |
CN107220852A (en) * | 2017-05-26 | 2017-09-29 | 北京小度信息科技有限公司 | Method, device and server for determining target recommended user |
CN107247786A (en) * | 2017-06-15 | 2017-10-13 | 北京小度信息科技有限公司 | Method, device and server for determining similar users |
CN107895041A (en) * | 2017-11-30 | 2018-04-10 | 北京小米移动软件有限公司 | Screening-mode method to set up, device and storage medium |
CN108021672A (en) * | 2017-12-06 | 2018-05-11 | 北京奇虎科技有限公司 | Social recommendation method, apparatus and computing device based on photograph album |
- 2018-06-08 CN CN201810587034.3A patent/CN108764371A/en active Pending
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635805A (en) * | 2018-12-11 | 2019-04-16 | 上海智臻智能网络科技股份有限公司 | Image text positioning method and device, and image text recognition method and device |
CN109635805B (en) * | 2018-12-11 | 2022-01-11 | 上海智臻智能网络科技股份有限公司 | Image text positioning method and device and image text identification method and device |
CN111177434A (en) * | 2019-12-31 | 2020-05-19 | 北京容联易通信息技术有限公司 | Data backflow method for improving precision of cv algorithm |
CN111177434B (en) * | 2019-12-31 | 2023-09-05 | 北京容联易通信息技术有限公司 | Data reflow method for improving accuracy of cv algorithm |
CN111291688A (en) * | 2020-02-12 | 2020-06-16 | 咪咕文化科技有限公司 | Video tag obtaining method and device |
CN111291688B (en) * | 2020-02-12 | 2023-07-14 | 咪咕文化科技有限公司 | Video tag acquisition method and device |
CN111709283A (en) * | 2020-05-07 | 2020-09-25 | 顺丰科技有限公司 | Method and device for detecting state of logistics piece |
CN112488012A (en) * | 2020-12-03 | 2021-03-12 | 浙江大华技术股份有限公司 | Pedestrian attribute identification method, electronic device and storage medium |
WO2022116744A1 (en) * | 2020-12-03 | 2022-06-09 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for object recognition |
WO2024002394A3 (en) * | 2022-07-01 | 2024-02-22 | 顺丰科技有限公司 | Method and apparatus for measuring number of target objects, and electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108960290A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108764370B (en) | Image processing method, image processing device, computer-readable storage medium and computer equipment | |
CN108764208B (en) | Image processing method and device, storage medium and electronic equipment | |
US10896323B2 (en) | Method and device for image processing, computer readable storage medium, and electronic device | |
CN108764371A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108764372B (en) | Data set construction method and device, mobile terminal, and readable storage medium | |
CN108777815B (en) | Video processing method and device, electronic equipment and computer readable storage medium | |
CN109034078B (en) | Training method of age identification model, age identification method and related equipment | |
CN108984657B (en) | Image recommendation method and device, terminal and readable storage medium | |
WO2019233393A1 (en) | Image processing method and apparatus, storage medium, and electronic device | |
CN108875821A (en) | Training method and device for classification model, mobile terminal, and readable storage medium | |
CN109063737A (en) | Image processing method, device, storage medium and mobile terminal | |
CN110580487A (en) | Neural network training method, neural network construction method, image processing method and device | |
CN108765033B (en) | Advertisement information pushing method and device, storage medium and electronic equipment | |
CN108961302B (en) | Image processing method, image processing device, mobile terminal and computer readable storage medium | |
CN108805103A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108805198A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108897786A (en) | Application recommendation method and device, storage medium, and mobile terminal | |
CN110276767A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108764321B (en) | Image recognition method and device, electronic equipment, and storage medium | |
CN108810418A (en) | Image processing method, device, mobile terminal and computer readable storage medium | |
CN108804658B (en) | Image processing method and device, storage medium and electronic equipment | |
CN108717530A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108875820A (en) | Information processing method and device, electronic equipment, computer readable storage medium | |
CN108959462A (en) | Image processing method and device, electronic equipment, computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181106 |