CN108875820A - Information processing method and device, electronic equipment, computer readable storage medium - Google Patents
- Publication number
- CN108875820A (application CN201810585579.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- classification label
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/55—Push-based network services
Abstract
The present application relates to an information processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: performing scene recognition on an image to obtain a classification label of the image; counting the classification labels of the images; determining, according to the statistical count of the classification labels, a user tag corresponding to a user identifier; and pushing information associated with the user tag to the user identifier. Because the user's images can be recognized to determine a user tag, and information associated with that user tag is pushed to the user, the accuracy of information recommendation can be improved.
Description
Technical field
The present application relates to the field of computer technology, and in particular to an information processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background art
With the development of computer technology, the Internet has become an important source from which people obtain information. Because the scale of the Internet makes information resources increasingly abundant, people must search manually to find the information they need among massive amounts of data. Information recommendation technology has therefore emerged: based on the information a user uploads or the user's browsing history, related information can be recommended to the user.
However, traditional methods suffer from inaccurate information recommendation.
Summary of the invention
Embodiments of the present application provide an information processing method and apparatus, an electronic device, and a computer-readable storage medium, which can improve the accuracy of information recommendation.
An information processing method includes:
performing scene recognition on an image to obtain a classification label of the image;
counting the classification labels of each image, and determining, according to the statistical count of the classification labels, a user tag corresponding to a user identifier; and
pushing information associated with the user tag to the user identifier.
An information processing apparatus includes:
a scene recognition module, configured to perform scene recognition on an image to obtain a classification label of the image;
a label determining module, configured to count the classification labels of each image and determine, according to the statistical count of the classification labels, a user tag corresponding to a user identifier; and
a pushing module, configured to push information associated with the user tag to the user identifier.
An electronic device includes a memory and a processor. The memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps:
performing scene recognition on an image to obtain a classification label of the image;
counting the classification labels of each image, and determining, according to the statistical count of the classification labels, a user tag corresponding to a user identifier; and
pushing information associated with the user tag to the user identifier.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the following steps:
performing scene recognition on an image to obtain a classification label of the image;
counting the classification labels of each image, and determining, according to the statistical count of the classification labels, a user tag corresponding to a user identifier; and
pushing information associated with the user tag to the user identifier.
The information processing method and apparatus, electronic device, and computer-readable storage medium described above perform scene recognition on images to obtain their classification labels, count the classification labels of each image, determine a user tag corresponding to a user identifier according to the statistical count of the classification labels, and push information associated with the user tag to the user identifier. Because the user's images can be recognized to determine a user tag, and information associated with that user tag is pushed to the user, the accuracy of information recommendation can be improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment;
Fig. 2 is a flowchart of an information processing method in one embodiment;
Fig. 3 is a flowchart of image scene recognition in one embodiment;
Fig. 4 is an architecture diagram of a neural network in one embodiment;
Fig. 5 is a flowchart of pushing information to a user in one embodiment;
Fig. 6 is a structural block diagram of an information processing apparatus in one embodiment;
Fig. 7 is a schematic diagram of an information processing circuit in one embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are used only to explain the present application and are not intended to limit it.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in Fig. 1, the electronic device includes a processor, a memory, and a network interface connected through a system bus. The processor provides computing and control capabilities that support the operation of the entire electronic device. The memory stores data, programs, and the like; at least one computer program is stored on the memory, and the computer program can be executed by the processor to implement the wireless network communication method applicable to the electronic device provided in the embodiments of the present application. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the information processing method provided in the following embodiments. The internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card, or the like, and is used for communicating with external electronic devices. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Fig. 2 is a flowchart of an information processing method in one embodiment. The information processing method in this embodiment is described as running on the electronic device in Fig. 1. As shown in Fig. 2, the information processing method includes steps 202 to 206.
Step 202: perform scene recognition on an image to obtain a classification label of the image.
An image refers to an image collected by the electronic device through a camera. In one embodiment, the image may also be an image stored locally on the electronic device, or an image downloaded by the electronic device from a network. Specifically, to perform scene recognition on an image, a scene recognition model may be trained according to an algorithm such as VGG (Visual Geometry Group), CNN (Convolutional Neural Network), decision tree, or random forest, and scene recognition is then performed on the image according to the scene recognition model. A scene recognition model generally includes an input layer, hidden layers, and an output layer: the input layer receives the input image; the hidden layers process the received image; and the output layer outputs the final result of the image processing, namely the scene recognition result of the image.
The scene of an image may be landscape, beach, blue sky, green grass, snow scene, fireworks, spotlight, text, portrait, baby, cat, dog, gourmet food, and so on. The classification label of an image refers to its scene classification label. Specifically, the scene recognition result of the image may be used as the classification label of the image. For example, when the scene recognition result of an image is blue sky, the classification label of the image is blue sky. The electronic device may perform scene recognition on its images according to the scene recognition model, and determine the classification labels of the images according to the scene recognition results.
Step 204: count the classification labels of each image, and determine, according to the statistical count of the classification labels, a user tag corresponding to a user identifier.
A user identifier may be a combination of one or more of numbers, letters, and characters. Specifically, the user identifier is the identity of the holder of the electronic device. A user tag refers to a tag applied to the user identifier according to the classification labels of the images. The statistical count of a classification label refers to the number of images, among all images, whose classification labels contain that label. The electronic device determines the user tag corresponding to the user identifier according to the statistical counts of the classification labels. Specifically, the electronic device may use the classification labels with larger statistical counts as the user tags corresponding to the user identifier. The electronic device may also pre-store associated labels corresponding to different user tags, and use the user tag associated with the classification label with a larger statistical count as the user tag corresponding to the user identifier. For example, the image classification labels under an outdoor user tag may be beach, green grass, landscape, and so on; then, when the classification labels with larger statistical counts are beach and landscape, the user tag corresponding to the user identifier may be the outdoor user tag, or may be a beach user tag and a landscape user tag.
The electronic device may count the classification labels of each image to obtain the statistical counts of the different classification labels, and determine the user tag corresponding to the user identifier according to those counts. Specifically, the electronic device may determine the user tag corresponding to the user identifier according to the classification labels with larger statistical counts. The number of such classification labels may be 1, 2, 3, 4, and so on, without being limited thereto.
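Step 204 can be sketched as follows. The label names, the pre-stored label-to-tag mapping, and the choice of taking the top two labels are illustrative assumptions; the patent does not prescribe a concrete data structure:

```python
from collections import Counter

# Classification labels produced by scene recognition for each of the user's images
image_labels = ["beach", "landscape", "beach", "blue sky", "beach", "landscape"]

# Hypothetical pre-stored association: classification label -> user tag
label_to_user_tag = {"beach": "outdoor", "landscape": "outdoor",
                     "blue sky": "outdoor", "gourmet food": "food"}

def determine_user_tags(labels, top_n=2):
    """Pick the top_n classification labels by statistical count and map them
    to user tags, deduplicating while preserving order."""
    counts = Counter(labels)  # statistical count per classification label
    top = [label for label, _ in counts.most_common(top_n)]
    tags = []
    for label in top:
        tag = label_to_user_tag.get(label, label)
        if tag not in tags:
            tags.append(tag)
    return tags

print(determine_user_tags(image_labels))  # beach (3) and landscape (2) both map to "outdoor"
```

Here beach and landscape are the labels with the largest counts, and both are associated with the same outdoor user tag, matching the example in the text above.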
Step 206: push information associated with the user tag to the user identifier.
Specifically, the information may be text, images, audio, video, network links, and so on. Information associated with a user tag may be information whose content contains the user tag, or information whose own label is the user tag. For example, when the user tag is gourmet food, the information associated with the user tag may be information labeled as gourmet food, such as a cooking video, or a network link for purchasing food. The electronic device may push information associated with the user tag to the user identifier when the electronic device displays images, when its screen lights up, or while a web page is being browsed on the electronic device, without being limited thereto.
In the embodiments provided by the present application, scene recognition is performed on images according to a scene recognition model to obtain the classification labels of the images; the classification labels of each image are counted; a user tag corresponding to a user identifier is determined according to the statistical counts of the classification labels; and information associated with the user tag is pushed to the user identifier, which can improve the accuracy of information recommendation.
In one embodiment of the information processing method described above, the classification label of an image includes a scene classification label and a target classification label.
The scene classification label is obtained from the scene recognition result of the background region of the image. The target classification label is obtained from the target detection result of the foreground region of the image. Specifically, the scene of the background region of an image may be landscape, beach, blue sky, green grass, snow scene, night scene, darkness, backlight, sunset, fireworks, spotlight, indoor, and so on; the electronic device may determine the scene classification label of the image according to the scene recognition result of the background region. The scene of the foreground region of an image may be a person, baby, cat, dog, gourmet food, text, macro, and so on; the electronic device may determine the target classification label of the image according to the target detection result of the foreground region.
The electronic device can recognize both the background region and the foreground region of an image, and use the resulting scene classification label and target detection label together as the classification labels of the image, which can improve the accuracy of image detection. Pushing information to the user identifier according to these classification labels can thus improve the accuracy of information recommendation.
In one embodiment of the information processing method described above, performing scene recognition on an image to obtain the classification label of the image, as shown in Fig. 3, includes:
Step 302: perform scene recognition on the image to obtain the scene classification label of the image.
The electronic device may perform scene recognition on the image to obtain its scene classification label. Specifically, the electronic device may perform scene recognition using image classification technology. The electronic device may pre-store image feature information corresponding to multiple scene classification labels, match the image feature information of the image to be recognized against the pre-stored image feature information, and use the scene classification label corresponding to the successfully matched image feature information as the scene classification label of the image. When multiple scene classification labels correspond to successfully matched image feature information, the confidence of each scene classification label may be obtained separately, and one scene classification label may be selected as the scene classification label of the image according to those confidences. The scene classification labels pre-stored in the electronic device may include: landscape, beach, blue sky, green grass, snow scene, night scene, darkness, backlight, sunset, fireworks, spotlight, indoor, macro, text, portrait, baby, cat, dog, gourmet food, and so on.
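The confidence-based selection among multiple matched labels can be sketched as follows. The confidence values and the 0.5 threshold are illustrative assumptions, not taken from the patent:

```python
def select_scene_label(candidates, threshold=0.5):
    """Among matched scene classification labels, pick the one with the
    highest confidence, provided it reaches the threshold."""
    if not candidates:
        return None
    label, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return label if confidence >= threshold else None

# Hypothetical confidences for labels whose pre-stored features matched
matched = {"beach": 0.82, "blue sky": 0.61, "landscape": 0.34}
print(select_scene_label(matched))  # "beach"
```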
Step 304: perform target detection on the image to obtain the target classification label of the image.
To perform target detection on the image, the electronic device may match the image feature information of the image against the feature information corresponding to pre-stored target classification labels, and use the target classification label corresponding to the successfully matched feature information as the target classification label of the image. The target classification labels pre-stored in the electronic device may include: portrait, baby, cat, dog, gourmet food, text, blue sky, green grass, beach, fireworks, and so on.
Step 306: use the scene classification label and the target classification label as the classification labels of the image.
The electronic device may use both the scene classification label and the target classification label as the classification labels of the image, or may, according to the sizes of the background region and the foreground region of the image, use the classification label corresponding to the larger region as the classification label of the image. In one embodiment, the image may have no foreground region, or the scene recognition results of the foreground region and the background region may be identical; the electronic device may then determine the classification label of the image according to the scene recognition result of the background region. For example, in an image that captures only a lawn, the scene classification label and the target classification label are both green grass, so the classification label of the image is green grass; if there is another object such as a cat on the lawn, the classification labels of the image are green grass and cat.
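The merging rule of step 306 can be sketched as follows, reproducing the lawn-and-cat example above; collapsing identical or missing foreground results into a single label is one plausible reading of the embodiment:

```python
def combine_labels(scene_label, target_label):
    """Combine the background scene classification label and the foreground
    target classification label into the image's classification labels,
    collapsing duplicates and missing foreground results."""
    if target_label is None or target_label == scene_label:
        return [scene_label]  # no foreground region, or identical results
    return [scene_label, target_label]

print(combine_labels("green grass", "green grass"))  # ["green grass"]
print(combine_labels("green grass", "cat"))          # ["green grass", "cat"]
```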
In one embodiment, the step of performing scene recognition on an image to obtain the classification label of the image may also include: performing scene classification and target detection on the image to obtain the scene classification label and the target classification label of the image, and using the scene classification label and the target classification label as the classification labels of the image.
The electronic device may train a neural network that can perform scene classification and target detection simultaneously. Specifically, during neural network training, a training image containing at least one background training target and one foreground training target may be input into the neural network. The neural network performs feature extraction according to the background training target and the foreground training target. It detects the background training target to obtain a first prediction confidence, and obtains a first loss function from the first prediction confidence and a first ground-truth confidence; it detects the foreground training target to obtain a second prediction confidence, and obtains a second loss function from the second prediction confidence and a second ground-truth confidence. A target loss function is obtained from the first loss function and the second loss function, and the parameters of the neural network are adjusted accordingly, so that the trained neural network can subsequently recognize the scene classification and the target classification at the same time. A neural network that can simultaneously detect the foreground region and the background region of an image is thereby obtained. Confidence is the degree of credibility of the measured value of a measured parameter. The first ground-truth confidence indicates the confidence of the specified image category to which the pre-labeled background image in the training image belongs. The second ground-truth confidence indicates the confidence of the specified target category to which the pre-labeled foreground target in the training image belongs.
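A minimal numerical sketch of combining the two branch losses into the target loss follows. The cross-entropy form and the equal weights are illustrative assumptions; the patent only states that the target loss is obtained from the first and second loss functions (later, together with a position loss, by weighted summation):

```python
import math

def cross_entropy(predicted_confidence, true_confidence=1.0):
    """One plausible per-branch loss: mismatch between the predicted and
    ground-truth confidence, measured by cross-entropy."""
    eps = 1e-12
    return -true_confidence * math.log(max(predicted_confidence, eps))

def target_loss(first_conf, second_conf, w1=0.5, w2=0.5):
    """Weighted sum of the classification-branch (background) loss and the
    detection-branch (foreground) loss."""
    first_loss = cross_entropy(first_conf)    # background / scene branch
    second_loss = cross_entropy(second_conf)  # foreground / target branch
    return w1 * first_loss + w2 * second_loss

print(round(target_loss(first_conf=0.9, second_conf=0.8), 4))
```

A perfect prediction on both branches (both confidences 1.0) drives the target loss to zero, which is what the back-propagation step described below minimizes.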
In one embodiment, the neural network includes at least one input layer, a base network layer, a classification network layer, a target detection network layer, and two output layers. The two output layers include a first output layer cascaded with the classification network layer and a second output layer cascaded with the target detection network layer. In the training stage, the input layer receives the training image; the first output layer outputs the first prediction confidence, detected by the classification network layer, of the specified scene category to which the background image belongs; and the second output layer outputs, for each preselected default bounding box detected by the target detection network layer, the offset parameters relative to the ground-truth bounding box corresponding to the specified target, and the second prediction confidence of the specified target category to which it belongs. Fig. 4 is an architecture diagram of the neural network in one embodiment. As shown in Fig. 4, the input layer of the neural network receives a training image with image category labels; feature extraction is performed by a base network (such as a VGG network), and the extracted image features are output to a feature layer. The image is classified by this feature layer to obtain the first loss function; target detection is performed on the foreground target according to the image features to obtain the second loss function; position detection is performed on the foreground target to obtain a position loss function; and the first loss function, the second loss function, and the position loss function are weighted and summed to obtain the target loss function.
The neural network includes a data input layer, a base network layer, a classification network layer, a target detection network layer, and two output layers. The data input layer receives the raw image data. The base network layer preprocesses the image input by the input layer and performs feature extraction. The preprocessing may include mean subtraction, normalization, dimensionality reduction, and whitening. Mean subtraction centers every dimension of the input data at 0; its purpose is to pull the center of the samples back to the coordinate origin. Normalization scales the amplitudes to the same range. Whitening normalizes the amplitude on each feature axis of the data. Feature extraction on the image data may use, for example, the first five convolutional layers of VGG16 to extract features from the original image; the extracted features are then input to the classification network layer and the target detection network layer. The classification network layer may use the depthwise and pointwise convolutions of a network such as MobileNet to detect the features, which are then input to the first output layer to obtain the first prediction confidence of the specified image category to which the image scene belongs; the first loss function is then obtained from the difference between the first prediction confidence and the first ground-truth confidence. The target detection network layer may use a network such as SSD, cascading convolutional feature layers after the first five convolutional layers of VGG16; in the convolutional feature layers, a set of convolutional filters predicts the offset parameters of the preselected default bounding boxes corresponding to the specified target category relative to the ground-truth bounding boxes, as well as the second prediction confidence of the specified target category. The region of interest is the region of a preselected default bounding box. A position loss function is constructed from the offset parameters, and the second loss function is obtained from the difference between the second prediction confidence and the second ground-truth confidence. The first loss function, the second loss function, and the position loss function are weighted and summed to obtain the target loss function; according to the target loss function, the parameters of the neural network are adjusted using the back-propagation algorithm, and the neural network is thus trained.
When the trained neural network is used to recognize an image, the input layer of the neural network receives the input image and extracts its features, which are input to the classification network layer for image scene recognition. At the first output layer, a softmax classifier outputs the confidence of each specified scene category to which the background image may belong, and the image scene with the highest confidence exceeding the confidence threshold is selected as the scene classification label of the background image. The extracted image features are also input to the target detection network layer for foreground target detection. At the second output layer, a softmax classifier outputs the confidence of each specified target category to which the foreground target may belong, together with the corresponding position; the target category with the highest confidence exceeding the confidence threshold is selected as the target classification label of the foreground target in the image, and the position corresponding to the target classification label is output.
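The softmax-then-threshold selection at the output layers can be sketched as follows. The raw scores and the 0.5 threshold are illustrative; a real implementation would take the logits from the network's output layers:

```python
import math

def softmax(logits):
    """Convert raw scores into confidences that sum to 1."""
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

def pick_label(logits, threshold=0.5):
    """Select the category with the highest confidence, if it reaches the threshold."""
    confidences = softmax(logits)
    label, conf = max(confidences.items(), key=lambda kv: kv[1])
    return label if conf >= threshold else None

scene_logits = {"landscape": 2.5, "beach": 0.3, "indoor": -1.0}
print(pick_label(scene_logits))  # "landscape"
```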
When scene recognition is performed on an image by the neural network, the base network layer of the neural network performs feature extraction on the image, and the extracted image features are input to the classification network layer and the target detection network layer. The classification network layer performs scene classification detection and outputs the confidence of the specified image category to which the background region of the image belongs; the target detection network layer performs target detection and obtains the confidence of the specified target category to which the foreground region belongs. The image category and the target category with the highest confidences are used as the scene classification label and the target classification label of the image, so that the scene classification label and the target classification label of the image can be determined simultaneously.
In one embodiment, counting the classification labels of each image and determining, according to the statistical count of the classification labels, the user tag corresponding to the user identifier includes: counting the classification labels of each image within a preset time, and determining the user tag corresponding to the user identifier according to the statistical counts of the classification labels within the preset time.
An image within the preset time refers to an image whose collection or acquisition time falls within the preset time. Specifically, the preset time may be preset by the electronic device, or may be set by the user according to specific needs. The preset time may be 1 day, 5 days, 10 days, and so on, without being limited thereto. The electronic device may count the classification labels of each image within the preset time, and determine the user tag corresponding to the user identifier according to the statistical counts of the classification labels within the preset time. For example, the electronic device may count the classification labels of each image within 24 hours, determine the user tag corresponding to the user identifier according to the statistical counts of the classification labels within those 24 hours, and push information associated with the user tag to the user identifier.
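Restricting the count to a preset time window before determining the user tag can be sketched as follows. The timestamps and the 24-hour window are illustrative:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical (collection_time, classification_label) records
images = [
    (datetime(2018, 6, 7, 9, 0), "baby"),
    (datetime(2018, 6, 7, 18, 30), "baby"),
    (datetime(2018, 6, 6, 12, 0), "gourmet food"),  # outside the 24 h window
    (datetime(2018, 6, 7, 20, 0), "portrait"),
]

def count_labels_within(images, now, window=timedelta(hours=24)):
    """Count classification labels only for images collected within the window."""
    recent = [label for t, label in images if now - t <= window]
    return Counter(recent)

now = datetime(2018, 6, 7, 23, 0)
print(count_labels_within(images, now))  # baby: 2, portrait: 1
```

Images older than the window are excluded, so stale labels cannot distort the user tag, which is the effect the paragraph above describes.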
In one embodiment, when the interval between the moment of the last image recognition and the current moment exceeds a preset recognition interval, the electronic device may also perform scene recognition on unrecognized images to obtain their classification labels, count the classification labels of the images within the preset time, update the user tag corresponding to the user identifier according to the statistical counts of the classification labels of the images within the preset time, and push information associated with the user tag to the user identifier.
In one embodiment, when the number of unrecognized images reaches a preset quantity, the electronic device may also perform scene recognition on the unrecognized images to obtain their classification labels, count the classification labels of the images recognized this time, determine a new user tag corresponding to the user identifier according to the statistical counts of these image classification labels, and update the new user tag into the user tags corresponding to the user identifier.
The electronic device determines the user tag corresponding to the user identifier according to the statistical counts of the image classification labels within the preset time, which prevents the statistical counts of the classification labels of too many images outside the preset time from affecting the determination of the user tag. Periodically updating the user tag corresponding to the user identifier improves the accuracy of the user tag, and thereby the accuracy of the information recommended to the user.
In one embodiment, the information processing method described above further includes: using the classification label with the highest statistical count as the user tag corresponding to the user identifier, and pushing information associated with the user tag to the user identifier.
Specifically, the electronic device may count the classification labels of each image, use the classification label with the highest statistical count as the user tag corresponding to the user identifier, and push information associated with the user tag to the user identifier. For example, when the statistical count of the baby classification label in the electronic device is 25, the statistical count of the portrait classification label is 10, and the statistical count of the gourmet-food classification label is 5, the baby classification label, which has the highest count, may be used as the user tag corresponding to the user identifier. The electronic device can then push baby-related information to the user identifier, such as articles on baby care, music or videos suitable for babies, or purchase links for baby food.
By using the classification label with the highest statistical count as the user tag corresponding to the user identifier and pushing information associated with that user tag to the user identifier, the user tag of the user identifier can be determined accurately, improving the accuracy of information recommendation.
As shown in Fig. 5, in one embodiment, the information processing method provided further includes steps 502 and 504.
Step 502: use the ratio of the statistical count of a classification label to the statistical count of all classification labels as the weight value of that classification label.
Step 504: push information associated with the classification labels to the user identifier according to the weight values.
The electronic device pushes information associated with the classification labels to the user identifier according to the weight values of the classification labels. Specifically, the higher the weight value of a classification label, the more information associated with that label is pushed to the user identifier; the lower the weight value, the less is pushed. For example, when the statistical count of the baby classification label in the electronic device is 25, the statistical count of the portrait classification label is 15, and the statistical count of the gourmet-food classification label is 10, the weight values of the classification labels are baby 0.5, portrait 0.3, and gourmet food 0.2 respectively; then, of the information pushed to the user identifier, information associated with baby accounts for 50%, information associated with portrait accounts for 30%, and information associated with gourmet food accounts for 20%.
Pushing information associated with the classification labels to the user identifier according to their weight values can increase the richness of the information recommended to the user, and pushing in proportion to the weight values can improve the accuracy of information recommendation.
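The weight computation of steps 502 and 504 can be sketched as follows, reproducing the 25/15/10 example from the text; the rounding of the push quota to whole items is an implementation choice, not specified in the patent:

```python
def label_weights(counts):
    """Weight of each classification label = its statistical count divided by
    the total statistical count of all classification labels."""
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

def push_quota(weights, total_items=10):
    """Allocate how many pushed items each label receives, in proportion to its weight."""
    return {label: round(w * total_items) for label, w in weights.items()}

counts = {"baby": 25, "portrait": 15, "gourmet food": 10}
weights = label_weights(counts)
print(weights)            # {'baby': 0.5, 'portrait': 0.3, 'gourmet food': 0.2}
print(push_quota(weights))  # {'baby': 5, 'portrait': 3, 'gourmet food': 2}
```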
In one embodiment, an information processing method is provided; the specific steps to realize the method are described as follows:
First, the electronic device performs scene recognition on an image to obtain the classification label of the image. The classification label of an image may be landscape, beach, blue sky, green grass, snow scene, fireworks, spotlight, text, portrait, baby, cat, dog, food, and so on. The electronic device may train a scene recognition model in advance according to deep learning algorithms such as VGG or CNN, perform scene recognition on the images in the electronic device according to the scene recognition model, and determine the classification label of each image according to the scene recognition result.
Optionally, the classification label of the image includes a scene classification label and a target classification label. The scene classification label is obtained from the scene recognition result of the image background region; the target classification label is obtained from the target detection result of the image foreground region. The electronic device may recognize the background region and the foreground region of the image, and use the resulting scene classification label and target classification label as the classification labels of the image.
Optionally, the electronic device performs scene recognition on the image to obtain its scene classification label, performs target detection on the image to obtain its target classification label, and uses both the scene classification label and the target classification label as the classification labels of the image.
The electronic device may prestore image feature information corresponding to multiple scene classification labels, match the image feature information of the image to be recognized against the prestored feature information, and take the scene classification label corresponding to the successfully matched feature information as the scene classification label of the image. Likewise, when performing target detection, the electronic device may match the image feature information against the feature information corresponding to prestored target classification labels, and take the target classification label corresponding to the successfully matched feature information as the target classification label of the image. The electronic device may use both the scene classification label and the target classification label as classification labels of the image, or, according to the sizes of the image background region and foreground region, use the label corresponding to the larger region as the classification label of the image.
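The prestored-feature matching described above can be sketched as a nearest-neighbour lookup. The feature vectors, the distance threshold, and the function name `match_label` are purely illustrative assumptions, not the patent's actual matching scheme:

```python
import math

def match_label(feature, prestored, threshold=0.5):
    """Return the label whose prestored feature vector is closest to
    `feature`, or None if no vector is within `threshold`."""
    best_label, best_dist = None, threshold
    for label, ref in prestored.items():
        dist = math.dist(feature, ref)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Illustrative prestored feature vectors (not real model features).
prestored = {"beach": [0.9, 0.1], "snow scene": [0.1, 0.9]}
print(match_label([0.85, 0.15], prestored))  # beach
```

A real implementation would use high-dimensional features from a trained model; the two-dimensional vectors here only illustrate the match-against-prestored-features idea.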
Optionally, the electronic device performs scene classification and target detection on the image to obtain its scene classification label and target classification label, and uses both as the classification labels of the image. The electronic device may train a neural network that performs scene classification and target detection simultaneously: the base network layers of the neural network extract features from the image, and the extracted image features are fed to a classification network and a target detection network. The classification network outputs the confidence of each specified image category for the image background region, and the target detection network outputs the confidence of each specified target category for the foreground region, thereby determining the scene classification label and target classification label of the image.
Then, the electronic device counts the classification labels of the images and determines the user tag corresponding to the user identifier according to the counts. The electronic device may count the classification labels of the images to obtain a count for each distinct label, and determine the user tag corresponding to the user identifier according to those counts. Specifically, the electronic device may take the classification label with a larger count as the user tag corresponding to the user identifier; the electronic device may also prestore the classification labels corresponding to different user tags, and take the user tag corresponding to the classification label with a larger count as the user tag corresponding to the user identifier.
Optionally, the electronic device counts the classification labels of the images within a preset time, and determines the user tag corresponding to the user identifier according to the counts within the preset time. An image within the preset time refers to an image whose shooting time or acquisition time falls within the preset range.
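The time-windowed counting can be sketched with `collections.Counter`; the seven-day window, the image records, and the function name `user_tag` are illustrative assumptions:

```python
from collections import Counter
from datetime import datetime, timedelta

def user_tag(images, window=timedelta(days=7), now=None):
    """Count the labels of images acquired within `window` and return
    the most common label as the user tag (None if no image qualifies)."""
    now = now or datetime.now()
    counts = Counter(
        label
        for taken_at, labels in images
        for label in labels
        if now - taken_at <= window
    )
    return counts.most_common(1)[0][0] if counts else None

now = datetime(2018, 6, 8)
images = [
    (now - timedelta(days=1), ["baby", "portrait"]),
    (now - timedelta(days=2), ["baby"]),
    (now - timedelta(days=30), ["landscape"]),  # outside the window
]
print(user_tag(images, now=now))  # baby
```

Restricting the count to a recent window lets the user tag track the user's current interests rather than the whole photo library.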
Optionally, when the interval between the last image recognition moment and the current moment exceeds a preset recognition interval, the electronic device may perform scene recognition on the unrecognized images to obtain their classification labels, count the classification labels of the images within the preset time, and update the user tag corresponding to the user identifier according to those counts.
Optionally, when the number of unrecognized images reaches a preset quantity, the electronic device may perform scene recognition on the unrecognized images to obtain their classification labels, count the classification labels of the images recognized this time, determine a new user tag corresponding to the user identifier according to those counts, and update the user tag corresponding to the user identifier to the new user tag.
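The batched update above can be sketched as a running counter that re-derives the user tag once a preset number of unrecognized images has accumulated; the class name and the batch size of 3 are illustrative assumptions:

```python
from collections import Counter

class UserTagUpdater:
    """Re-derive the user tag each time `batch_size` new images arrive."""

    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.pending = []       # labels of images not yet counted
        self.counts = Counter()
        self.user_tag = None

    def add_image(self, labels):
        self.pending.append(labels)
        if len(self.pending) >= self.batch_size:
            for image_labels in self.pending:
                self.counts.update(image_labels)
            self.pending.clear()
            self.user_tag = self.counts.most_common(1)[0][0]
        return self.user_tag

u = UserTagUpdater(batch_size=3)
u.add_image(["cat"])
u.add_image(["cat"])
print(u.add_image(["dog"]))  # cat (tag derived once the batch is full)
```

Batching avoids re-running recognition and counting for every single new photo, which matters on a power-constrained device.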
Then, the electronic device pushes information associated with the user tag to the user identifier. Information associated with the user tag may be information whose content contains the user tag, or information whose information label is the user tag. The electronic device may push the information to the user identifier when the electronic device is displaying images, when the screen is lit, or when the electronic device is browsing a web page, without being limited thereto.
Optionally, the classification label with the highest count is taken as the user tag corresponding to the user identifier, and information associated with the user tag is pushed to the user identifier. By taking the classification label with the highest count as the user tag corresponding to the user identifier and pushing associated information to the user, the user tag of the user identifier can be determined accurately, improving the accuracy of information recommendation.
Optionally, the electronic device takes the ratio of the count of each classification label to the total count of all classification labels as the weight value of that label, and pushes information associated with the label to the user identifier according to the weight value. Specifically, the higher the weight value of a classification label, the more information associated with that label is pushed to the user identifier; the lower the weight value, the less such information is pushed. Pushing according to weight values increases the richness of the information recommended to the user while maintaining the accuracy of information recommendation.
It should be understood that although the steps in the flowcharts of Figs. 2, 3 and 5 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless expressly stated otherwise herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2, 3 and 5 may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be executed at different times, and the execution order of these sub-steps or stages is not necessarily sequential; they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
Fig. 6 is a structural block diagram of an information processing apparatus according to one embodiment. As shown in Fig. 6, an information processing apparatus includes a scene recognition module 602, a label determining module 604 and a pushing module 606, in which:
Scene recognition module 602 is configured to perform scene recognition on an image to obtain the classification label of the image.
Label determining module 604 is configured to count the classification labels of the images and determine the user tag corresponding to the user identifier according to the label counts.
Pushing module 606 is configured to push information associated with the user tag to the user identifier.
In one embodiment, scene recognition module 602 may also be configured to perform scene recognition on the image to obtain the scene classification label and target classification label of the image.
In one embodiment, scene recognition module 602 may also be configured to perform scene recognition on the image to obtain its scene classification label, perform target detection on the image to obtain its target classification label, and use both the scene classification label and the target classification label as the classification labels of the image.
In one embodiment, scene recognition module 602 may also be configured to perform scene classification and target detection on the image to obtain its scene classification label and target classification label, and use both as the classification labels of the image.
In one embodiment, label determining module 604 may also be configured to count the classification labels of the images within a preset time and determine the user tag corresponding to the user identifier according to the counts within the preset time.
In one embodiment, label determining module 604 may also be configured to take the classification label with the highest count as the user tag corresponding to the user identifier, and pushing module 606 may also be configured to push information associated with the user tag to the user identifier.
In one embodiment, pushing module 606 may also be configured to take the ratio of the count of a classification label to the total count of all classification labels as the weight value of that label, and push information associated with the label to the user identifier according to the weight value.
The division of the modules in the above information processing apparatus is for illustration only; in other embodiments, the information processing apparatus may be divided into different modules as required to complete all or part of the functions of the apparatus.
For the specific limitations of the information processing apparatus, reference may be made to the limitations of the information processing method above, which are not repeated here. Each module in the above information processing apparatus may be realized fully or partially through software, hardware, or a combination thereof. Each module may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
The realization of each module in the information processing apparatus provided in the embodiments of the present application may take the form of a computer program. The computer program may run on a terminal or a server, and the program modules constituted by the computer program may be stored in the memory of the terminal or server. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are realized.
The embodiments of the present application also provide a computer readable storage medium: one or more non-volatile computer readable storage media containing computer executable instructions that, when executed by one or more processors, cause the processors to execute the steps of the information processing method.
A computer program product containing instructions, when run on a computer, causes the computer to execute the information processing method.
The embodiments of the present application also provide an electronic device. The above electronic device includes an image processing circuit, which may be realized using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 7 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 7, for ease of illustration, only the aspects of the image processing technology relevant to the embodiments of the present application are shown.
As shown in Fig. 7, the image processing circuit includes an ISP processor 740 and a control logic device 750. Image data captured by an imaging device 710 is first processed by the ISP processor 740, which analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the imaging device 710. The imaging device 710 may include a camera with one or more lenses 712 and an image sensor 714. The image sensor 714 may include a color filter array (such as a Bayer filter); the image sensor 714 may obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 740. A sensor 720 (such as a gyroscope) may supply parameters for image processing (such as anti-shake parameters) to the ISP processor 740 based on the interface type of the sensor 720. The sensor 720 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.
In addition, the image sensor 714 may also send raw image data to the sensor 720; the sensor 720 may supply the raw image data to the ISP processor 740 based on the interface type of the sensor 720, or store the raw image data in a video memory 730.
The ISP processor 740 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the ISP processor 740 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be carried out with the same or different bit depth precision.
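As a simplified illustration of the bit depths mentioned above (not the ISP processor 740's actual implementation), a sample at a higher bit depth can be requantized to a lower one by discarding least significant bits:

```python
def requantize(samples, src_bits=10, dst_bits=8):
    """Reduce per-pixel bit depth by discarding the least significant bits."""
    shift = src_bits - dst_bits
    return [s >> shift for s in samples]

# A 10-bit sample spans 0..1023; an 8-bit sample spans 0..255.
print(requantize([0, 512, 1023]))  # [0, 128, 255]
```

Real ISPs use more careful rounding and tone mapping; the shift only shows why operations at different bit depths remain compatible.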
The ISP processor 740 may also receive image data from the video memory 730. For example, the sensor 720 interface sends raw image data to the video memory 730, and the raw image data in the video memory 730 is then supplied to the ISP processor 740 for processing. The video memory 730 may be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving raw image data from the image sensor 714 interface, from the sensor 720 interface, or from the video memory 730, the ISP processor 740 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the video memory 730 for additional processing before being displayed. The ISP processor 740 receives the processed data from the video memory 730 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 740 may be output to a display 770 for viewing by the user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 740 may also be sent to the video memory 730, and the display 770 may read image data from the video memory 730. In one embodiment, the video memory 730 may be configured to implement one or more frame buffers. The output of the ISP processor 740 may also be sent to an encoder/decoder 760 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 770. The encoder/decoder 760 may be realized by a CPU, a GPU, or a coprocessor.
The statistical data determined by the ISP processor 740 may be sent to the control logic device 750. For example, the statistical data may include image sensor 714 statistics such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black level compensation, and lens 712 shading correction. The control logic device 750 may include a processor and/or microcontroller executing one or more routines (such as firmware); based on the received statistical data, the one or more routines may determine control parameters of the imaging device 710 and control parameters of the ISP processor 740. For example, the control parameters of the imaging device 710 may include sensor 720 control parameters (such as gain, integration time of exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 712 control parameters (such as focus or zoom focal length), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 712 shading correction parameters.
In the present embodiment, the above information processing method can be realized with the image processing technology of Fig. 7.
Any reference to memory, storage, a database, or other media used in the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they cannot therefore be construed as limiting the patent scope of the present application. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all belong to the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.
Claims (10)
1. An information processing method, characterized by comprising:
performing scene recognition on an image to obtain a classification label of the image;
counting the classification labels of the images, and determining a user tag corresponding to a user identifier according to the counts of the classification labels;
pushing information associated with the user tag to the user identifier.
2. The method according to claim 1, characterized in that the classification label of the image includes a scene classification label and a target classification label.
3. The method according to claim 2, characterized in that the performing scene recognition on an image to obtain a classification label of the image comprises:
performing scene recognition on the image to obtain the scene classification label of the image;
performing target detection on the image to obtain the target classification label of the image;
using the scene classification label and the target classification label as the classification labels of the image.
4. The method according to claim 2, characterized in that the performing scene recognition on an image to obtain a classification label of the image comprises:
performing scene classification and target detection on the image to obtain the scene classification label and target classification label of the image, and using the scene classification label and the target classification label as the classification labels of the image.
5. The method according to claim 1, characterized in that the counting the classification labels of the images and determining a user tag corresponding to a user identifier according to the counts of the classification labels comprises:
counting the classification labels of the images within a preset time, and determining the user tag corresponding to the user identifier according to the counts of the classification labels within the preset time.
6. The method according to claim 1, characterized in that the method further comprises:
taking the classification label with the highest count as the user tag corresponding to the user identifier, and pushing information associated with the user tag to the user identifier.
7. The method according to claim 1, characterized in that the method further comprises:
taking the ratio of the count of the classification label to the total count of all the classification labels as the weight value of the classification label;
pushing information associated with the classification label to the user identifier according to the weight value.
8. An information processing apparatus, characterized by comprising:
a scene recognition module, configured to perform scene recognition on an image to obtain a classification label of the image;
a label determining module, configured to count the classification labels of the images and determine a user tag corresponding to a user identifier according to the counts of the classification labels;
a pushing module, configured to push information associated with the user tag to the user identifier.
9. An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to execute the steps of the information processing method according to any one of claims 1 to 7.
10. A computer readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program realizes the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810585579.0A | 2018-06-08 | 2018-06-08 | Information processing method and device, electronic equipment, computer readable storage medium
Publications (1)

Publication Number | Publication Date
---|---
CN108875820A | 2018-11-23
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108777815A (en) | Method for processing video frequency and device, electronic equipment, computer readable storage medium | |
CN108875820A (en) | Information processing method and device, electronic equipment, computer readable storage medium | |
CN108764372B (en) | Data set construction method and device, mobile terminal, readable storage medium | |
CN108805103A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108830208A (en) | Method for processing video frequency and device, electronic equipment, computer readable storage medium | |
CN108764208A (en) | Image processing method and device, storage medium, electronic equipment | |
WO2019233392A1 (en) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
CN108921040A (en) | Image processing method and device, storage medium, electronic equipment | |
CN108875619A (en) | Method for processing video frequency and device, electronic equipment, computer readable storage medium | |
CN108960290A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108764370A (en) | Image processing method, device, computer readable storage medium and computer equipment | |
CN108810413A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108959462A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108875821A (en) | The training method and device of disaggregated model, mobile terminal, readable storage medium storing program for executing | |
CN109063737A (en) | Image processing method, device, storage medium and mobile terminal | |
CN108961302B (en) | Image processing method, image processing device, mobile terminal and computer readable storage medium | |
CN110334635A (en) | Main body method for tracing, device, electronic equipment and computer readable storage medium | |
CN105407276A (en) | Photographing method and equipment | |
CN108984657A (en) | Image recommendation method and apparatus, terminal, readable storage medium storing program for executing | |
CN108765033B (en) | Advertisement information pushing method and device, storage medium and electronic equipment | |
CN108810418A (en) | Image processing method, device, mobile terminal and computer readable storage medium | |
CN108805198A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN109002843A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108897786A (en) | Recommended method, device, storage medium and the mobile terminal of application program | |
CN108764321B (en) | Image recognition method and device, electronic equipment, storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181123 |