WO2020211003A1 - Image processing method, computer-readable storage medium and computer device - Google Patents

Image processing method, computer-readable storage medium and computer device

Info

Publication number
WO2020211003A1
WO2020211003A1 (PCT/CN2019/083000)
Authority
WO
WIPO (PCT)
Prior art keywords
image
sample
terminal
target
user portrait
Prior art date
Application number
PCT/CN2019/083000
Other languages
English (en)
Chinese (zh)
Inventor
杨阳
林立安
刘金
Original Assignee
深圳市欢太科技有限公司
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市欢太科技有限公司 and Oppo广东移动通信有限公司
Priority to PCT/CN2019/083000 priority Critical patent/WO2020211003A1/fr
Priority to CN201980090804.6A priority patent/CN113366420B/zh
Publication of WO2020211003A1 publication Critical patent/WO2020211003A1/fr

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Definitions

  • This application relates to the field of imaging technology, and in particular to an image processing method, a non-volatile computer-readable storage medium, and a computer device.
  • Existing image resource acquisition methods include searching for required image resources through search engines, applications, and the like, and viewing image resources shared by other users through social networking sites.
  • An image processing method, a non-volatile computer-readable storage medium, and a computer device are provided.
  • An image processing method, including:
  • One or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the following operations:
  • a computer device includes a memory and a processor.
  • the memory stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, the processor is caused to perform the following operations:
  • The user portrait corresponding to the terminal can be obtained, and the matching target image can be screened out from the picture library based on the user portrait and the first location information. Since the user portrait corresponding to the terminal reflects the image requirements of the terminal, an image that matches the location of the terminal and meets the personalized needs of the terminal can be obtained from the picture library, and the obtained target image is sent to the terminal.
  • Based on the target image, the terminal can obtain suitable shooting scenes, composition methods, and the like, which can improve the accuracy of image push and meet the individual needs of different users.
  • Fig. 1 is an application environment diagram of an image processing method in an embodiment.
  • Fig. 2 is a flowchart of an image processing method in an embodiment.
  • Fig. 3 is a flowchart of establishing a picture library in an embodiment.
  • Fig. 4 is a flowchart of training an image scoring model in an embodiment.
  • Fig. 5 is a flowchart of establishing an image tag library in an embodiment.
  • Fig. 6 is a flowchart of screening target images from a picture library in an embodiment.
  • Fig. 7 is a flowchart of training an image recommendation model in an embodiment.
  • Fig. 8 is a structural block diagram of an image processing apparatus according to an embodiment.
  • Fig. 9 is a structural block diagram of an image processing device in another embodiment.
  • Fig. 10 is a schematic diagram of the internal structure of a server (or cloud, etc.) in an embodiment.
  • The terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another.
  • For example, the first location information may be referred to as second location information, and similarly, the second location information may be referred to as first location information. Both the first location information and the second location information are location information, but they are not the same location information.
  • Fig. 1 is an application environment diagram of an image processing method in an embodiment.
  • the application environment includes a server 110 and a terminal 120.
  • the server 110 is provided with a picture library.
  • The server 110 can receive the first location information uploaded by the terminal 120 and obtain the user portrait corresponding to the terminal 120. Based on the user portrait and the first location information, the server 110 screens the matching target image from the picture library and sends the target image to the terminal 120.
  • the server 110 may be a single server, or a server cluster composed of multiple servers.
  • The terminal 120 may be, but is not limited to, various mobile phones, computers, portable devices, and the like.
  • FIG. 2 is a flowchart of an image processing method in an embodiment. As shown in FIG. 2, the image processing method includes operations 202 to 206.
  • Operation 202: Receive the first location information uploaded by the terminal, and obtain a user portrait corresponding to the terminal.
  • the first location information refers to specific location information on the map.
  • the first location information may be the current location information of the terminal.
  • the terminal may obtain the current longitude and latitude information of the terminal through GPS (Global Positioning System) or a mobile network, and obtain the first location information of the terminal according to the longitude and latitude information;
  • the first location information may also be the upcoming location information planned by the terminal.
  • The terminal may obtain the user's itinerary information and obtain the first location information according to the itinerary information; the first location information may also be location information input by the user and obtained by the terminal.
  • the first location information may be specific to a country, province, city, district or specific scenic spot, etc.
  • User portrait is a tool used to reflect the terminal's demand characteristics for images. Specifically, the user portrait may be generated based on the browsing information of the terminal, the operation information on the image, and the like.
  • the server receives the first location information uploaded by the terminal, and obtains the user portrait corresponding to the terminal. Specifically, the server stores the user portrait corresponding to each terminal, and the server may obtain the user portrait corresponding to the terminal when receiving the first location information uploaded by the terminal.
  • Operation 204: A matching target image is screened from the picture library based on the user portrait and the first location information.
  • the image library contains sample images for screening.
  • the sample image can be an image uploaded by each terminal or an image of a third-party system.
  • The third-party system can be any of various systems with image sharing or storage functions, such as Instagram, Flickr, and Douban.
  • the sample images in the picture library may be obtained after screening the images uploaded by the terminal or the images of the third-party system.
  • the server may score the sample images, and store the sample images whose scores exceed the score threshold in the picture library.
  • the image uploaded by the terminal and the image of the third-party system may be stored in the image library in the form of information contained in the image, a score of the image, or a label corresponding to the image.
  • the server filters out matching target images from the picture library based on the user portrait and the first location information.
  • the target image can be one or more.
  • The picture library may contain sample labels corresponding to the sample images. The server may match the user portrait and the first location information against the information of the sample images contained in the picture library, the sample labels, and the like, and use the sample image whose sample label has the highest matching degree, or a matching degree exceeding a matching degree threshold, as the target image. The server can also screen the target image based on a preset image recommendation model: the acquired user portrait and first location information are input into the image recommendation model, and the target image output by the image recommendation model is obtained. When there is no target image matching the user portrait and the first location information in the picture library, the server may use one or more sample images with the highest scores among the sample images contained in the picture library as the target image.
  • Operation 206: The target image is sent to the terminal.
  • the server can send the target image to the terminal.
  • The terminal can receive the target image sent by the server and display the target image. Further, the server may send the target images to the terminal in order of their scores, so that the terminal displays the received target images in order of the corresponding scores from high to low; the target images may also be displayed in order of their matching degree from high to low.
  • The embodiments provided in this application can receive the first location information uploaded by the terminal, obtain the user portrait corresponding to the terminal, screen out the matching target image from the picture library based on the user portrait and the first location information, and send the target image to the terminal.
  • The user portrait corresponding to the terminal can be obtained, and the matching target image can be screened out from the picture library based on the user portrait and the first location information. Since the user portrait corresponding to the terminal reflects the image requirements of the terminal, an image that matches the location of the terminal and meets the personalized needs of the terminal can be obtained from the picture library, and the obtained target image is sent to the terminal.
  • Based on the target image, the terminal can obtain suitable shooting scenes, composition methods, and the like, which can improve the accuracy of image push and meet the individual needs of different users.
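  • As an informal illustration only (not part of the published application), the following Python sketch shows how a server-side handler for operations 202 to 206 might look; every name here (PICTURE_LIBRARY, USER_PORTRAITS, handle_request, the label-overlap matching) is a hypothetical simplification of the described flow.
```python
from dataclasses import dataclass


@dataclass
class SampleImage:
    image_id: str
    labels: set           # sample label: second location information plus tag information
    quality_score: float  # output of the image scoring model


# Hypothetical in-memory stand-ins for the picture library and the stored user portraits.
PICTURE_LIBRARY = []          # list of SampleImage
USER_PORTRAITS = {}           # terminal id -> set of preferred tags


def handle_request(terminal_id, first_location, top_k=5):
    """Operation 202: look up the user portrait for the terminal that uploaded the location."""
    portrait = USER_PORTRAITS.get(terminal_id, set())
    wanted = portrait | {first_location}

    # Operation 204: keep samples whose labels overlap the portrait plus the first location,
    # falling back to the highest-scored samples when nothing matches (as described above).
    matches = [s for s in PICTURE_LIBRARY if s.labels & wanted]
    if not matches:
        matches = list(PICTURE_LIBRARY)
    matches.sort(key=lambda s: s.quality_score, reverse=True)

    # Operation 206: return the images that would be sent (pushed) to the terminal.
    return matches[:top_k]
```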
  • Figure 3 is a flow chart of establishing a picture library in an embodiment. As shown in Figure 3, in one embodiment, the provided image processing method further includes:
  • the sample image can be an image uploaded by each terminal or an image of a third-party system.
  • The third-party system can be any of various systems with image sharing or storage functions, such as Instagram, Flickr, and Douban.
  • the terminal can obtain the image uploaded by the user and send it to the server.
  • The second location information of the sample image is obtained, the label information corresponding to the sample image is obtained according to the labels contained in the image label library, and the sample label corresponding to the sample image is obtained according to the second location information and the label information.
  • the sample image is an image collected by any terminal through a camera.
  • the terminal that collects sample images can be, but is not limited to, various mobile phones, computers, portable devices, and so on.
  • the second location information is the location information of the terminal that collects the sample image when collecting the sample image.
  • the server may obtain the latitude and longitude information of the sample image, and obtain the second location information according to the latitude and longitude information.
  • the longitude and latitude information can be obtained through GPS, mobile network, etc. when the terminal collects the sample image, and the terminal can store the obtained longitude and latitude information as the image information of the sample image.
  • the image tag library is a collection of tags that indicate the shooting characteristics of an image.
  • the shooting characteristics of the image may be, but are not limited to, the shooting scene, shooting time, shooting style, color, spatial relationship, and shape characteristics of the image.
  • the server can identify and analyze the sample image according to the labels contained in the image label library to obtain label information corresponding to the sample image.
  • the server can also build a label detection model in advance, and detect the sample image based on the label detection model to obtain label information corresponding to the sample image.
  • the server can obtain the sample label corresponding to the sample image according to the second location information and the label information, that is, the second location information and label information can be combined into a sample label of the sample image.
  • For example, when the second location information is Hawaii and the label information is seascape, the corresponding sample label may be "Hawaii, seascape"; when the label information is seascape and wedding photo, the corresponding sample label may be "Hawaii, seascape, wedding photo", and so on.
  • the sample image is scored according to the image scoring model to obtain the image quality score of the sample image.
  • the image scoring model is a model for scoring images.
  • the image quality score is the score used to reflect the shooting quality of the image. Generally, the higher the image quality score, the better the image quality; conversely, the lower the image quality score, the worse the image quality.
  • the server may build an image scoring model for scoring images in advance, and score the sample image according to the image scoring model to obtain the image quality score of the sample image.
  • the sample label and the image quality score corresponding to the sample image are stored in the picture library.
  • After the server obtains the sample label from the second location information and label information of the sample image, and scores the sample image according to the image scoring model to obtain the image quality score of the sample image, it can store the sample label and image quality score corresponding to the sample image in the picture library.
  • the server may also detect whether the image quality score corresponding to the sample image exceeds the score threshold; when the image quality score exceeds the score threshold, the sample label and image quality score corresponding to the sample image are stored in the picture library.
  • the score threshold can be set according to actual application requirements and is not limited here. For example, taking the highest image quality score of 100 as an example, the score threshold may be 70, 78, 85, 90, etc., but is not limited thereto.
  • the server may store the sample label and image quality score corresponding to the sample image in the picture library.
  • the server may also sort the sample images according to the image quality scores, thereby obtaining a certain number of sample images with image quality scores from high to low, and store the sample tags and image quality scores corresponding to the sample images in the picture library.
  • The image scoring model scores the sample image to obtain the image quality score of the sample image.
  • The sample label and image quality score corresponding to the sample image are stored in the picture library, so that a picture library containing image labels and image quality scores is obtained, which provides data support for image push. Matching target images can then be screened according to the image quality scores, which can improve the quality of the pushed images.
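  • A minimal sketch of this library-building step, reusing the hypothetical SampleImage/PICTURE_LIBRARY structures from the earlier sketch; the threshold value and function names are illustrative assumptions, and the quality score is assumed to come from the image scoring model of Fig. 4.
```python
SCORE_THRESHOLD = 85.0   # example only; the text mentions thresholds such as 70, 78, 85, 90


def add_sample(image_id, second_location, tag_info, quality_score):
    """Compose the sample label from the second location information and the tag
    information, then store the entry only if its quality score passes the threshold."""
    sample_label = {second_location} | set(tag_info)   # e.g. {"Hawaii", "seascape"}
    if quality_score < SCORE_THRESHOLD:
        return False                                   # below threshold: not stored
    PICTURE_LIBRARY.append(SampleImage(image_id, sample_label, quality_score))
    return True


# Example: add_sample("img-001", "Hawaii", {"seascape", "wedding photo"}, 91.5)
```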
  • Fig. 4 is a flowchart of training an image scoring model in an embodiment. As shown in Figure 4, in one embodiment, the provided image processing method further includes:
  • Operation 402: Obtain a training image and a corresponding real score range, where the real score range is determined by scoring the training image according to the image scoring standard.
  • the real score range is determined by scoring the training image according to the image scoring standard.
  • the image scoring standard is determined by professional photographers based on the composition and shooting effect of the image.
  • the server may receive the determined image scoring standard, and obtain the real score range obtained by scoring the training image according to the scoring standard.
  • the server may also receive the real score obtained by scoring the training image according to the scoring standard, and determine the real score range according to the allowable error range. For example, when the true score is 80, if the allowable error range is 5 units, the true score range is 75 to 85.
  • the training image is input to the image scoring model to obtain the predicted quality score corresponding to the training image.
  • the server can train the image scoring model according to a deep learning algorithm in VGG (Visual Geometry Group), CNN (Convolutional Neural Network), SSD (single shot multibox detector), or decision tree (Decision Tree).
  • The image scoring model generally includes an input layer, a hidden layer, and an output layer. The input layer is used to receive the input image, the hidden layer is used to process the received image, and the output layer is used to output the result of the processing, that is, the predicted quality score of the image.
  • The server may adjust the parameters of the image scoring model according to the received image scoring standard, so that the image scoring model can process the input training image according to the adjusted parameters to obtain the predicted quality score of the image.
  • the server can obtain the loss function according to the real score range and the predicted quality score.
  • the scoring criteria may include multiple scoring items, such as composition, color, resolution, sharpness, etc.
  • The server may receive the real score range of each scoring item in the scoring criteria corresponding to the training image, and obtain the predicted quality score of each scoring item output after the image scoring model scores the training image. According to the real score range and the predicted quality score corresponding to each item, an item score loss function corresponding to each item is obtained, and the item score loss functions are weighted and summed to obtain the score loss function.
  • Operation 408: Adjust the parameters of the image scoring model according to the score loss function, and return to the operation of inputting the training image into the image scoring model to obtain the predicted quality score corresponding to the training image, until the obtained predicted quality score is within the real score range.
  • the server adjusts the parameters of the image scoring model according to the score loss function.
  • The server can use the backpropagation algorithm to adjust the parameters of the image scoring model according to the score loss function and thereby train the image scoring model. That is, the server repeatedly inputs the training image into the image scoring model to obtain the predicted quality score corresponding to the training image, obtains the score loss function according to the real score range and the predicted quality score, and adjusts the parameters of the image scoring model, stopping when the predicted quality score is within the real score range.
  • The image scoring model is trained using training images: the parameters of the image scoring model are adjusted according to the scoring criteria set by professional photographers, the real score range obtained by scoring the training image based on the scoring criteria is compared with the predicted quality score, and the parameters of the image scoring model are further adjusted according to the comparison result. An image scoring model that can score images based on the scoring criteria is thus obtained, which can improve the accuracy of image scoring.
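  • The sketch below illustrates one way the per-item score loss and its weighted sum could be computed; the quadratic form of the per-item loss is an assumption (the application only states that item losses are derived from the real score range and the predicted quality score and then weighted and summed).
```python
def item_loss(predicted, low, high):
    """Zero when the predicted item score lies inside the real score range,
    otherwise the squared distance to the nearest bound (assumed form)."""
    if low <= predicted <= high:
        return 0.0
    nearest = low if predicted < low else high
    return (predicted - nearest) ** 2


def score_loss(predicted_items, real_ranges, weights):
    """Weighted sum of per-item losses over items such as composition, color,
    resolution, and sharpness."""
    return sum(weights[item] * item_loss(predicted_items[item], *real_ranges[item])
               for item in real_ranges)


# Training stops once every predicted item score falls inside its real score range,
# i.e. when the score loss is zero for the training images.
```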
  • Figure 5 is a flowchart of establishing an image tag library in an embodiment. As shown in Figure 5, in one implementation, the provided image processing method further includes:
  • the server receives the label sent by the terminal to mark the target image.
  • the terminal may receive a tag marked on the target image input by the user, and send the tag marked on the target image to the server.
  • the target image can be displayed on the terminal.
  • the user can mark the target image according to the image content, image shooting effect, image shooting location, etc., and the terminal can obtain the tag entered by the user to mark the target image .
  • the confirmation instruction information for the tag is received, and the target tag is obtained from the received tags according to the confirmation instruction information.
  • the confirmation instruction information of the label may be sent to the server by the audit terminal.
  • the audit terminal refers to a terminal with tag audit authority.
  • the confirmation instruction information includes an instruction to pass the verification and/or an instruction to fail the verification of the label.
  • the server can send the received label to the audit terminal.
  • The audit terminal can display the received label and the corresponding target image, obtain the audit user's confirmation instruction information on the label, and send it to the server. The server can then obtain the target label from the received labels according to the confirmation instruction information.
  • the server may also compare the received tags with the tags contained in the image tag library, and send tags that are not in the image tag library among the received tags to the review terminal for review.
  • the server may use the approved label among the received labels as the target label according to the confirmation instruction information.
  • For example, when the received tags include the four tags portrait, landscape, river, and lunch, and the confirmation instruction information includes approval instructions for the two tags portrait and landscape, the portrait and landscape tags can be used as target tags.
  • the target tag is added to the image tag library.
  • The server can add the target tag to the image tag library. Therefore, when the server subsequently obtains a sample image, it obtains the label information corresponding to the sample image according to the tags contained in the updated image tag library, which can improve the accuracy of the image label information and enrich the image tag library.
  • the provided image processing method can also update the sample label of the sample image corresponding to the target image in the picture library according to the target label.
  • the server updates the sample label of the sample image corresponding to the target image in the picture library according to the target label.
  • the server may obtain a sample image corresponding to the target image in the picture library, and add the target label to the sample label of the sample image.
  • For example, suppose target image A corresponds to sample image B, and the sample labels of sample image B are New Zealand, Queenstown, and hot air balloon. If the terminal uploads the sunset and grass tags and the server, based on the confirmation instruction information for these tags, determines sunset and grass as the target tags, the sample labels of sample image B can be updated to New Zealand, Queenstown, hot air balloon, sunset, and grass.
  • In this way, the labels of the sample image can be enriched and the accuracy of the image labels can be improved.
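  • A small sketch, under the same hypothetical naming, of how approved tags could be merged into the tag library and into the sample labels of the corresponding sample image; the set-intersection treatment of the confirmation instruction information is an illustrative simplification.
```python
IMAGE_TAG_LIBRARY = set()


def apply_confirmation(received_tags, approved_tags, sample_labels):
    """Keep only the tags approved by the audit terminal, add them to the image tag
    library, and merge them into the sample labels of the corresponding sample image."""
    target_tags = set(received_tags) & set(approved_tags)
    IMAGE_TAG_LIBRARY.update(target_tags)
    return set(sample_labels) | target_tags


# Example from the text: received {"portrait", "landscape", "river", "lunch"} with
# approvals {"portrait", "landscape"} yields the target tags {"portrait", "landscape"}.
```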
  • Fig. 6 is a flowchart of screening target images from a picture library in an embodiment.
  • the process of selecting matching target images from a picture library based on user portrait and location information in the provided image processing method includes:
  • Operation 602: Based on the image recommendation model, obtain a set of sample images matching the user portrait and the first location information from the picture library.
  • the server can build an image recommendation model in advance.
  • the image recommendation model may be constructed based on one or more of algorithms such as content recommendation algorithm, association rule recommendation algorithm, and collaborative filtering algorithm.
  • the server may obtain a set of sample images matching the user portrait and the first location information from the picture library based on the image recommendation model.
  • Specifically, the image recommendation model can obtain the confidence of each sample image in the picture library based on the user portrait and the first location information, and add the sample images whose confidence exceeds the preset confidence threshold in the image recommendation model to a sample image set; the server can then obtain the sample image set output by the image recommendation model.
  • the confidence level of the sample image refers to the credibility of matching the sample image with the user portrait and the first location information.
  • the confidence threshold may be set according to actual application requirements, for example, it may be 70%, 75%, 82%, 88%, etc., and is not limited thereto.
  • the sample images included in the sample image set are sorted according to the corresponding image quality scores.
  • the image library stores the image quality score corresponding to the sample image.
  • the server may sort the sample images contained in the sample image set according to the corresponding image quality scores.
  • the server may sort the sample images in the order of image quality scores from high to low, or may also sort the sample images in the order of image quality scores from low to high, which is not limited here.
  • a preset number of sample images are acquired from the sorted sample image set as target images.
  • the preset number can be set according to actual application requirements and is not limited here.
  • The preset number can be a fixed number, for example, 2, 4, 5, 10, and so on, but is not limited to this. The preset number may also be determined according to the terminal; for example, the server can preset a corresponding preset number for each terminal.
  • The preset number corresponding to a terminal may be determined according to the number of images browsed by the terminal: the more images the terminal has browsed, the larger the preset number. The preset number may also be determined according to the number of sample images included in the sample image set, and so on.
  • The server obtains a preset number of sample images from the sorted sample image set as target images. Specifically, the sample images in the sorted sample image set are ordered by image quality score, and the server can take the top preset number of sample images as target images according to actual application requirements, or select a preset number of sample images at regular intervals determined by the number of sample images in the set and the preset number.
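  • Continuing the same hypothetical sketch, the screening flow of Fig. 6 (confidence filtering, quality-score sorting, top-N selection) could be approximated as follows; the threshold and preset number are example values, and the confidences are assumed to come from the image recommendation model.
```python
def screen_targets(candidates, confidences, confidence_threshold=0.75, preset_number=5):
    """Keep the sample images whose recommendation confidence exceeds the threshold,
    sort them by image quality score, and take the top preset number as target images."""
    image_set = [s for s in candidates
                 if confidences.get(s.image_id, 0.0) > confidence_threshold]
    image_set.sort(key=lambda s: s.quality_score, reverse=True)
    return image_set[:preset_number]
```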
  • Fig. 7 is a flowchart of training an image recommendation model in an embodiment. As shown in FIG. 7, in an embodiment, the provided image processing method further includes:
  • Operation 702: Obtain a training user portrait and a corresponding real recommended image, where the real recommended image is an image in the picture library.
  • The training user portrait is a user portrait, stored in the server, corresponding to any terminal.
  • the server may use the portrait of the user with the most image requirements as the training user portrait; the server may also obtain the training user portrait determined by the audit terminal.
  • the real recommended image corresponding to the training user portrait may be determined by the review terminal based on the sample images contained in the picture library, and the review terminal may obtain the determination information input by the user with review authority on the real recommended image corresponding to the training user portrait.
  • The training user portrait is input into the image recommendation model, and the predicted recommended image and the corresponding confidence level screened by the image recommendation model from the picture library are obtained.
  • The server may input the training user portrait into the image recommendation model, and obtain the predicted recommended image and the corresponding confidence that the image recommendation model screens from the picture library.
  • the server may use deep learning algorithms such as VGG, CNN, SSD, or decision tree to train the image recommendation model.
  • the server may adjust the parameters of the image recommendation model according to algorithms such as content recommendation algorithm, association rule recommendation algorithm, and collaborative filtering algorithm.
  • an image recommendation loss function is obtained based on the confidence of the real recommended image and the predicted recommended image.
  • the server may obtain the image recommendation loss function according to the confidence of the real recommended image and the predicted recommended image.
  • Specifically, the image recommendation model may output the confidence level corresponding to each sample image. The server may obtain the confidence level of the sample image corresponding to the real recommended image, and generate the image recommendation loss function according to that confidence level and the confidence level of the predicted recommended image.
  • Operation 708: Adjust the parameters of the image recommendation model based on the image recommendation loss function, and return to the operation of inputting the training user portrait into the image recommendation model to obtain the predicted recommended image and the corresponding confidence level screened by the image recommendation model from the picture library, until the obtained predicted recommended image matches the real recommended image.
  • the server adjusts the parameters of the image recommendation model according to the image recommendation loss function.
  • Specifically, the server can use the backpropagation algorithm to adjust the parameters of the image recommendation model according to the image recommendation loss function and thereby train the image recommendation model. That is, the server repeatedly inputs the training user portrait into the image recommendation model to obtain the predicted recommended image and the corresponding confidence level, obtains the image recommendation loss function when the predicted recommended image does not match the real recommended image, and adjusts the parameters of the image recommendation model, stopping when the obtained predicted recommended image matches the real recommended image.
  • The image recommendation model is trained using the training user portrait: the predicted recommended image output by the image recommendation model based on the training user portrait and the picture library is compared with the real recommended image. When they do not match, an image recommendation loss function is generated based on the confidences of the real recommended image and the predicted recommended image to adjust the parameters of the image recommendation model, until the predicted recommended image output by the image recommendation model matches the real recommended image. In this way, an image recommendation model that can accurately output recommended images is obtained.
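  • As one hedged reading of the loss described above, the image recommendation loss could be the negative log of the confidence assigned to the real recommended image, with training stopping once the highest-confidence prediction matches it; the exact form is not fixed by the application, so this is only an assumed example.
```python
import math


def recommendation_loss(confidences, real_image_id):
    """Assumed loss: negative log-confidence of the real recommended image."""
    eps = 1e-12
    return -math.log(max(confidences.get(real_image_id, 0.0), eps))


def converged(confidences, real_image_id):
    """Training stops when the predicted (highest-confidence) image is the real one."""
    return max(confidences, key=confidences.get) == real_image_id
```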
  • the provided image processing method further includes: receiving the terminal's attention information on the target image; and updating the user portrait corresponding to the terminal according to the attention information.
  • the attention information is generated by the terminal according to the acquired operation on the target image.
  • The operations performed on the target image can be marking the target image with like or dislike tags, viewing the large version of the target image, saving the target image locally, sharing the target image with friends, referencing the target image, and the like, but are not limited to these.
  • the terminal can obtain the operation performed by the user on the target image, and generate the terminal's attention information on the target image according to the performed operation, and send it to the server.
  • the server updates the user portrait corresponding to the terminal according to the attention information on the target image uploaded by the terminal.
  • the terminal's attention information for images can reflect the terminal's demand for image push.
  • The server may update the user portrait corresponding to the terminal according to the operations performed on the target image and the number of operations included in the attention information. For example, when the number of operations received by the terminal that mark landscape images as favorites exceeds a certain number, it can be inferred that the terminal expects to receive landscape images, and the server can add landscape to the corresponding user portrait. Then, when the server screens the target image from the picture library according to the user portrait of the terminal and the first location information, the landscape image corresponding to the first location information may be sent to the terminal as the target image.
  • the server may also update the user portrait every preset time to meet the image requirements of the terminal in different time periods.
  • In this way, the accuracy of the user portrait corresponding to the terminal can be improved, and the target image can be screened from the picture library based on the user portrait and the first location information, which can improve the accuracy of the target image.
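  • A toy sketch of the attention-based portrait update, reusing the hypothetical USER_PORTRAITS mapping from the first sketch; the set of "positive" operations and the threshold count are assumptions made for illustration.
```python
from collections import Counter

ATTENTION_COUNTS = {}        # terminal id -> Counter of positive operations per tag
ATTENTION_THRESHOLD = 10     # hypothetical "certain number of times"


def update_portrait(terminal_id, tag, operation):
    """Count positive operations (like, save, share, ...) per tag and add the tag to
    the user portrait once the count passes the threshold (cf. the landscape example)."""
    if operation in {"like", "save", "share", "view_large", "reference"}:
        counts = ATTENTION_COUNTS.setdefault(terminal_id, Counter())
        counts[tag] += 1
        if counts[tag] >= ATTENTION_THRESHOLD:
            USER_PORTRAITS.setdefault(terminal_id, set()).add(tag)
```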
  • The process of screening matching target images from the picture library based on the user portrait and the first location information includes: obtaining target image information from the picture library based on the user portrait and the first location information; and finding the target image from the object storage service based on the target image information.
  • Object storage services refer to object-based storage services.
  • The storage object of the object storage service is an image.
  • the server has a corresponding storage space in the object storage service.
  • the sample image is stored in the corresponding storage space in the object storage service.
  • the object storage service uniquely identifies images through image information.
  • the image information may be one or more of the image name, the sample label of the image, the image size, and the image format.
  • The server can screen out the target image information matching the user portrait of the terminal and the first location information based on the sample labels and the second location information of the sample images contained in the picture library, and search the object storage service for the corresponding target image according to the target image information. That is, the picture library is used to store the image information of the sample images, while the sample images themselves are stored in the object storage service.
  • The server can find the corresponding target image from the object storage service according to the screened target image information and send it to the terminal, which can reduce the consumption of server storage space.
  • the process of sending the target image to the terminal may include: sending the target image to the user identification terminal based on the content distribution network provided by the object storage service.
  • Content distribution network refers to a distributed network composed of edge node server groups distributed in different areas provided by object storage services.
  • the content distribution network can cache the sample images stored in the object storage service to the edge node.
  • When the server finds the target image from the object storage service, the content distribution network can send the cached target image from the edge node to the terminal.
  • the server may also store the image uploaded by the terminal through the object storage service.
  • the content distribution network provided by the object storage service may also send the uploaded image cached by the edge node to the terminal.
  • the sample images are cached in different regions through the content distribution network provided by the object storage service.
  • When the server sends an image to the terminal, or the terminal requests the server to view an image, the image cached by the content distribution network at the edge node can be sent to the terminal, which can improve the access speed of the image.
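  • The split between the picture library (metadata only) and the object storage service (image bytes) could be mimicked as below; ObjectStorage is a generic stand-in, not the API of any particular cloud provider, and the keying by target image information is an assumption.
```python
class ObjectStorage:
    """Minimal stand-in for an object storage service keyed by image information."""

    def __init__(self):
        self._objects = {}   # key (image information) -> image bytes

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


def fetch_targets(target_image_info, storage):
    """The picture library yields only target image information (e.g. object keys);
    the image bytes themselves are fetched from the object storage service."""
    return [storage.get(key) for key in target_image_info]
```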
  • an image processing method is provided, and the process of implementing the method is as follows:
  • the server receives the first location information uploaded by the terminal, and obtains the user portrait corresponding to the terminal.
  • The server obtains the sample image, obtains the second location information of the sample image, obtains the label information corresponding to the sample image according to the labels contained in the image label library, and obtains the sample label corresponding to the sample image according to the second location information and the label information.
  • The sample image is scored according to the image scoring model to obtain the image quality score of the sample image, and the sample label and image quality score corresponding to the sample image are stored in the picture library.
  • the server detects whether the image quality score corresponding to the sample image exceeds the score threshold; when the image quality score exceeds the score threshold, the sample label and the image quality score corresponding to the sample image are stored in the picture library.
  • the server obtains the training image and the corresponding real score range, where the real score range is determined by scoring the training image according to the image scoring standard, and the training image is input to the image scoring model to obtain the predicted quality score corresponding to the training image.
  • When the predicted quality score is not within the real score range, the server obtains the score loss function according to the real score range and the predicted quality score, adjusts the parameters of the image scoring model according to the score loss function, and returns to the operation of inputting the training image into the image scoring model to obtain the corresponding predicted quality score, until the obtained predicted quality score is within the real score range.
  • the server filters out matching target images from the picture library based on the user portrait and the first location information.
  • the server obtains a sample image set matching the user portrait and the first location information from the image library based on the image recommendation model, and sorts the sample images contained in the sample image set according to the corresponding image quality scores, A preset number of sample images are acquired from the sample image set as target images.
  • The server obtains the training user portrait and the corresponding real recommended image, where the real recommended image is an image in the picture library; the training user portrait is input into the image recommendation model to obtain the predicted recommended image screened by the image recommendation model from the picture library and the corresponding confidence.
  • When the predicted recommended image does not match the real recommended image, the image recommendation loss function is obtained based on the confidences of the real recommended image and the predicted recommended image, the parameters of the image recommendation model are adjusted based on the image recommendation loss function, and the operation of inputting the training user portrait into the image recommendation model to obtain the predicted recommended images and corresponding confidences screened from the picture library is executed again, until the obtained predicted recommended image matches the real recommended image.
  • the server obtains the target image information from the picture library based on the user portrait and the first location information; and searches for the target image from the object storage service according to the target image information.
  • the server sends the target image to the terminal.
  • When the server receives the label sent by the terminal for marking the target image, and receives the confirmation instruction information for the label, it obtains the target label from the received labels according to the confirmation instruction information, and adds the target label to the image label library.
  • the server updates the sample label of the sample image corresponding to the target image in the picture library according to the target label.
  • the server receives the attention information of the terminal on the target image; and updates the user portrait corresponding to the terminal according to the attention information.
  • the server sends the target image to the user identification terminal based on the content distribution network provided by the object storage service.
  • FIG. 8 is a structural block diagram of an image processing apparatus according to an embodiment.
  • An image processing device includes: an acquisition module 802, a screening module 804, and a sending module 806, wherein:
  • the obtaining module 802 is configured to receive the first location information uploaded by the terminal and obtain a user portrait corresponding to the terminal.
  • the screening module 804 is used to screen out matching target images from the picture library based on the user portrait and the first location information.
  • the sending module 806 is used to send the target image to the terminal.
  • The image processing device provided by the embodiments of this application is used to receive the first location information uploaded by the terminal, obtain the user portrait corresponding to the terminal, screen out matching target images from the picture library based on the user portrait and the first location information, and send the target image to the terminal. Since the target image can be screened out based on the user portrait and location information corresponding to the terminal and sent to the terminal, the accuracy of image push can be improved and the individual needs of different users can be met.
  • Fig. 9 is a structural block diagram of an image processing device in another embodiment.
  • The provided image processing device further includes a picture library establishment module 808, which is used to obtain sample images; obtain second location information of the sample images, obtain the label information corresponding to the sample images according to the labels contained in the image tag library, and obtain the sample labels corresponding to the sample images according to the second location information and the label information; score the sample images according to the image scoring model to obtain the image quality scores of the sample images; and store the sample labels and image quality scores corresponding to the sample images in the picture library.
  • the picture library establishment module 808 can also be used to detect whether the image quality score corresponding to the sample image exceeds the score threshold; when the image quality score exceeds the score threshold, the sample label and image quality score corresponding to the sample image are stored To the picture library.
  • The provided image processing device further includes a model building module 810, which is used to obtain the training image and the corresponding real score range, where the real score range is determined by scoring the training image according to the image scoring standard; input the training image into the image scoring model to obtain the predicted quality score corresponding to the training image; when the predicted quality score is not within the real score range, obtain the score loss function according to the real score range and the predicted quality score; and adjust the parameters of the image scoring model according to the score loss function, returning to the operation of inputting the training image into the image scoring model to obtain the predicted quality score corresponding to the training image, until the obtained predicted quality score is within the real score range.
  • The provided image processing device further includes a tag library update module 812, which is used to receive the tag sent by the terminal for marking the target image, and, when receiving the confirmation instruction information for the tag, obtain the target tag from the received tags according to the confirmation instruction information and add the target tag to the image tag library.
  • the picture library establishing module 808 may also be used to update the sample label of the sample image corresponding to the target image in the picture library according to the target label.
  • The screening module 804 can also be used to obtain a sample image set matching the user portrait and the first location information from the picture library based on the image recommendation model; sort the sample images contained in the sample image set according to the corresponding image quality scores; and obtain a preset number of sample images from the sorted sample image set as the target images.
  • The model building module 810 can also be used to obtain the training user portrait and the corresponding real recommended image, where the real recommended image is an image in the picture library; input the training user portrait into the image recommendation model to obtain the predicted recommended image screened by the image recommendation model from the picture library and the corresponding confidence; when the predicted recommended image does not match the real recommended image, obtain the image recommendation loss function based on the confidences of the real recommended image and the predicted recommended image; and adjust the parameters of the image recommendation model based on the image recommendation loss function, returning to the operation of inputting the training user portrait into the image recommendation model to obtain the predicted recommended image and the corresponding confidence, until the obtained predicted recommended image matches the real recommended image.
  • The provided image processing apparatus further includes a user portrait update module 814, which is configured to receive the terminal's attention information on the target image and update the user portrait corresponding to the terminal according to the attention information.
  • The screening module 804 can also be used to obtain target image information from the picture library based on the user portrait and the first location information, and to find the target image from the object storage service according to the target image information; the sending module 806 can also be used to send the target image to the terminal based on the content distribution network provided by the object storage service.
  • The division of the image processing device into the above modules is only for illustration. In other embodiments, the image processing device can be divided into different modules as needed to complete all or part of its functions.
  • a computer device is provided, and the computer device may be a server or a cloud.
  • Fig. 10 is a schematic diagram of the internal structure of a server (or cloud) in an embodiment.
  • the server includes a processor and a memory connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire server.
  • the memory may include a non-volatile storage medium and internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by a processor to implement an image processing method provided in the following embodiments.
  • The internal memory provides a cached operating environment for the operating system and the computer programs in the non-volatile storage medium.
  • the server can be implemented as an independent server or a server cluster composed of multiple servers.
  • FIG. 10 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the server to which the solution of the present application is applied.
  • A specific server may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • a computer program is stored thereon, which is characterized in that, when the computer program is executed by a processor, the above-mentioned image processing method is realized.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous Link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Disclosed is an image processing method, comprising: receiving first location information uploaded by a terminal, and acquiring a user portrait corresponding to the terminal; screening a matching target image from a picture library on the basis of the user portrait and the location information; and sending the target image to the terminal.
PCT/CN2019/083000 2019-04-17 2019-04-17 Procédé de traitement d'image, support d'informations lisible par ordinateur et dispositif informatique WO2020211003A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/083000 WO2020211003A1 (fr) 2019-04-17 2019-04-17 Procédé de traitement d'image, support d'informations lisible par ordinateur et dispositif informatique
CN201980090804.6A CN113366420B (zh) 2019-04-17 2019-04-17 图像处理方法、计算机可读存储介质和计算机设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/083000 WO2020211003A1 (fr) 2019-04-17 2019-04-17 Procédé de traitement d'image, support d'informations lisible par ordinateur et dispositif informatique

Publications (1)

Publication Number Publication Date
WO2020211003A1 true WO2020211003A1 (fr) 2020-10-22

Family

ID=72836867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/083000 WO2020211003A1 (fr) 2019-04-17 2019-04-17 Procédé de traitement d'image, support d'informations lisible par ordinateur et dispositif informatique

Country Status (2)

Country Link
CN (1) CN113366420B (fr)
WO (1) WO2020211003A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907627A (zh) * 2021-02-07 2021-06-04 公安部第三研究所 实现小样本目标精准跟踪的系统、方法、装置、处理器及其计算机可读存储介质

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103412953A (zh) * 2013-08-30 2013-11-27 苏州跨界软件科技有限公司 基于增强现实的社交方法
CN104731880A (zh) * 2015-03-09 2015-06-24 小米科技有限责任公司 图片排序方法和装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10142396B2 (en) * 2015-12-15 2018-11-27 Oath Inc. Computerized system and method for determining and communicating media content to a user based on a physical location of the user
CN109146856A (zh) * 2018-08-02 2019-01-04 深圳市华付信息技术有限公司 图像质量评定方法、装置、计算机设备及存储介质
CN109063778A (zh) * 2018-08-09 2018-12-21 中共中央办公厅电子科技学院 一种图像美学质量确定方法及系统
CN109344855B (zh) * 2018-08-10 2021-09-24 华南理工大学 一种基于排序引导回归的深度模型的人脸美丽评价方法
CN109461167B (zh) * 2018-11-02 2020-07-21 Oppo广东移动通信有限公司 图像处理模型的训练方法、抠图方法、装置、介质及终端
CN109522950B (zh) * 2018-11-09 2022-04-22 网易传媒科技(北京)有限公司 图像评分模型训练方法及装置和图像评分方法及装置

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103412953A (zh) * 2013-08-30 2013-11-27 苏州跨界软件科技有限公司 基于增强现实的社交方法
CN104731880A (zh) * 2015-03-09 2015-06-24 小米科技有限责任公司 图片排序方法和装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907627A (zh) * 2021-02-07 2021-06-04 公安部第三研究所 实现小样本目标精准跟踪的系统、方法、装置、处理器及其计算机可读存储介质
CN112907627B (zh) * 2021-02-07 2024-02-02 公安部第三研究所 实现小样本目标精准跟踪的系统、方法、装置、处理器及其计算机可读存储介质

Also Published As

Publication number Publication date
CN113366420B (zh) 2024-05-03
CN113366420A (zh) 2021-09-07

Similar Documents

Publication Publication Date Title
US11778028B2 (en) Automatic image sharing with designated users over a communication network
US10909425B1 (en) Systems and methods for mobile image search
US9727565B2 (en) Photo and video search
US8923570B2 (en) Automated memory book creation
US8995716B1 (en) Image search results by seasonal time period
CN109074358A (zh) 提供与用户兴趣有关的地理位置
CN102591868B (zh) 用于拍照指南自动生成的系统和方法
US20220383053A1 (en) Ephemeral content management
US9473614B2 (en) Systems and methods for incorporating a control connected media frame
WO2019218459A1 (fr) Procédé de stockage de photos, support de stockage, serveur, et appareil
US9665773B2 (en) Searching for events by attendants
WO2019171803A1 (fr) Dispositif de recherche d'image, procédé de recherche d'image, équipement électronique et procédé de commande
TWI613550B (zh) 相片及視頻分享
US10885619B2 (en) Context-based imagery selection
WO2020211003A1 (fr) Procédé de traitement d'image, support d'informations lisible par ordinateur et dispositif informatique
Kim et al. Photo cube: an automatic management and search for photos using mobile smartphones
US12001475B2 (en) Mobile image search system
JP2014078064A (ja) 画像表示のための装置、方法及びプログラム
CN111125407A (zh) 地址信息存储方法、存储装置、介质及电子设备
TW201941075A (zh) 快速影像排序方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19925018

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.02.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19925018

Country of ref document: EP

Kind code of ref document: A1