CN114092495A - Image display method, electronic device, storage medium, and program product


Info

Publication number
CN114092495A
Authority
CN
China
Prior art keywords
image
displayed
cutting
video
cut
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111430152.1A
Other languages
Chinese (zh)
Other versions
CN114092495B (en)
Inventor
朱益
马国会
蒋中伦
秦京
张勇
王欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202111430152.1A
Publication of CN114092495A
Application granted
Publication of CN114092495B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/54: Browsing; Visualisation therefor
    • G06F 16/55: Clustering; Classification
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20112: Image segmentation details
    • G06T 2207/20132: Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides an image display method, an electronic device, a storage medium, and a program product. The image display method includes: performing display direction detection, target object detection, and image content symmetry detection on an image to be displayed; determining a cropping rule corresponding to the image to be displayed according to the display direction detection result and the symmetry detection result, where the cropping rule indicates the display direction of the cropped image and the symmetry rule the cropped image should conform to; determining the image region where the target object is located according to the target object detection result, cropping the image to be displayed according to the cropping rule based on that region to obtain a cropped image, and displaying the cropped image. With the scheme provided by this embodiment, the cropped image displayed to the user is more attractive and its subject is more prominent.

Description

Image display method, electronic device, storage medium, and program product
Technical Field
Embodiments of the present application relate to the field of internet technologies, and in particular to an image display method, an electronic device, a storage medium, and a program product.
Background
With the development of internet technology, smart devices have made many aspects of daily life more convenient and intelligent. For example, a smart terminal placed at home can automatically capture memorable moments and compile them into an album, so that the user can browse the photos at any time.
However, when such photos are displayed, they can only be shown to the user directly in shooting order, chronological order, or the like, which is not visually appealing. How to better present captured photos to the user has therefore become an urgent technical problem.
Disclosure of Invention
In view of the above, embodiments of the present application provide an image display scheme to at least partially solve the above problems.
According to a first aspect of the embodiments of the present application, an image display method is provided, including: performing display direction detection, target object detection, and image content symmetry detection on an image to be displayed; determining a cropping rule corresponding to the image to be displayed according to the display direction detection result and the symmetry detection result, where the cropping rule indicates the display direction of the cropped image and the symmetry rule the cropped image should conform to; and determining the image region where a target object is located according to the target object detection result, cropping the image to be displayed according to the cropping rule based on that region to obtain a cropped image, and displaying the cropped image.
According to a second aspect of the embodiments of the present application, an image display method is provided, including: in response to a cropping operation by a user, performing display direction detection, target object detection, and image content symmetry detection on an image to be cropped, and determining the corresponding cropping rule according to the display direction detection result and the symmetry detection result, where the cropping rule indicates the display direction of the cropped image and the symmetry rule the cropped image should conform to; displaying a cropping preview that includes a cropping marker corresponding to the cropping rule and a crop region determined from the image region where the target object is located (per the target object detection result) together with the cropping rule; and, in response to a confirmation operation by the user, cropping the image according to the preview and displaying the cropped image in an interface.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another through the communication bus; the memory stores at least one executable instruction that causes the processor to perform the operations corresponding to the image display method of the first aspect.
According to a fourth aspect of the embodiments of the present application, a computer storage medium is provided, on which a computer program is stored that, when executed by a processor, implements the method of the first aspect.
According to a fifth aspect of the embodiments of the present application, a computer program product is provided, including computer instructions that instruct a computing device to perform the operations corresponding to the method of the first aspect.
With the scheme provided by the embodiments of the present application, the cropping rule is determined from the display direction detection result and the symmetry detection result of the image to be displayed. Because the rule indicates the display direction of the cropped image and the symmetry rule it should conform to, an image cropped according to it is more attractive; and because target object detection identifies the object to be emphasized, a matching cropping scheme can be applied, so the cropped image displayed to the user is more attractive and its subject more prominent. In addition, since the cropping rule is determined using the symmetry detection result, when symmetric subjects such as landscapes, buildings, and portraits, or subjects with a distinctive style, are displayed, the cropped images exhibit the aesthetic qualities of balance and stability.
Drawings
To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some of the embodiments described in the present application; other drawings can be obtained from them by those skilled in the art.
FIG. 1 is a schematic diagram of an exemplary system to which an image presentation method of an embodiment of the present application may be applied;
FIG. 2 is a flowchart illustrating steps of an image displaying method according to an embodiment of the present application;
FIG. 3 is a flow chart of steps of a method of symmetry detection of image content for an image to be displayed;
FIG. 4 is a flow chart of steps of an image presentation method according to another embodiment of the present application;
fig. 5 is a schematic view of a usage scenario provided in an embodiment of the present application;
FIG. 6 is a schematic illustration of an interface display provided by an embodiment of the present application;
FIG. 7 is a flow chart illustrating steps of a video generation method according to the present application;
FIG. 8 is a schematic diagram of another exemplary system to which an image presentation method of an embodiment of the present application may be applied;
FIG. 9A is a flowchart of the steps of an image presentation method according to another embodiment of the present application;
FIG. 9B is a schematic diagram of a cropping preview image according to an embodiment of the present application;
FIG. 10A is a flow chart illustrating steps of a video generation method according to the present application;
fig. 10B is a scene schematic diagram of a video generation method according to the present application;
fig. 11 is a schematic structural diagram of an electronic device according to yet another embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present application, these solutions are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application shall fall within the protection scope of the embodiments of the present application.
The following further describes specific implementations of embodiments of the present application with reference to the drawings of the embodiments of the present application.
The present application relates to an image display scheme. The displayed image may be an image uploaded by a user or an automatically captured one. For example, a camera-equipped mobile phone or IoT device may be placed indoors or outdoors; images it captures can be uploaded to the cloud account of the corresponding user, or a device supporting automatic shooting can capture images of the user on the fly and upload them to the cloud.
The user can then view the cloud images through a user device.
The user device may be a mobile phone, a tablet (PAD), or a computer. It can connect to the server, and the images for the user to view may be stored locally on the device or on the server.
Fig. 1 illustrates an exemplary system to which the image presentation method according to the embodiment of the present application is applied.
As shown in fig. 1, the system 100 may include a server 102, a communication network 104, and/or one or more user devices 106, illustrated in fig. 1 as a plurality of user devices.
Server 102 may be any suitable server for storing information, data, programs, and/or any other suitable type of content. In some embodiments, server 102 may perform any suitable functions. For example, in some embodiments, server 102 may be used to provide image cropping, image classification, and the like.
In some embodiments, the communication network 104 may be any suitable combination of one or more wired and/or wireless networks. For example, the communication network 104 may include, but is not limited to, the Internet, an intranet, a wide area network (WAN), a local area network (LAN), a wireless network, a digital subscriber line (DSL) network, a frame relay network, an asynchronous transfer mode (ATM) network, a virtual private network (VPN), and/or any other suitable communication network. The user device 106 can be connected to the communication network 104 by one or more communication links (e.g., communication link 112), and the communication network 104 can be linked to the server 102 via one or more communication links (e.g., communication link 114). The communication links may be any links suitable for communicating data between the user device 106 and the server 102, such as network links, dial-up links, wireless links, hardwired links, any other suitable communication links, or any suitable combination of such links.
The user device 106 may comprise a user device adapted to present images. In some embodiments, user devices 106 may comprise any suitable type of device. For example, in some embodiments, the user device 106 may include a mobile device, a tablet computer, a laptop computer, a desktop computer, a wearable computer, a game console, a media player, a vehicle entertainment system, and/or any other suitable type of user device. Note that in some embodiments, the user device 106 may additionally or alternatively be used to implement any of the functionality described in connection with the subsequent method embodiments to present images.
Although server 102 is illustrated as one device, in some embodiments, any suitable number of devices may be used to perform the functions performed by server 102. For example, in some embodiments, the user device 106 may be used to implement the functions performed by the server 102. Alternatively, the functionality of the server 102 may be implemented using a cloud service.
Based on the above system, the present application provides an image displaying method, which is described below with reference to a plurality of embodiments.
In some implementations of the present application, referring to FIG. 2, a flowchart of the steps of an image display method is provided, including:
21. Perform display direction detection, target object detection, and image content symmetry detection on the image to be displayed.
In this embodiment, display direction detection determines whether the display direction of the image is horizontal or vertical; target object detection locates the object that should be displayed prominently in the image and its position within the image; symmetry detection of the image content determines whether the content is symmetric and locates the symmetry axis or center of symmetry.
For specific detection methods, reference may be made to the related art; details are not repeated here.
22. Determine the cropping rule corresponding to the image to be displayed according to its display direction detection result and symmetry detection result.
The cropping rule indicates the display direction of the cropped image and the symmetry rule the cropped image should conform to.
For example, if the display direction detection result indicates that the image is displayed horizontally, the corresponding cropping rule may specify that the cropped image is also displayed horizontally, i.e., that its height is smaller than its width. If the symmetry detection result indicates that the image content is symmetric, the cropping rule may place the symmetry axis on the central axis of the cropped image, or at 2/5 of the cropped image's width.
23. Determine the image region where the target object is located according to the target object detection result, crop the image to be displayed according to the cropping rule based on that region to obtain the cropped image, and display the cropped image.
For example, in this embodiment, the image region where the target object is located may be determined from the target object detection result, and the image may then be cropped according to the cropping rule based on that region. During cropping it can be ensured, for instance, that the symmetry axis falls on the central axis of the cropped image, that one boundary of the target region serves as a side of the crop, and that the cropped image is displayed horizontally.
With the scheme provided by this embodiment, the cropping rule is determined from the display direction detection result and the symmetry detection result, so an image cropped by the rule is more attractive; target object detection identifies the object that should be emphasized in the image, so a matching cropping scheme can be applied, making the displayed cropped image more attractive and its subject more prominent. In addition, because the cropping rule is determined using the symmetry detection result, when symmetric subjects such as landscapes, buildings, and portraits, or subjects with a distinctive style, are displayed, the cropped images exhibit the aesthetic qualities of balance and stability.
In some implementations of the present application, referring to FIG. 3, symmetry detection of the image content of the image to be displayed is performed as follows: horizontally flip the image to obtain a first flipped image, compute a first similarity between the first flipped image and the original, and determine a vertical symmetry axis of the image content according to the first similarity; or vertically flip the image to obtain a second flipped image, compute a second similarity between the second flipped image and the original, and determine a horizontal symmetry axis of the image content according to the second similarity.
For example, referring to FIG. 3, image A to be displayed may be flipped vertically (i.e., top to bottom) to obtain image A1, and flipped horizontally (i.e., left to right) to obtain image A2.
Feature vectors of images A, A1, and A2 can each be extracted by a convolutional neural network model. A first similarity S1 between image A and image A1 can then be computed, as well as a second similarity S2 between image A and image A2.
If the first similarity S1 exceeds a similarity threshold, image A can be determined to be symmetric about the x-axis, i.e., to have a horizontal symmetry axis; if the second similarity S2 exceeds the threshold, image A can be determined to be symmetric about the y-axis, i.e., to have a vertical symmetry axis.
If both S1 and S2 exceed the threshold, image A can be determined to be centrosymmetric; conversely, if both are at or below the threshold, image A is asymmetric.
Once image A is determined to be symmetric, the cropping rule generated for it includes: using the symmetry axis as the central axis of the cropped image, or using the center of symmetry as the center of the cropped image. The displayed image thereby conforms to symmetric aesthetics.
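To make the flip-and-compare procedure concrete, the following is a minimal sketch in Python. The pixel-based feature extractor, the cosine similarity, and the 0.9 threshold are illustrative assumptions; the embodiment itself extracts features with a convolutional neural network model.

    import numpy as np

    def extract_features(image: np.ndarray) -> np.ndarray:
        # Placeholder for the convolutional-network feature extractor
        # mentioned above; normalized raw pixels stand in for it here.
        v = image.astype(np.float32).ravel()
        return v / (np.linalg.norm(v) + 1e-8)

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b))

    def detect_symmetry(image: np.ndarray, threshold: float = 0.9) -> dict:
        """Flip the image both ways and compare each flip to the original."""
        a1 = np.flipud(image)           # vertical flip (top to bottom)
        a2 = np.fliplr(image)           # horizontal flip (left to right)
        f, f1, f2 = map(extract_features, (image, a1, a2))
        s1 = cosine_similarity(f, f1)   # high S1 -> horizontal symmetry axis
        s2 = cosine_similarity(f, f2)   # high S2 -> vertical symmetry axis
        return {
            "horizontal_axis": s1 > threshold,
            "vertical_axis": s2 > threshold,
            "centrosymmetric": s1 > threshold and s2 > threshold,
        }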
The image display method of this embodiment may be performed by any suitable electronic device with data processing capability, including but not limited to a server, a mobile terminal (such as a mobile phone or tablet), a PC, and so on.
In other specific implementations of the present application, referring to FIG. 4, an embodiment of the present application provides an image display method, including:
41. Perform display direction detection, target object detection, and image content symmetry detection on the image to be displayed.
For the specific implementation of this step, refer to the foregoing embodiments; details are not repeated here.
42. Determine the display direction adapted to the target object in the image to be displayed according to the target object detection result.
In this embodiment, the display direction adapted to the target object may be determined from the detection result. For example, if the target object is taller than it is wide, such as a pavilion or a tower, the adapted display direction may be portrait (vertical); if the target object is wider than it is tall, such as a ship or a large group of people, the adapted display direction may be landscape (horizontal).
43. Determine the cropping rule indicating the display direction of the cropped image according to the display direction adapted to the target object and the display direction detection result of the image to be displayed, and determine the cropping rule indicating the symmetry rule the cropped image should conform to according to the symmetry detection result.
In this embodiment, the display direction of the cropped image may be determined from the direction adapted to the target object together with the display direction of the image to be displayed; and, from the symmetry detection result, the symmetry axis or center of symmetry may be set as the central axis or center point of the cropped image.
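The combination of these signals into a rule can be sketched as follows; the preference for the object-adapted direction over the image's own direction is an assumption made for illustration, as the embodiment only states that both results are considered.

    def adapted_direction(bbox_w: float, bbox_h: float) -> str:
        # Towers and pavilions suit portrait display; ships and group
        # photos suit landscape display.
        return "portrait" if bbox_h > bbox_w else "landscape"

    def make_cropping_rule(object_dir: str, image_dir: str, symmetry: dict) -> dict:
        """Assemble a cropping rule (display direction + symmetry constraint).

        Assumed policy: prefer the direction adapted to the target object,
        falling back to the detected direction of the source image.
        """
        rule = {"direction": object_dir or image_dir}
        if symmetry.get("vertical_axis"):
            rule["axis"] = "vertical-center"    # symmetry axis -> central axis
        elif symmetry.get("horizontal_axis"):
            rule["axis"] = "horizontal-center"
        return rule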
44. Determine the image region where the target object is located according to the target object detection result, and crop the image to be displayed according to the cropping rule based on that region to obtain the cropped image.
In this embodiment, the cropping rule from the preceding steps mainly includes the display direction and the central axis or center point of the cropped image. In this step, the cropping position in the image to be displayed may be determined from the region where the target object is located, based on the determined cropping rule; the image can then be cropped at that position to obtain the cropped image.
Alternatively, in this embodiment, the image to be displayed may be fed into a trained neural network model that performs display direction detection, target object detection, and symmetry detection of the image content, determines the corresponding cropping rule from the detection results, combines the rule with the region where the target object is located to determine the cropping scheme, and outputs the cropping position. The image can then be cropped at that position to obtain the cropped image.
For example, referring to FIG. 5, the image to be displayed may be a photo of a tower taken by a user with a mobile phone.
From display direction detection on the image shown on the left of FIG. 5, the display direction can be determined to be portrait; from target object detection, the target object can be determined to be taller than it is wide and therefore suited to portrait display; and symmetry detection of the image content determines its vertical symmetry axis. The combined cropping scheme may then be: display the cropped image in portrait orientation, with the vertical symmetry axis coinciding with the vertical centerline of the cropped image.
Taking the upper-left corner of the image as the origin, the four vertices of the region where the tower is located, namely the top-left, bottom-left, top-right, and bottom-right vertices, can be determined from the target object detection result as (10,10), (10,500), (200,10), and (200,500). The boundary or vertices of the cropped image are then determined by combining this region with the cropping rule; the resulting crop region may be as shown on the right of FIG. 5.
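A sketch of how such a crop box could be computed from the example coordinates, keeping the target's vertical symmetry axis on the centerline of the crop. The 9:16 output ratio, the source image size, and the clamping behavior (which can shift the axis near the borders) are assumptions.

    def crop_box_around(bbox, img_w, img_h, aspect=9 / 16):
        """Crop box centered on the vertical centerline of `bbox`.

        bbox = (left, top, right, bottom); aspect = width / height of the
        cropped image. Coordinates are clamped to the source image bounds.
        """
        left, top, right, bottom = bbox
        cx = (left + right) / 2        # vertical symmetry axis of the target
        h = bottom - top               # keep the full height of the target
        w = h * aspect
        x0 = max(0, min(cx - w / 2, img_w - w))
        y0 = max(0, min(top, img_h - h))
        return (int(x0), int(y0), int(x0 + w), int(y0 + h))

    # Tower example from FIG. 5: region (10,10)-(200,500); a hypothetical
    # 600x800 source image is assumed here.
    print(crop_box_around((10, 10, 200, 500), img_w=600, img_h=800))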
45. Determine the display size corresponding to each image to be displayed according to its image quality data.
In this embodiment, the display size of each image is determined from its image quality data, so that images of higher quality are displayed larger and images of lower quality smaller.
The image quality data is determined according to at least one of the following parameters: the position of the main subject within the image, the sharpness of the image, the brightness of the image, the vividness of the image, and the degree to which the image is liked. The vividness of the image may also be called its saturation, purity, or chroma.
The degree to which an image is liked can be evaluated from historically determined user preferences or from the user's behavior; for the specific evaluation method, reference may be made to the related art, and details are not repeated here.
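A heuristic sketch of the quality-to-size mapping, assuming OpenCV is available. The specific metrics (Laplacian variance for sharpness, HSV channel means for brightness and vividness), their normalization, and the equal weighting are assumptions; the embodiment lists the parameters but not their formulas.

    import cv2

    def image_quality_score(path: str) -> float:
        """Heuristic quality score from sharpness, brightness, and vividness."""
        img = cv2.imread(path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()    # edge energy
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        brightness = hsv[..., 2].mean() / 255.0              # V channel
        vividness = hsv[..., 1].mean() / 255.0               # S channel (saturation)
        # Soft-cap the sharpness so a single metric cannot dominate.
        sharp_n = min(sharpness / 1000.0, 1.0)
        return (sharp_n + brightness + vividness) / 3.0

    def display_size(score: float) -> str:
        # Assumed bucketing: higher-quality images get larger tiles.
        return "large" if score > 0.6 else "small"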
46. Display the cropped images corresponding to the multiple images to be displayed in the interface according to their display sizes.
Specifically, if the display area of the user device is limited, the display layout of the images within the interface can be determined by combining the size of the displayable area with the display size of each image, and the cropped images can then be shown in the interface according to that layout.
For example, referring to FIG. 6, the middle part of FIG. 6 shows a display interface of a user device containing multiple images: PhotoE is a cropped image displayed vertically, at a ratio of 9:16 or 3:4; PhotoJ is a cropped image displayed horizontally, at a ratio of 16:9 or 4:3; and PhotoH is a square cropped image. PhotoE, PhotoH, and PhotoJ correspond to the three images with larger display sizes; the remaining images, such as PhotoA, are displayed at smaller sizes.
During display, the images can be laid out by their display sizes from top to bottom and left to right to form a complete interface.
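The top-to-bottom, left-to-right layout can be sketched as a greedy row packer; the four-column grid and the assumption that a large image spans two columns are illustrative only.

    def layout(images, columns=4):
        """Greedy top-to-bottom, left-to-right layout.

        `images` is a list of (name, size) pairs, where size is "large"
        (assumed to span 2 columns) or "small" (1 column).
        """
        rows = []
        row, used = [], 0
        for name, size in images:
            span = 2 if size == "large" else 1
            if used + span > columns:   # current row is full: start a new one
                rows.append(row)
                row, used = [], 0
            row.append((name, span))
            used += span
        if row:
            rows.append(row)
        return rows

    # PhotoE/H/J large and the rest small, as in FIG. 6.
    print(layout([("PhotoA", "small"), ("PhotoE", "large"),
                  ("PhotoH", "large"), ("PhotoJ", "large")]))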
Optionally, in this embodiment of the application, if there are multiple images to be displayed, the method further includes: classifying the images, via a preset classification model, according to their image content and the text information corresponding to them, to obtain several categories of images. At display time, a classified-display operation is received and the target category it indicates is determined; the images belonging to the target category are determined, and their corresponding cropped images are displayed in the interface.
In this embodiment of the application, the image content can be determined from the result of target content recognition on the image; the corresponding text information may be, for example, the shooting time and shooting location recorded as text. For the specific clustering method, reference may be made to the related art.
Specifically, in this embodiment, a CLIP multimodal model may be used to classify each image according to the image itself and its corresponding text content.
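A minimal sketch of zero-shot classification with a public CLIP checkpoint via the Hugging Face transformers library. The checkpoint name and the candidate captions are assumptions; the embodiment does not specify how prompts are built from the image content and text information.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def classify(image_path: str, captions: list[str]) -> str:
        """Score an image against text captions and return the best match."""
        image = Image.open(image_path)
        inputs = processor(text=captions, images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits_per_image   # shape (1, n_captions)
        return captions[int(logits.softmax(dim=1).argmax())]

    # Hypothetical categories mirroring FIG. 6.
    print(classify("photo.jpg", ["food", "landscape", "sports", "other"]))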
In addition, for some usage scenarios, the scheme of this embodiment can perform targeted clustering. For example, the image acquisition device may be a smart device placed in a home that automatically captures pictures of the household's memorable moments, or captures images on the user's instruction. In this case, face clustering may be performed on the family members so that each member corresponds to one group of images.
Referring to FIG. 6, the middle of FIG. 6 shows the display interface of the user device, with a search box above it. The user can enter a category to search for in the box; the user device receives this classified-display operation and takes the entered category as the target category it indicates.
In other implementations, selectable items corresponding to the categories may be displayed directly in the interface; the user selects a category by triggering an item, and the user device takes the category of the triggered item as the target category indicated by the classified-display operation.
The upper left of FIG. 6 shows example categories the user may enter: "dad", "mom", "kid", and "other" under the person category; "Hangzhou", "Shenzhen", "Beijing", and "other" under the place category; and "food", "landscape", "sports", and "other" under the content category.
It should be noted that the present application does not limit the execution order of step 45 relative to steps 41 to 44; they may be executed in parallel, or serially in any order.
Optionally, in an embodiment of the present application, the method further includes: determining a plurality of candidate images for generating a video; extracting keywords from the candidate images, generating a voice-over script from the extracted keywords, and generating speech content from the script; generating a candidate video from the candidate images, combining the candidate video and the speech content into a target video, selecting a video cover image from the candidate images of the target video, and using the cropped image corresponding to the cover image as the video's thumbnail; and displaying the thumbnail in the interface, the video being played once the thumbnail is triggered.
A video with a voice-over can thus be generated automatically from the image content, making the generated video richer; the video's thumbnail is shown in the interface, and the corresponding video plays after the user triggers the thumbnail.
In this embodiment, the candidate images for generating the video may be the images of a particular category obtained by the classification above.
Optionally, in this embodiment of the present application, after determining the candidate images, the method further includes: performing image semantic understanding analysis on the candidate images to obtain the image content, composition, and theme of each; and screening the candidate images according to preset screening conditions, where the conditions include at least one of: image content dissimilar, image composition dissimilar, image theme dissimilar.
Images with similar content, composition, or theme can thus be removed, ensuring diversity among the images in the video and improving how well the images in the generated video are arranged. The screening conditions may also include removing images of poor quality, further improving the arrangement. A dissimilarity screen of this kind is sketched below.
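One simple stand-in for the dissimilarity screening is perceptual-hash comparison, sketched here with the imagehash library; the patent does not name a particular similarity measure, so the hash method and distance threshold are assumptions.

    import imagehash
    from PIL import Image

    def screen_candidates(paths: list[str], max_distance: int = 8) -> list[str]:
        """Drop near-duplicate candidates by perceptual-hash distance."""
        kept, hashes = [], []
        for p in paths:
            h = imagehash.phash(Image.open(p))
            # `h - k` is the Hamming distance between the two hashes.
            if all(h - k > max_distance for k in hashes):
                kept.append(p)
                hashes.append(h)
        return kept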
Optionally, in this embodiment of the application, extracting the keywords from the candidate images and generating the voice-over script from them includes: performing semantic analysis on the candidate images to obtain a semantic analysis result; extracting the keywords from the semantic analysis result and the shooting information of the candidate images, where the keywords include at least one of: time, weather, mood, and items; and generating the voice-over script from the keywords using natural language generation technology.
By extracting keywords such as time, weather, mood, and items from the semantic analysis result and generating the script from them via natural language generation, descriptions corresponding to the keywords can be woven into the voice-over, improving its narrative quality and hence that of the generated video. A toy version of this step appears after this paragraph.
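The following is a toy, template-based stand-in for the keyword-to-script step; a real implementation would use a trained NLG model as described above, so the element names and the template are purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class DescriptionElements:
        time: str
        weather: str
        mood: str
        items: list

    def generate_voice_over(e: DescriptionElements) -> str:
        """Template-based stand-in for the NLG model mentioned in the text."""
        items = " and ".join(e.items)
        return (f"On a {e.weather} {e.time}, we captured {items}. "
                f"The mood of the moment was {e.mood}.")

    print(generate_voice_over(DescriptionElements(
        time="spring afternoon", weather="sunny",
        mood="joyful", items=["a picnic", "kites in the sky"])))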
Referring to fig. 7, a flow chart of the steps of a video generation method is shown.
A) The user selects a plurality of images as candidate images; the selections may be photos or videos.
B) The candidate images selected by the user are screened, and the topics of the remaining candidates are determined, yielding candidate images under several topics.
Specifically, in this embodiment, the selected images may be screened by shooting time, image similarity, image quality, aesthetic quality, and so on.
The screened images are then clustered by image content to obtain the images corresponding to each of several topics. The same image may belong to multiple topics, and each topic may contain one or more images.
C) Keywords are extracted from the information of the candidate images to obtain the description elements (i.e., the extracted keywords).
Specifically, keyword extraction may be based on the related information of a candidate image together with its semantic analysis result. The related information may be, for example, the shooting time and place of the image; the semantic recognition result may be the specific content of the target object in the image, the meaning the image expresses, and so on.
The extracted description elements may include time, weather, mood, items, and the like.
D) The description elements are input into a text generation model to obtain the voice-over script corresponding to the candidate images.
Specifically, the text generation model may be an NLG model.
E) The voice-over script is converted to speech using TTS technology, yielding segmented voice-over speech corresponding to the images.
Specifically, the voice-over speech generated from the keywords of a candidate image is used as the voice-over speech corresponding to that image.
A segment of voice-over speech may correspond to multiple candidate images, for example the candidate images belonging to the same topic.
F) The background music (score) matching the voice-over speech is determined and combined with it to obtain audio containing both score and voice-over.
Specifically, the score may be selected according to the rhythm, content, length, and other characteristics of the voice-over speech.
G) Based on the duration of each speech segment, a candidate video matching that duration is generated from the images corresponding to the segment.
The generated video may cycle through the images corresponding to the voice-over speech.
H) The candidate video and the audio are combined to obtain the target video with voice-over and score.
The resulting video can be saved as an image diary.
In this embodiment, steps F) and G) may be executed in parallel or in any order; this embodiment does not limit their order. A sketch of steps G) and H) follows.
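Steps G) and H) can be sketched with the moviepy library (v1 API assumed): each segment's images are shown for equal shares of that segment's audio duration, and the segment clips are concatenated into the target video. The data layout of `segments` is an assumption.

    from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

    def assemble_target_video(segments, out_path="target_video.mp4"):
        """`segments` is a list of (image_paths, audio_path) pairs, one pair
        per voice-over segment; the audio already contains voice-over + score."""
        clips = []
        for image_paths, audio_path in segments:
            audio = AudioFileClip(audio_path)
            per_image = audio.duration / len(image_paths)
            video = concatenate_videoclips(
                [ImageClip(p).set_duration(per_image) for p in image_paths])
            clips.append(video.set_audio(audio))   # step H) per segment
        concatenate_videoclips(clips).write_videofile(out_path, fps=24)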
After the target video is generated, a cover image can be selected from the candidate images used to generate the video and cropped according to the method above; the cropped image is displayed as the video's thumbnail, and the video plays once the thumbnail is triggered.
For example, in this embodiment, several images from a child's growth may serve as candidate images; with the scheme above, a video of the child's growth can be generated automatically with one tap, complete with a voice-over determined from the growth process and an automatically added score.
Another embodiment of the present application provides a system architecture for image display. As shown in FIG. 8, it includes a user device side and a server side.
Through an APP, the user device uploads the images to be displayed to the server's album OSS (object storage service); once the upload completes, the user can trigger the server through the APP to begin processing the images.
Referring to FIG. 8, the server may perform cropping, image classification, and image quality analysis on the images to be displayed as offline services.
For cropping, the specific implementation may follow the image-cropping implementations of the foregoing embodiments. The offline cropping service outputs a cropping scheme for each image, such as the positions of the crop's vertices or edges within the image.
As shown in FIG. 8, image classification may be face clustering or classification through a multi-class network. Face clustering can be applied to images containing family members; specifically, images containing the same member can be grouped into one class via face recognition. The multi-class network can classify the images into several classes by time, place, image content, and so on.
Image quality analysis is used to determine the image quality data of each image; for the specific method, refer to the foregoing embodiments.
Referring to FIG. 8, the cropping schemes, classification results, image quality data, and so on are written to an algorithm-result store. AI-Orch then crops each image according to the cropping scheme in the store to obtain the cropped images; it can also rank the images by their quality scores and determine their display sizes from the ranking.
In this embodiment, after receiving the user's display operation, the user device pulls the cropped images according to the AI-Orch processing results and displays them at their display sizes.
The user device can also pull the original, uncropped images; after the user selects a displayed cropped image, the original image corresponding to it can be shown enlarged.
Referring to FIG. 9A, a flowchart of the steps of an image display method provided by an embodiment of the present application, applied to a user device, is shown. As illustrated, the method includes:
S91. In response to a cropping operation by the user, perform display direction detection, target object detection, and image content symmetry detection on the image to be cropped, and determine the corresponding cropping rule according to the display direction detection result and the symmetry detection result, where the cropping rule indicates the display direction of the cropped image and the symmetry rule it should conform to.
For example, in this embodiment, an image may be captured by a camera and displayed; the display interface may include a "crop" button, and when the user triggers it, step S91 is executed to start the cropping operation on the image.
Alternatively, one or more images may be displayed on a mobile phone, computer, electronic album, or the like; the display interface may include a "crop" button, and when the user triggers it, step S91 is executed to start the cropping operation on the displayed image or images.
For specific detection methods, refer to the foregoing embodiments; details are not repeated here.
S92. Display a cropping preview image, where the preview includes a cropping marker corresponding to the cropping rule, and a crop region determined from the image region where the target object is located (per the target object detection result) together with the cropping rule.
Optionally, in this embodiment, if the cropping rule includes using the symmetry axis as the central axis of the cropped image, or using the center of symmetry as its center, the cropping marker corresponding to the rule includes the symmetry axis serving as the central axis, or the center of symmetry serving as the image center.
Referring to the left side of FIG. 9B, the preview may include a crop region; the area remaining after the crop region is removed is the retained area, whose height equals the height of the region where the detected target object, a bird, is located. The preview may further include the cropping marker corresponding to the cropping rule, for example the symmetry axis corresponding to the symmetry rule the cropped image conforms to.
Referring to the right side of FIG. 9B, the preview may likewise include a crop region whose removal leaves a retained area, the width of which equals the width of the region where the bird is located; the preview may again include the symmetry axis the cropped image conforms to.
Because the user can judge the display direction of the cropped image directly from the length and width of the crop region, the direction need not be marked separately. A minimal rendering of such a preview is sketched below.
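The preview rendering can be sketched with Pillow, drawing the retained crop rectangle and the symmetry-axis marker over the source image; the colors and solid-line styling are assumptions, as the embodiment does not specify the marker's appearance.

    from PIL import Image, ImageDraw

    def render_crop_preview(src: str, crop_box, out: str = "preview.png"):
        """Draw the crop area and the symmetry-axis marker over the image."""
        img = Image.open(src).convert("RGB")
        draw = ImageDraw.Draw(img)
        x0, y0, x1, y1 = crop_box
        draw.rectangle(crop_box, outline=(255, 0, 0), width=3)  # retained area
        cx = (x0 + x1) // 2
        draw.line([(cx, y0), (cx, y1)],
                  fill=(0, 255, 0), width=2)                    # symmetry axis
        img.save(out)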
S93. In response to a confirmation operation by the user, crop the image according to the cropping preview and display the cropped image in the interface.
The user may adjust the crop region, or trigger a confirmation button such as "√" in the interface; once the confirmation is input, the image is cropped according to the crop region in the preview and the cropped image is displayed.
Referring to FIG. 10A, a flowchart of the steps of a video generation method provided by an embodiment of the present application is shown. As illustrated, the method includes:
S101. Display a plurality of candidate images for generating a video.
Referring to FIG. 10B, several candidate images selected by the user are shown. The candidate images may relate to an event, in which case the generated target video is a video of that event; or they may relate to a person, in which case the target video may be a character introduction, a growth diary, and so on.
S102. In response to a voice-over generation operation by the user, display the keywords extracted from the candidate images and the voice-over script generated from them.
The display interface of the candidate images may include a "generate" button; when the user triggers it, the extracted keywords and the script generated from them are displayed.
The user can adjust the keywords and regenerate the script from the adjusted keywords, or edit the generated script directly.
S103. In response to a video generation operation by the user, combine the speech content generated from the voice-over script with the candidate video generated from the candidate images to produce the target video, and display a video cover image selected from the candidate images.
The interface displaying the keywords and the script may include a "confirm" button; after the user triggers it, generation of the target video begins and a cover image is selected from the candidate images for display.
S104. Display, in the interface, the thumbnail obtained by cropping the video cover image, and play the target video once the thumbnail is triggered.
In this embodiment, the method for obtaining the thumbnail by cropping the cover image may follow the image cropping scheme of the foregoing embodiments, and is not repeated here.
Referring to FIG. 11, a schematic structural diagram of an electronic device according to an embodiment of the present application is shown; this embodiment does not limit the specific implementation of the electronic device.
As shown in fig. 11, the electronic device may include: a processor (processor)1102, a communication Interface 1104, a memory 1106, and a communication bus 1108.
Wherein:
the processor 1102, communication interface 1104, and memory 1106 communicate with one another via a communication bus 1108.
A communication interface 1104 for communicating with other electronic devices or servers.
The processor 1102 is configured to execute the program 1110, and may specifically execute the relevant steps of the foregoing image display method embodiments.
In particular, the program 1110 can include program code that includes computer operating instructions.
The processor 1102 may be a CPU, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application. The one or more processors of a smart device may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs together with one or more ASICs.
The memory 1106 is configured to store the program 1110. The memory 1106 may include high-speed RAM and may also include non-volatile memory, such as at least one disk memory.
The program 1110 may be specifically adapted to cause the processor 1102 to perform the methods described in any of the preceding method embodiments.
For specific implementation of each step in the program 1110, reference may be made to corresponding steps and corresponding descriptions in units in the foregoing embodiments of the image display method, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
Embodiments of the present application further provide a computer program product, which includes computer instructions for instructing a computing device to perform operations corresponding to any of the above method embodiments.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present application may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present application.
The above-described methods according to embodiments of the present application may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium (such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk), or as computer code originally stored on a remote recording medium or non-transitory machine-readable medium, downloaded over a network, and stored on a local recording medium, so that the methods described herein can be processed by such software on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes storage components (e.g., RAM, ROM, flash memory) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the methods described herein. Further, when a general-purpose computer accesses code for implementing the methods shown herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing those methods.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The above embodiments are only used for illustrating the embodiments of the present application, and not for limiting the embodiments of the present application, and those skilled in the relevant art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present application, so that all equivalent technical solutions also belong to the scope of the embodiments of the present application, and the scope of patent protection of the embodiments of the present application should be defined by the claims.

Claims (14)

1. An image presentation method comprising:
performing display direction detection, target object detection, and image content symmetry detection on an image to be displayed;
determining a cropping rule corresponding to the image to be displayed according to the display direction detection result and the symmetry detection result of the image to be displayed, wherein the cropping rule indicates the display direction of the cropped image and the symmetry rule to which the cropped image conforms;
and determining an image area where a target object is located according to the target object detection result of the image to be displayed, cropping the image to be displayed according to the cropping rule based on the image area to obtain a cropped image, and displaying the cropped image.
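As a minimal, non-authoritative sketch of the claim 1 flow, the Python snippet below derives a crop box whose aspect ratio encodes the indicated display direction and whose center follows the symmetry rule when a vertical symmetry axis exists. The 4:3 / 3:4 aspect ratios, the (x, y, w, h) box format, and the clamping strategy are illustrative assumptions, not details from the patent text:

    import numpy as np

    def crop_by_rule(image, object_box, landscape=True, axis_x=None):
        """Crop so the object is kept, the aspect ratio encodes the display
        direction, and a detected symmetry axis (if any) becomes the
        central axis of the cropped image."""
        h, w = image.shape[:2]
        aspect = 4 / 3 if landscape else 3 / 4               # display-direction rule
        x, y, bw, bh = object_box
        cw = min(w, int(max(bw, bh * aspect)))               # at least as wide as the object
        ch = min(h, int(cw / aspect))
        cx = axis_x if axis_x is not None else x + bw // 2   # symmetry rule, if any
        left = int(np.clip(cx - cw // 2, 0, w - cw))
        top = int(np.clip(y + bh // 2 - ch // 2, 0, h - ch))
        return image[top:top + ch, left:left + cw]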
2. The method according to claim 1, wherein determining the cropping rule corresponding to the image to be displayed according to the display direction detection result and the symmetry detection result of the image to be displayed comprises:
determining a display direction adapted to a target object in the image to be displayed according to the target object detection result of the image to be displayed;
and determining a cropping rule indicating the display direction of the cropped image according to the display direction adapted to the target object and the display direction detection result of the image to be displayed, and determining a cropping rule indicating the symmetry rule to which the cropped image conforms according to the symmetry detection result.
3. The method of claim 1, wherein if the image to be displayed comprises a plurality of images, the method further comprises:
determining the display size corresponding to each image to be displayed according to image quality data of each image to be displayed, wherein the image quality data is determined according to at least one of the following parameters: the position of a subject in the image to be displayed, the definition of the image to be displayed, the brightness of the image to be displayed, and the degree to which the image to be displayed is liked;
wherein displaying the cropped image comprises:
displaying the cropped images corresponding to the plurality of images to be displayed in an interface according to the display sizes corresponding to the respective images to be displayed.
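A minimal Python sketch of the size assignment in claim 3, using two of the listed quality parameters: definition (approximated here by the variance of the Laplacian) and brightness. The weights, the 500.0 normalizer, and the three candidate sizes are illustrative assumptions:

    import cv2
    import numpy as np

    def quality_score(image):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # definition
        brightness = float(gray.mean())                     # brightness
        # Reward sharp images; penalize very dark or very bright ones.
        return (0.7 * min(sharpness / 500.0, 1.0)
                + 0.3 * (1.0 - abs(brightness - 128.0) / 128.0))

    def assign_display_sizes(images, sizes=((800, 600), (400, 300), (200, 150))):
        order = sorted(range(len(images)),
                       key=lambda i: quality_score(images[i]), reverse=True)
        # Higher-quality images are shown at larger sizes in the interface.
        return {i: sizes[min(rank, len(sizes) - 1)] for rank, i in enumerate(order)}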
4. The method according to claim 1, wherein the symmetry detection of the image content of the image to be displayed is achieved by:
horizontally flipping the image to be displayed to obtain a first flipped image, calculating a first similarity between the first flipped image and the image to be displayed, and determining a vertical symmetry axis of the image content of the image to be displayed according to the first similarity; or,
vertically flipping the image to be displayed to obtain a second flipped image, calculating a second similarity between the second flipped image and the image to be displayed, and determining a horizontal symmetry axis of the image content of the image to be displayed according to the second similarity.
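A minimal Python sketch of the flip-and-compare test of claim 4. The patent does not specify the similarity measure; normalized cross-correlation and the 0.9 threshold below are assumptions:

    import cv2
    import numpy as np

    def flip_similarity(image, flip_code):
        a = image.astype(np.float64).ravel()
        b = cv2.flip(image, flip_code).astype(np.float64).ravel()
        a -= a.mean()
        b -= b.mean()
        # Normalized cross-correlation in [-1, 1].
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def symmetry_axes(image, threshold=0.9):
        h, w = image.shape[:2]
        axes = {}
        if flip_similarity(image, 1) >= threshold:   # horizontal flip
            axes["vertical_axis_x"] = w // 2         # vertical symmetry axis
        if flip_similarity(image, 0) >= threshold:   # vertical flip
            axes["horizontal_axis_y"] = h // 2       # horizontal symmetry axis
        return axes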
5. The method of claim 4, wherein the cropping rule comprises: taking the symmetry axis as a central axis of the cropped image, or taking the symmetry center as the center of the cropped image.
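Continuing the sketch, the claim 5 rule can be applied by centering the crop window on the detected axis; the target width is an assumed input:

    import numpy as np

    def crop_around_axis(image, axis_x, target_w):
        # Crop to target_w columns with axis_x as the central axis.
        h, w = image.shape[:2]
        left = int(np.clip(axis_x - target_w // 2, 0, max(w - target_w, 0)))
        return image[:, left:left + target_w]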
6. The method of claim 1, wherein the method further comprises:
determining a plurality of candidate images for generating a video;
extracting keywords from the plurality of candidate images, generating voice-over copy according to the extracted keywords, and generating voice content according to the voice-over copy;
generating a candidate video from the plurality of candidate images, combining the candidate video and the voice content to generate a target video, selecting a video cover image from the candidate images in the target video, and taking a cropped image corresponding to the video cover image as a thumbnail of the video;
and displaying the thumbnail of the video in an interface, and playing the target video when the thumbnail is triggered.
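A minimal sketch of assembling the claim 6 target video, here with moviepy 1.x. The two-second duration per image, the file names, and picking the first image as cover are assumptions; the voice content is assumed to have already been synthesized to voiceover.mp3:

    from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

    def build_target_video(image_paths, voiceover_path, out_path="target.mp4"):
        # Each candidate image becomes a short clip; together they form the candidate video.
        clips = [ImageClip(p).set_duration(2) for p in image_paths]
        video = concatenate_videoclips(clips, method="compose")
        # Combine with the voice content (durations should roughly match).
        video = video.set_audio(AudioFileClip(voiceover_path))
        video.write_videofile(out_path, fps=24)
        # The cover image (here simply the first candidate) would then be
        # cropped into the thumbnail shown in the interface.
        return image_paths[0]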
7. The method of claim 6, wherein after determining the plurality of candidate images for generating the video, the method further comprises:
performing image semantic understanding analysis on the candidate images to obtain the image content, image composition, or image theme corresponding to each candidate image;
screening the plurality of candidate images according to preset screening conditions, wherein the preset screening conditions comprise at least one of the following: similar image content, similar image composition, or similar image theme.
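One way to realize the "similar image content" screening condition, sketched with perceptual hashes. The patent names no similarity measure, and whether screening keeps or removes similar candidates is not settled by the claim wording; this sketch keeps candidates similar to a reference image, with an assumed Hamming-distance threshold of 8:

    import imagehash
    from PIL import Image

    def screen_by_content(candidate_paths, reference_path, max_distance=8):
        # Perceptual hashes of images with similar content differ by a
        # small Hamming distance.
        ref = imagehash.phash(Image.open(reference_path))
        return [p for p in candidate_paths
                if imagehash.phash(Image.open(p)) - ref <= max_distance]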
8. The method of claim 7, wherein the extracting keywords from the plurality of candidate images and generating voice-over copy from the extracted keywords comprises:
performing semantic analysis on the candidate images to obtain a semantic analysis result;
extracting the keywords according to the semantic analysis result and shooting information corresponding to the plurality of candidate images, wherein the keywords comprise at least one of the following: time, weather, mood, and items;
and generating the voice-over copy from the keywords by means of a natural language generation technique.
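A minimal Python sketch of pulling a "time" keyword of claim 8 from the shooting information (EXIF) of an image with Pillow; weather, mood, and item keywords would come from the semantic analysis result and are out of scope here:

    from PIL import Image
    from PIL.ExifTags import TAGS

    def shooting_time_keyword(path):
        exif = Image.open(path).getexif()
        for tag_id, value in exif.items():
            if TAGS.get(tag_id) == "DateTime":
                return value  # e.g. "2021:11:29 10:30:00"
        return None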
9. An image display method, comprising:
in response to a cropping operation of a user, performing display direction detection, target object detection, and image content symmetry detection on an image to be cropped, and determining a cropping rule corresponding to the image to be cropped according to the display direction detection result and the symmetry detection result, wherein the cropping rule indicates the display direction of the cropped image and the symmetry rule to which the cropped image conforms;
displaying a cropping preview, wherein the cropping preview comprises a cropping identifier corresponding to the cropping rule, an image area where a target object is located, the image area being determined based on the target object detection result, and a cropping area determined by the cropping rule;
and in response to a confirmation operation of the user, cropping the image to be cropped according to the cropping preview, and displaying the cropped image in an interface.
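A minimal OpenCV sketch of rendering the claim 9 cropping preview: the target-object area, the cropping area, and a symmetry-axis cropping identifier are drawn as overlays. The (x, y, w, h) box format and the colors are assumptions:

    import cv2

    def draw_crop_preview(image, object_box, crop_box, axis_x=None):
        preview = image.copy()
        x, y, w, h = object_box
        cv2.rectangle(preview, (x, y), (x + w, y + h), (0, 255, 0), 2)        # target object
        cx, cy, cw, ch = crop_box
        cv2.rectangle(preview, (cx, cy), (cx + cw, cy + ch), (255, 0, 0), 2)  # cropping area
        if axis_x is not None:
            # Cropping identifier: the symmetry axis that becomes the central axis.
            cv2.line(preview, (axis_x, cy), (axis_x, cy + ch), (0, 0, 255), 1)
        return preview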
10. The method of claim 9, wherein, if the cropping rule comprises taking the symmetry axis as the central axis of the cropped image or taking the symmetry center as the center of the cropped image, the cropping identifier corresponding to the cropping rule comprises: the symmetry axis serving as the central axis, or the symmetry center serving as the image center.
11. The method of claim 9, wherein the method further comprises:
presenting a plurality of candidate images for generating a video;
in response to a voice-over generation operation of a user, displaying keywords extracted from the plurality of candidate images, and displaying the voice-over copy generated according to the keywords;
in response to a video generation operation of the user, combining the voice content generated based on the voice-over copy with the candidate video generated based on the candidate images to generate a target video, and displaying a video cover image selected from the plurality of candidate images;
and displaying, in an interface, a thumbnail obtained by cropping the video cover image, so that the target video is played when the thumbnail is triggered.
12. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the method according to any one of claims 1 to 11.
13. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1 to 11.
14. A computer program product comprising computer instructions for instructing a computing device to perform operations corresponding to the method of any one of claims 1 to 11.
CN202111430152.1A 2021-11-29 2021-11-29 Image display method, electronic device and storage medium Active CN114092495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111430152.1A CN114092495B (en) 2021-11-29 2021-11-29 Image display method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN114092495A (en) 2022-02-25
CN114092495B (en) 2023-01-31

Family

ID=80305315

Country Status (1)

CN: CN114092495B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106920141A (en) * 2015-12-28 2017-07-04 阿里巴巴集团控股有限公司 Page presentation content processing method and device
CN111159447A (en) * 2019-12-27 2020-05-15 海南简族信息技术有限公司 Picture display method, device and equipment and computer readable storage medium
CN113297514A (en) * 2020-04-13 2021-08-24 阿里巴巴集团控股有限公司 Image processing method, image processing device, electronic equipment and computer storage medium
CN111696112A (en) * 2020-06-15 2020-09-22 携程计算机技术(上海)有限公司 Automatic image cutting method and system, electronic equipment and storage medium
CN111754407A (en) * 2020-06-27 2020-10-09 北京百度网讯科技有限公司 Layout method, device and equipment for image display and storage medium
CN112132836A (en) * 2020-08-14 2020-12-25 咪咕文化科技有限公司 Video image clipping method and device, electronic equipment and storage medium
CN112052839A (en) * 2020-10-10 2020-12-08 腾讯科技(深圳)有限公司 Image data processing method, apparatus, device and medium
CN113126942A (en) * 2021-03-19 2021-07-16 北京城市网邻信息技术有限公司 Display method and device of cover picture, electronic equipment and storage medium
CN113709386A (en) * 2021-03-19 2021-11-26 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and computer readable storage medium
CN113573129A (en) * 2021-06-11 2021-10-29 阿里巴巴(中国)有限公司 Commodity object display video processing method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115108117A (en) * 2022-05-26 2022-09-27 盈合(深圳)机器人与自动化科技有限公司 Cutting method, system, terminal and computer storage medium
CN115108117B (en) * 2022-05-26 2023-06-27 盈合(深圳)机器人与自动化科技有限公司 Cutting method, cutting system, terminal and computer storage medium

Also Published As

Publication number Publication date
CN114092495B (en) 2023-01-31

Similar Documents

Publication Publication Date Title
US11483268B2 (en) Content navigation with automated curation
US8548249B2 (en) Information processing apparatus, information processing method, and program
US11317139B2 (en) Control method and apparatus
US7636450B1 (en) Displaying detected objects to indicate grouping
US8532347B2 (en) Generation and usage of attractiveness scores
US8259995B1 (en) Designating a tag icon
US7813526B1 (en) Normalizing detected objects
US7694885B1 (en) Indicating a tag with visual data
CN103988202A (en) Image attractiveness based indexing and searching
CN105684046B (en) Generate image composition
JP5890325B2 (en) Image data processing apparatus, method, program, and integrated circuit
CN111359201A (en) Jigsaw puzzle type game method, system and equipment
CN111586466A (en) Video data processing method and device and storage medium
US20230336671A1 (en) Imaging apparatus
CN114092495B (en) Image display method, electronic device and storage medium
US9117275B2 (en) Content processing device, integrated circuit, method, and program
CN110049180A (en) Shoot posture method for pushing and device, intelligent terminal
US11222208B2 (en) Portrait image evaluation based on aesthetics
CN106649710A (en) Picture pushing method, device and mobile terminal
CN107656760A (en) Data processing method and device, electronic equipment
CN110633377A (en) Picture cleaning method and device
JP2003330941A (en) Similar image sorting apparatus
CN111582281B (en) Picture display optimization method and device, electronic equipment and storage medium
US11461576B2 (en) Information processing method and related electronic device
JP2013171335A (en) Image processing system and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 554, 5 / F, building 3, 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: Room 508, 5 / F, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: Alibaba (China) Co.,Ltd.

GR01 Patent grant