CN111445439B - Image analysis method, device, electronic equipment and medium


Info

Publication number
CN111445439B
Authority
CN
China
Prior art keywords
image
target image
analysis result
feature
background area
Legal status: Active
Application number
CN202010121619.3A
Other languages
Chinese (zh)
Other versions
CN111445439A (en)
Inventor
谢文珍
黄恺
冯富森
Current Assignee
Future Vipkid Ltd
Original Assignee
Future Vipkid Ltd
Application filed by Future Vipkid Ltd
Priority to CN202010121619.3A
Publication of CN111445439A
Application granted
Publication of CN111445439B


Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0004 Industrial image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 7/11 Region-based segmentation (under G06T 7/10 Segmentation; Edge detection)
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T 7/90 Determination of colour characteristics
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/161 Detection; Localisation; Normalisation (under G06V 40/16 Human faces, e.g. facial parts, sketches or expressions)
    • G06V 40/168 Feature extraction; Face representation
    • G06T 2207/10024 Color image (under G06T 2207/10 Image acquisition modality)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image analysis method, an image analysis device, an electronic device, and a medium. In the application, at least one target image containing a human body area is selected based on the face recognition result of at least one original image; semantic segmentation is performed on the target image to extract the background area in the target image; a first feature corresponding to the background area is extracted; and a first analysis result of the background area is calculated based on the first feature. By applying the technical scheme of the application, at least one target image containing a human body area can be selected from a plurality of original images, the background area can be extracted by semantic segmentation, and the first feature corresponding to the background area can be analyzed to obtain the first analysis result, thereby improving the accuracy and practicality of image background evaluation over the related art.

Description

Image analysis method, device, electronic equipment and medium
Technical Field
The present invention relates to image processing technologies, and in particular, to an image analysis method, an image analysis device, an electronic device, and a medium.
Background
With the development of the Internet, online education has become increasingly popular. Online education is not limited to a fixed time or place and allows flexible learning, so learners can effectively improve their own skills. Compared with a traditional fixed classroom, an online classroom is mobile and convenient, and its images and audio are more intuitive and attractive. In the related art, the background image of an online classroom is evaluated and analyzed either by randomly sampling video images and screening them manually, or by detecting the full image of the video frame by frame. However, the inventors found that when the background image of an online classroom is evaluated and analyzed with the above techniques, the evaluation of the background image is inaccurate and of low practicality.
Disclosure of Invention
The embodiment of the application provides an image analysis method, an image analysis device, electronic equipment and a medium.
According to an aspect of an embodiment of the present application, there is provided an image analysis method, including:
selecting at least one target image based on a face recognition result of at least one original image, wherein the target image contains a human body area;
performing semantic segmentation on the target image, and extracting a background area in the target image; and
extracting a first feature corresponding to the background area, and calculating a first analysis result of the background area based on the first feature.
Optionally, in another embodiment of the above method according to the present application, the method further comprises:
identifying at least one object in the target image, and determining a second analysis result of the object based on the identification result;
a third analysis result of the target image is determined based on the first analysis result and the second analysis result.
Optionally, in another embodiment of the above method according to the present application, the identifying at least one object in the target image, and determining the second analysis result of the object based on the identification result includes:
Extracting at least one object region in the target image;
Extracting a second feature corresponding to the object region, and determining type information and/or attribute information of the object based on the second feature;
The second analysis result is determined based on the type information and/or attribute information.
Optionally, in another embodiment of the above method according to the present application, the selecting at least one target image based on the face recognition result of the at least one original image includes:
Detecting whether the original image contains a face image or not by using a preset face recognition model;
when the original image is determined to contain a face image, acquiring the size ratio of the face image to the corresponding original image and the position of the face image in the original image;
and screening an original image meeting preset conditions as the target image based on the size ratio and the position.
Optionally, in another embodiment of the above method according to the present application, the calculating the first analysis result of the background area based on the first feature includes:
based on the first feature, determining the color type and the color number corresponding to the background area;
the first analysis result is determined based on the color category and the number of colors.
Optionally, in another embodiment of the above method according to the present application, the calculating the first analysis result of the background area based on the first feature includes:
Calculating the matching degree of the first feature and a third feature corresponding to the human body region;
And determining the first analysis result based on the matching degree.
According to another aspect of an embodiment of the present application, there is provided an image analysis apparatus, including:
the selection module is used for selecting at least one target image based on the face recognition result of the at least one original image, wherein the target image comprises a human body area;
The extraction module is used for carrying out semantic segmentation on the target image and extracting a background area in the target image;
And the calculating module is used for extracting first features corresponding to the background area and calculating a first analysis result of the background area based on the first features.
According to still another aspect of an embodiment of the present application, there is provided an electronic apparatus including:
a memory for storing executable instructions; and
a processor, configured to communicate with the memory to execute the executable instructions so as to complete the operations of any one of the image analysis methods described above.
According to still another aspect of the embodiments of the present application, there is provided a computer-readable storage medium storing computer-readable instructions that, when executed, perform the operations of any one of the above-described image analysis methods.
The technical solutions provided by the embodiments of the application have at least the following beneficial effects:
When the scheme of this embodiment of the application is executed, face recognition is performed on at least one original image, and at least one target image containing a human body area is selected based on the face recognition result. Semantic segmentation is then performed on the target image, that is, the foreground area and the background area of the target image are separated, and the background area in the target image is extracted. A first feature corresponding to the background area is extracted, and a first analysis result corresponding to the background area is calculated based on the first feature. In this way, at least one target image can be selected from a plurality of original images, the background area can be extracted from the target image, and the first feature corresponding to the background area can be analyzed to obtain the first analysis result, thereby improving the accuracy and practicality of image background evaluation over the related art.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system architecture of an image analysis method according to the present application;
FIG. 2 is a schematic flow chart of an image analysis method according to the present application;
FIG. 3 is a schematic flow chart of an image analysis method according to the present application;
FIG. 4 is a schematic flow chart of an image analysis method according to the present application;
FIG. 5 is a schematic diagram of an image analysis device according to the present application;
Fig. 6 is a schematic diagram showing the structure of an electronic device according to the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise.
Also, it should be understood that, for convenience of description, the dimensions of the various parts shown in the drawings are not drawn to scale.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
In addition, the technical solutions of the embodiments of the present application may be combined with each other, but only on the basis that a person skilled in the art can implement the combination; when a combination of technical solutions is contradictory or cannot be implemented, the combination should be considered as not existing and as not falling within the scope of protection claimed by the present application.
It should be noted that, in the embodiments of the present application, all directional indicators (such as up, down, left, right, front, and rear) are merely used to explain the relative positional relationship, movement conditions, and the like between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change correspondingly.
A method for performing image analysis according to an exemplary embodiment of the present application is described below with reference to fig. 1 to 4. It should be noted that the following application scenarios are only shown to facilitate understanding of the spirit and principles of the present application, and embodiments of the present application are not limited in this respect. Rather, embodiments of the application may be applied to any applicable scenario.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which an image analysis method or image analysis apparatus of an embodiment of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 105 may be a server cluster formed by a plurality of servers.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices with display screens including, but not limited to, smartphones, tablet computers, portable computers, desktop computers, and the like.
The terminal apparatuses 101, 102, 103 in the present application may be terminal apparatuses providing various services. For example, the user selects at least one target image based on the face recognition result of at least one original image by means of the terminal device 103 (or the terminal device 101 or 102), wherein the target image contains a human body region; performs semantic segmentation on the target image and extracts a background area in the target image; and extracts a first feature corresponding to the background area and calculates a first analysis result of the background area based on the first feature.
It should be noted that, the image analysis method provided in the embodiment of the present application may be executed by one or more of the terminal devices 101, 102, 103 and/or the server 105, and accordingly, the image analysis apparatus provided in the embodiment of the present application is generally disposed in the corresponding terminal device and/or the server 105, but the present application is not limited thereto.
The application also provides an image analysis method, an image analysis device, a target terminal and a medium.
Fig. 2 schematically shows a flow diagram of an image analysis method according to an embodiment of the application. As shown in fig. 2, the method includes:
S201, selecting at least one target image based on the face recognition result of at least one original image, wherein the target image comprises a human body area.
In the present application, the device that acquires the original image is not specifically limited; it may be, for example, an intelligent device or a server. The intelligent device may be a PC (Personal Computer), a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, or a mobile terminal device with a display function.
Optionally, the original image is not specifically limited in the present application, that is, the original image in the present application may be any image information. In a preferred embodiment, the original image may be an image containing a region of the human body, or the like.
The number of the original images is not particularly limited, and may be, for example, one or ten.
Furthermore, after the original images are acquired, in order to ensure that the image quality of the finally selected target image meets the requirements, the application may first detect the sharpness parameter and the brightness parameter corresponding to each original image.
It will be appreciated that the present application may calculate a sharpness value for each image based on its pixel neighborhood parameters. A pixel neighborhood parameter here is a parameter reflecting the differences between neighboring pixels of the image: the larger the pixel neighborhood differences of an image, the higher the sharpness they reflect. Therefore, the application may generate a corresponding threshold based on the pixel neighborhood parameters, remove original images whose sharpness is below the preset threshold, and take the remaining original images that meet the sharpness standard as candidate images for determining the target image.
In addition, in order to ensure the quality of the target image selected from the original images, the application may detect the brightness parameter of each candidate image and determine the image brightness of each original image from that parameter.
Image brightness is the luminance of the picture, measured in candela per square metre (cd/m²). Perceived brightness forms a continuum from a white surface to a black surface and is determined by reflectance. When the brightness of an image is too high, the image looks glaring or dazzling; when the brightness is too low, the image looks dark.
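To make this pre-filtering step concrete, the following is a minimal sketch in Python, assuming OpenCV and NumPy are available; the Laplacian variance stands in for the pixel-neighborhood sharpness parameter, and both thresholds are illustrative values rather than ones prescribed by the application.

    import cv2
    import numpy as np

    SHARPNESS_MIN = 100.0             # hypothetical sharpness threshold
    BRIGHTNESS_RANGE = (60.0, 200.0)  # hypothetical acceptable mean brightness

    def sharpness(image_bgr: np.ndarray) -> float:
        # Variance of the Laplacian: larger neighborhood differences mean a sharper image.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        return float(cv2.Laplacian(gray, cv2.CV_64F).var())

    def brightness(image_bgr: np.ndarray) -> float:
        # Mean luma as a simple brightness parameter.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        return float(gray.mean())

    def prefilter(paths: list[str]) -> list[str]:
        # Keep only original images that meet the sharpness and brightness standards.
        kept = []
        for path in paths:
            img = cv2.imread(path)
            if img is None:
                continue
            if sharpness(img) < SHARPNESS_MIN:
                continue
            lo, hi = BRIGHTNESS_RANGE
            if lo <= brightness(img) <= hi:
                kept.append(path)
        return kept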
Further, after the above processing is performed on the original images, the remaining original images to be processed are obtained. These images are identified using a face recognition technique, and at least one target image is selected from them according to the recognition result, wherein the target image contains a human body area.
S202, carrying out semantic segmentation on the target image, and extracting a background area in the target image.
Semantic segmentation refers to assigning every pixel in an image to a corresponding category, that is, classification at the pixel level. Extracting the background area of the target image may be implemented using a person image segmentation technique. Person image segmentation separates the foreground and the background of a photo of a person; the goal is to classify each pixel of the input picture as foreground or background and obtain a pixel-level classification map. Specifically, the background area may be extracted with a classical image segmentation algorithm, such as a threshold segmentation algorithm, an edge-based segmentation algorithm, a region growing algorithm, or the watershed algorithm; these are only suitable when the image is relatively simple, and it is difficult for them to obtain an ideal segmentation result on an image with complex colors. Further, FCN and its derived methods, such as SegNet and DeepLab, which were designed for generic image semantic segmentation, can achieve high-precision person image segmentation.
Furthermore, the application may segment the original image using preset segmentation conditions, and then adjust the segmentation threshold according to the gray values and the numbers of pixels in the image, thereby determining the background area in the target image and extracting it automatically.
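As an illustration of this step, the sketch below extracts a background mask with a pretrained DeepLabV3 model from torchvision (Pascal VOC label set, where class 15 is "person"). This is one possible realization under those assumptions, not the segmentation model prescribed by the application; the file name is hypothetical.

    import torch
    import torchvision.transforms.functional as TF
    from torchvision.models.segmentation import deeplabv3_resnet50
    from PIL import Image

    model = deeplabv3_resnet50(weights="DEFAULT").eval()
    PERSON_CLASS = 15  # Pascal VOC index for "person"

    def background_mask(image: Image.Image) -> torch.Tensor:
        # Returns a boolean mask that is True where a pixel belongs to the background.
        x = TF.to_tensor(image)
        x = TF.normalize(x, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        with torch.no_grad():
            out = model(x.unsqueeze(0))["out"][0]  # (num_classes, H, W)
        labels = out.argmax(dim=0)
        return labels != PERSON_CLASS  # everything that is not the person

    img = Image.open("target.jpg").convert("RGB")  # hypothetical file name
    mask = background_mask(img)  # background pixels for the feature extraction below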
S203, extracting a first feature corresponding to the background area, and calculating a first analysis result of the background area based on the first feature.
The first feature includes image parameters such as a color parameter and a sharpness parameter. The first analysis result is used to judge whether the original image corresponding to the background area meets the standard of the target image.
Taking the online education industry as an example, the first interaction between a teacher and a student often starts with the avatar photo of the other party. For example, a clear, standard picture of a teacher tends to attract many students to book lessons, but at present the pictures uploaded by teachers often have many problems, covering image quality, image layout, image background, and so on. In terms of image quality, the photo may be too small, the resolution insufficient, or the photo irregularly shaped; in terms of image layout, the face may be too large or too small, the person may be off-center, or the face may not be frontal. As for the image background, the background may be too plain or too cluttered, or the person may be holding a pet, and so on. In addition, the expression of the person in the image may be too serious, and the like.
Furthermore, the existing auditing schemes at the present stage fall into three categories: simple machine auditing, manual auditing, and a combination of the two. In simple machine auditing, pictures are initially screened on simple image recognition dimensions, such as picture size, resolution, and sharpness. Manual auditing means that pictures are judged and scored by people. The combination of machine and manual auditing means that the machine gives an approximate score, and pictures whose machine-given scores are abnormal are then checked dimension by dimension manually.
In addition, in the online education field, when a teacher conducts teaching activities, a warm and comfortable teaching environment needs to be arranged; however, whether the arrangement of the classroom is proper and comfortable is an index that is difficult to quantify, detect, and evaluate. Therefore, how to comprehensively select an image with an excellent person-and-background composition from a plurality of original images is a problem to be solved by those skilled in the art.
It should be noted that, after the background areas corresponding to the original images are obtained, the first analysis result of each background area may be calculated based on its first features, such as the color parameter and the sharpness parameter.
When the scheme of this embodiment of the application is executed, face recognition is performed on at least one original image, and at least one target image containing a human body area is selected based on the face recognition result. Semantic segmentation is then performed on the target image, that is, the foreground area and the background area of the target image are separated, and the background area in the target image is extracted. A first feature corresponding to the background area is extracted, and a first analysis result corresponding to the background area is calculated based on the first feature. In this way, at least one target image can be selected from a plurality of original images, the background area can be extracted from the target image, and the first feature corresponding to the background area can be analyzed to obtain the first analysis result, thereby improving the accuracy and practicality of image background evaluation over the related art.
Fig. 3 schematically shows a flow diagram of an image analysis method according to an embodiment of the application. As shown in fig. 3, the method includes:
S301, detecting whether the original image contains a face image by using a preset face recognition model.
S302, when the original image is determined to contain the face image, the size ratio of the face image to the corresponding original image and the position of the face image in the original image are obtained.
It can be appreciated that the present application can further require that the original image contain exactly one face image of the user. For example, in the case of avatar pictures, the application can remove original images in which the number of face images is not one.
S303, screening an original image meeting preset conditions as the target image based on the size ratio and the position.
Further, in order to determine whether the original image is an original image meeting a preset condition, the application can determine the target image from the original images according to the size proportion of the face image corresponding to each original image to the corresponding original image and the position of the face image in the original image.
It will be appreciated that for the user's image, the criteria for its selection may be determined from whether it clearly reveals the face at the time the image was taken, and whether the face position is located in the central region of the image. Therefore, the application can further acquire the face image corresponding to the user and determine whether the corresponding image meets the requirements according to the size of the face image.
The application can determine whether each original image is a target image according to whether the size proportion is in a preset standard interval. The standard interval is not particularly limited, and may be, for example, a range of 70% to 80%, a range of 75% to 85%, or the like.
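A hedged sketch of S301 to S303 follows, using OpenCV's bundled Haar cascade as the "preset face recognition model"; the ratio interval and the centering tolerance are illustrative values, not the ones fixed by the application.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    RATIO_INTERVAL = (0.05, 0.35)  # hypothetical acceptable face-area ratio
    CENTER_TOL = 0.25              # face center within 25% of the image center

    def is_target(image_bgr) -> bool:
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) != 1:  # require exactly one face image
            return False
        x, y, w, h = faces[0]
        ih, iw = gray.shape
        ratio = (w * h) / float(iw * ih)    # size ratio of face to original image
        cx, cy = x + w / 2.0, y + h / 2.0   # face position in the image
        centered = (abs(cx / iw - 0.5) < CENTER_TOL and
                    abs(cy / ih - 0.5) < CENTER_TOL)
        lo, hi = RATIO_INTERVAL
        return lo <= ratio <= hi and centered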
S304, carrying out semantic segmentation on the target image, and extracting a background area in the target image.
See S202 in fig. 2, and will not be described again.
S305, extracting first features corresponding to the background area, and determining the color types and the color numbers corresponding to the background area based on the first features.
S306, determining a first analysis result based on the color types and the color number.
The first feature is a first color parameter, and the color types and the number of colors corresponding to each background area are determined according to the first color parameter. For an original image containing a face image, too many color components in the background area, or colors that are too bright or too dark, may affect the aesthetics of the image. Therefore, the present application can determine the first analysis result, i.e., whether the background area satisfies the conditions for the background area of a target image, based on the color types and the number of colors corresponding to each background area.
Further, when the color type and number information of a background area is detected, the present application may check whether the number of colors contained in the background exceeds a predetermined number, and whether a predetermined color type is included among the color types. The first analysis result, that is, whether the original image corresponding to the background area meets the standard of the target image, is determined according to the detection result.
The present application does not limit the color types and the number of colors of the background area. For example, after detecting that the background area contains the color types white and black, it may be judged that the background area does not conform to the background area of a target image; or, after detecting that the background area contains more than 3 colors, the first analysis result may be that the original image corresponding to the background area does not meet the standard of the target image.
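One way to realize the color-count check of S305 and S306 is sketched below, assuming scikit-learn is available; k-means quantization approximates the "color types", and the 3-color limit mirrors the example threshold above, while the cluster count and the 5% dominance cutoff are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    MAX_COLORS = 3  # example threshold from the description above
    K = 8           # number of candidate clusters, an illustrative choice

    def background_color_analysis(background_pixels: np.ndarray) -> bool:
        # background_pixels: (N, 3) RGB array of the segmented background.
        km = KMeans(n_clusters=K, n_init=10).fit(background_pixels)
        counts = np.bincount(km.labels_, minlength=K)
        # Treat a cluster covering at least 5% of the background as one "color type".
        dominant = (counts / counts.sum()) >= 0.05
        return int(dominant.sum()) <= MAX_COLORS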
S307, at least one object area in the target image is extracted.
And S308, extracting second features corresponding to the object regions, and determining type information and/or attribute information of the object based on the second features.
S309, determining a second analysis result based on the type information and/or the attribute information.
S310, determining a third analysis result of the target image based on the first analysis result and the second analysis result.
The second feature is an object parameter. The object parameters corresponding to the object regions in each background area are extracted based on a preset neural network image detection model, and the type information and/or attribute information of the objects is determined based on the object parameters.
In one possible implementation, the type information of each object is determined based on the object parameters, and the second analysis result is determined based on the type information of each object.
Furthermore, the application can extract the object feature parameters corresponding to the object region based on the preset neural network image detection model and determine whether an object is contained. It can be understood that after it is determined that an object region is included, the second analysis result, that is, whether the original image corresponding to the object region meets the standard of the target image, is determined from the type information of the object.
Furthermore, the neural network image detection model in the application may be a convolutional neural network. A convolutional neural network (Convolutional Neural Network, CNN) is a feedforward neural network that contains convolutional computation and has a deep structure, and is one of the representative algorithms of deep learning. A convolutional neural network has feature learning (representation learning) capability and can perform translation-invariant classification of input information according to its hierarchical structure. Thanks to the strong feature characterization capability of CNNs on images, they have achieved remarkable results in fields such as image classification, object detection, and semantic segmentation.
Before the object feature parameters corresponding to the object region are extracted with the neural network image detection model, the detection network architecture may be defined as a deep convolutional neural network based on a cascade of a region proposal network, a region regression network, and a key point regression network. In the adopted deep convolutional neural network, the region proposal network takes 16×16×3 image data as input, consists of a fully convolutional architecture, and outputs the confidence and rough vertex positions of the region proposal boxes of the object; the region regression network takes 32×32×3 image data as input, consists of a convolutional plus fully connected architecture, and outputs the confidence and precise vertex positions of the background region; the key point regression network takes 64×64×3 image data as input, consists of a convolutional plus fully connected architecture, and outputs the confidence and positions of the object shape information.
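A minimal PyTorch sketch of this cascade is given below. The description fixes only the input sizes and the conv versus fully connected split; the layer widths, kernel sizes, and output dimensions here are assumptions for illustration.

    import torch
    import torch.nn as nn

    class ProposalNet(nn.Module):
        # Fully convolutional region proposal network, 16x16x3 input.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.conf = nn.Conv2d(32, 1, 1)   # proposal confidence
            self.verts = nn.Conv2d(32, 8, 1)  # rough vertex positions (4 corners, x/y)

        def forward(self, x):
            f = self.features(x)
            return torch.sigmoid(self.conf(f)), self.verts(f)

    def conv_fc_net(in_size: int, out_dim: int) -> nn.Module:
        # Convolution plus fully connected architecture shared by the two regression nets.
        return nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * (in_size // 2) ** 2, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    region_regression = conv_fc_net(32, 1 + 8)     # confidence + precise vertices
    keypoint_regression = conv_fc_net(64, 1 + 10)  # confidence + shape key points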
In the present application, the manner of determining the second analysis result based on the type information of each object is not particularly limited. For example, the present application may classify each object as a teaching object or a non-teaching object. Teaching objects may include books, blackboards, tables, chairs, computers, and the like; non-teaching objects may include bowls and chopsticks, game machines, cups, and the like.
Further, when the object region is detected to include a teaching object, the second analysis result may be determined to be that the corresponding background area conforms to the background area of a target image; when the object region is detected to include a non-teaching object, the second analysis result may be determined to be that the original image corresponding to the object region does not meet the standard of the target image.
In one possible embodiment, the attribute information of each object is determined based on the object parameters; the attribute information includes at least one of color information, quantity information, and size information, and the second analysis result, that is, whether the original image corresponding to the object region meets the standard of the target image, is determined based on the type information and the attribute information of each object.
Furthermore, the application can extract the object feature parameters of each object region based on the preset neural network image detection model and determine whether an object is contained. It can be understood that after it is determined that an object is included, the second analysis result, that is, whether the original image corresponding to the object region meets the standard of the target image, is determined from the attribute information of the object.
In the present application, the manner of determining the second analysis result based on the attribute information of each object is not particularly limited. The attribute information may be information reflecting the color, number, and size of the objects. It can be appreciated that for an original image containing a human body area, too many color components in the background area, too many objects, or objects that are too large may affect the aesthetics of the image. Therefore, the application can determine the second analysis result, that is, whether the original image corresponding to the object region meets the standard of the target image, comprehensively based on these three parameters.
Further, from the color information, quantity information, and size information of the object region, the present application may detect whether the color composition of the objects exceeds a first number, whether the number of objects exceeds a second number, and whether their size exceeds a preset ratio of the corresponding background image, and determine the second analysis result, that is, whether the original image corresponding to the object region meets the standard of the target image, accordingly.
Further, a third analysis result, that is, whether the original image meets the standard of the target image, is obtained based on the first analysis result and the second analysis result, that is, according to both the background area and the object regions of the original image.
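As an illustration of S307 to S310, the sketch below combines a detector's object labels with the attribute checks and then merges the first and second analysis results into the third. The teaching and non-teaching object lists come from the example above; the count and size thresholds are illustrative assumptions.

    TEACHING = {"book", "blackboard", "table", "chair", "computer"}
    NON_TEACHING = {"bowl", "chopsticks", "game machine", "cup"}

    MAX_OBJECTS = 5       # hypothetical "second number"
    MAX_SIZE_RATIO = 0.3  # hypothetical preset ratio of object to background

    def second_analysis(detections) -> bool:
        # detections: list of (label, area_ratio) pairs from the object detector.
        if len(detections) > MAX_OBJECTS:
            return False
        for label, area_ratio in detections:
            if label in NON_TEACHING:
                return False
            if area_ratio > MAX_SIZE_RATIO:
                return False
        return True

    def third_analysis(first_ok: bool, second_ok: bool) -> bool:
        # The target image must pass both the background and the object analysis.
        return first_ok and second_ok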
When the scheme of this embodiment of the application is executed, face recognition is performed on at least one original image, and at least one target image containing a human body area is selected based on the face recognition result. Semantic segmentation is then performed on the target image, that is, the foreground area and the background area of the target image are separated, and the background area in the target image is extracted. A first feature corresponding to the background area is extracted, and a first analysis result corresponding to the background area is calculated based on the first feature. In this way, at least one target image can be selected from a plurality of original images, the background area can be extracted from the target image, and the first feature corresponding to the background area can be analyzed to obtain the first analysis result, thereby improving the accuracy and practicality of image background evaluation over the related art.
Fig. 4 schematically shows a flow diagram of an image analysis method according to an embodiment of the application. As shown in fig. 4, the method includes:
S401, detecting whether the original image contains a face image or not by using a preset face recognition model.
S402, when the original image is determined to contain the face image, the size ratio of the face image to the corresponding original image and the position of the face image in the original image are obtained.
S403, screening an original image meeting preset conditions as a target image based on the size duty ratio and the position.
S404, carrying out semantic segmentation on the target image, and extracting a background area in the target image.
Generally, S401 to S404 correspond to S301 to S304 in fig. 3 and are not described again here.
S405, extracting a first feature corresponding to a background area, and calculating the matching degree of the first feature and a third feature corresponding to the human body area.
S406, determining the first analysis result based on the matching degree.
The third feature is a second color parameter, and the color types and the number of colors corresponding to the human body area are determined according to the second color parameter. Further, in order to ensure the aesthetics of the target image, the second color parameter corresponding to the human body area in the original image may be determined, and the first analysis result, which is used to determine whether the original image corresponding to the background area meets the standard of the target image, is determined based on the degree of color matching between the human body area and the background image.
It can be understood that when the user wears colorful clothes, the corresponding background image should be a background with rich color information; when the user wears plain clothes, the corresponding background image should be a background with simpler color information.
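A sketch of S405 and S406 follows; histogram intersection over hue stands in for the matching-degree computation, so that color-rich clothing is matched with a color-rich background. The hue binning and the minimum matching degree are assumptions; pixels are expected as 8-bit BGR arrays, e.g. from OpenCV.

    import cv2
    import numpy as np

    MATCH_MIN = 0.4  # hypothetical minimum matching degree

    def hue_hist(pixels_bgr: np.ndarray, bins: int = 16) -> np.ndarray:
        # pixels_bgr: (N, 3) uint8 array of BGR pixel values.
        hsv = cv2.cvtColor(pixels_bgr.reshape(-1, 1, 3), cv2.COLOR_BGR2HSV)
        hist, _ = np.histogram(hsv[:, 0, 0], bins=bins, range=(0, 180))
        return hist / max(hist.sum(), 1)

    def matching_degree(background_px: np.ndarray, body_px: np.ndarray) -> float:
        # Histogram intersection in [0, 1]: 1 means identical hue distributions.
        return float(np.minimum(hue_hist(background_px), hue_hist(body_px)).sum())

    def first_analysis_by_matching(background_px, body_px) -> bool:
        return matching_degree(background_px, body_px) >= MATCH_MIN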
S407, extracting at least one object area in the target image.
S408, extracting a second feature corresponding to the object region, and determining type information and/or attribute information of the object based on the second feature.
S409, determining a second analysis result based on the type information and/or the attribute information.
S410, determining a third analysis result of the target image based on the first analysis result and the second analysis result.
Generally, S407 to S410 correspond to S307 to S310 in fig. 3 and are not described again here.
When the scheme of this embodiment of the application is executed, face recognition is performed on at least one original image, and at least one target image containing a human body area is selected based on the face recognition result. Semantic segmentation is then performed on the target image, that is, the foreground area and the background area of the target image are separated, and the background area in the target image is extracted. A first feature corresponding to the background area is extracted, and a first analysis result corresponding to the background area is calculated based on the first feature. In this way, at least one target image can be selected from a plurality of original images, the background area can be extracted from the target image, and the first feature corresponding to the background area can be analyzed to obtain the first analysis result, thereby improving the accuracy and practicality of image background evaluation over the related art.
In another embodiment of the present application, as shown in fig. 5, the present application further provides an image analysis apparatus. The device comprises a selection module 501, an extraction module 502 and a calculation module 503, wherein:
A selection module 501, configured to select at least one target image based on a face recognition result of at least one original image, where the target image includes a human body region;
the extracting module 502 is configured to perform semantic segmentation on the target image, and extract a background area in the target image;
A calculating module 503, configured to extract a first feature corresponding to the background area, and calculate a first analysis result of the background area based on the first feature.
Optionally, the apparatus further comprises:
a second module 504, configured to identify at least one object in the target image, and determine a second analysis result of the object based on the identification result;
A third module 505, configured to determine a third analysis result of the target image based on the first analysis result and the second analysis result.
Optionally, the second module 504 includes:
a first unit for extracting at least one object region in the target image;
a second unit, configured to extract a second feature corresponding to the object region, and determine type information and/or attribute information of the object based on the second feature;
And a third unit for determining the second analysis result based on the type information and/or attribute information.
Optionally, the calculating module 503 includes:
A fourth unit, configured to determine, based on the first feature, a color type and a color number corresponding to the background area;
and a fifth unit configured to determine the first analysis result based on the color type and the number of colors.
Optionally, the calculating module 503 includes:
a calculating unit, configured to calculate a matching degree of the first feature and a third feature corresponding to the human body region;
And the determining unit is used for determining the first analysis result based on the matching degree.
Optionally, the selection module 501 includes:
The detection unit is used for detecting whether the original image contains a face image or not by using a preset face recognition model;
The position determining unit is used for acquiring the size ratio of the face image to the corresponding original image and the position of the face image in the original image when the face image is determined to be contained in the original image;
And the screening unit is used for screening the original image meeting the preset condition as the target image based on the size ratio and the position.
When the scheme of this embodiment of the application is executed, face recognition is performed on at least one original image, and at least one target image containing a human body area is selected based on the face recognition result. Semantic segmentation is then performed on the target image, that is, the foreground area and the background area of the target image are separated, and the background area in the target image is extracted. A first feature corresponding to the background area is extracted, and a first analysis result corresponding to the background area is calculated based on the first feature. In this way, at least one target image can be selected from a plurality of original images, the background area can be extracted from the target image, and the first feature corresponding to the background area can be analyzed to obtain the first analysis result, thereby improving the accuracy and practicality of image background evaluation over the related art.
Fig. 6 is a block diagram of a logic structure of an electronic device, according to an example embodiment. For example, the electronic device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, an electronic device 600 may include one or more of the following components: a processor 601 and a memory 602.
Processor 601 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). Processor 601 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 601 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 601 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction for execution by the processor 601 to implement the image analysis method provided by the method embodiments of the present application.
In some embodiments, the electronic device 600 may further optionally include: a peripheral interface 603, and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 603 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 604, a touch display 605, a camera 606, audio circuitry 607, a positioning component 608, and a power supply 609.
Peripheral interface 603 may be used to connect at least one Input/Output (I/O) related peripheral to processor 601 and memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 601, memory 602, and peripheral interface 603 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 604 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 604 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 604 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 604 may also include NFC (Near Field Communication) related circuits, which the present application does not limit.
The display screen 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 605 is a touch display, the display 605 also has the ability to collect touch signals on or above the surface of the display 605. The touch signal may be input to the processor 601 as a control signal for processing. At this point, the display 605 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, there may be one display 605, providing the front panel of the electronic device 600; in other embodiments, there may be at least two displays 605, respectively disposed on different surfaces of the electronic device 600 or in a folded design; in still other embodiments, the display 605 may be a flexible display disposed on a curved surface or a folded surface of the electronic device 600. The display 605 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 605 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 606 is used to capture images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, the at least two rear cameras are any one of a main camera, a depth camera, a wide-angle camera and a tele camera, so as to realize that the main camera and the depth camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize a panoramic shooting and Virtual Reality (VR) shooting function or other fusion shooting functions. In some embodiments, camera assembly 606 may also include a flash. The flash lamp can be a single-color temperature flash lamp or a double-color temperature flash lamp. The dual-color temperature flash lamp refers to a combination of a warm light flash lamp and a cold light flash lamp, and can be used for light compensation under different color temperatures.
The audio circuit 607 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 601 for processing, or inputting the electric signals to the radio frequency circuit 604 for voice communication. For purposes of stereo acquisition or noise reduction, the microphone may be multiple and separately disposed at different locations of the electronic device 600. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 607 may also include a headphone jack.
The positioning component 608 is used to locate the current geographic location of the electronic device 600 to enable navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 609 is used to power the various components in the electronic device 600. The power source 609 may be alternating current, direct current, disposable battery or rechargeable battery. When the power source 609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 600 further includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyroscope sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the electronic device 600. For example, the acceleration sensor 611 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 601 may control the touch display screen 605 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 611. The acceleration sensor 611 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 612 may detect a body direction and a rotation angle of the electronic device 600, and the gyro sensor 612 may cooperate with the acceleration sensor 611 to collect a 3D motion of the user on the electronic device 600. The processor 601 may implement the following functions based on the data collected by the gyro sensor 612: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 613 may be disposed at a side frame of the electronic device 600 and/or at an underlying layer of the touch screen 605. When the pressure sensor 613 is disposed on a side frame of the electronic device 600, a grip signal of the user on the electronic device 600 may be detected, and the processor 601 performs a left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed at the lower layer of the touch display screen 605, the processor 601 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 605. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used to collect the user's fingerprint; the processor 601 identifies the user's identity from the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 itself identifies the user's identity from the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 601 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 614 may be provided on the front, back, or side of the electronic device 600. When a physical key or vendor logo is provided on the electronic device 600, the fingerprint sensor 614 may be integrated with the physical key or vendor logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, the processor 601 may control the display brightness of the touch display screen 605 based on the ambient light intensity collected by the optical sensor 615: when the ambient light intensity is high, the display brightness of the touch display screen 605 is increased; when it is low, the display brightness is decreased. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 based on the ambient light intensity collected by the optical sensor 615.
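One plausible mapping from ambient light to display brightness, sketched in Python; the logarithmic curve and the lux bounds are illustrative assumptions, not taken from the disclosure:

    # Map ambient illuminance (lux) to a brightness level in [0, 1] on a
    # log scale, so brightness rises with ambient light as described above.
    import math

    def brightness_for_lux(lux: float, lo: float = 1.0, hi: float = 10_000.0) -> float:
        lux = min(max(lux, lo), hi)          # clamp to the supported range
        return math.log(lux / lo) / math.log(hi / lo)

    for lux in (5, 300, 8000):
        print(lux, round(brightness_for_lux(lux), 2))  # dim .. bright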
The proximity sensor 616, also referred to as a distance sensor, is typically provided on the front panel of the electronic device 600 and is used to collect the distance between the user and the front of the device. In one embodiment, when the proximity sensor 616 detects that this distance gradually decreases, the processor 601 controls the touch display screen 605 to switch from the on-screen state to the off-screen state; when the proximity sensor 616 detects that the distance gradually increases, the processor 601 controls the touch display screen 605 to switch from the off-screen state back to the on-screen state.
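The screen-state switching just described reduces to a small state machine; here is a minimal sketch under the assumption that the sensor reports distance samples in centimeters:

    # Turn the screen off while the user moves closer, back on while the
    # user moves away; keep the previous state when the distance is steady.

    def screen_state(prev_state: str, prev_cm: float, cur_cm: float) -> str:
        if cur_cm < prev_cm:
            return "off"                 # approaching the face
        if cur_cm > prev_cm:
            return "on"                  # moving away again
        return prev_state

    state = "on"
    for prev, cur in [(10.0, 4.0), (4.0, 2.0), (2.0, 8.0)]:
        state = screen_state(state, prev, cur)
        print(state)                     # off, off, on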
Those skilled in the art will appreciate that the structure shown in fig. 6 does not constitute a limitation of the electronic device 600, which may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 604 including instructions, executable by the processor 620 of the electronic device 600 to perform the image selection method described above, the method comprising: extracting a background area image from an image to be selected, wherein the image to be selected comprises a user image and the background area image; screening, from the background area images, a first background image that meets a first preset condition based on a first color parameter and a sharpness parameter; screening, from the first background images, a second background image that meets a second preset condition based on the object images contained in each first background image; and taking the image to be selected corresponding to the second background image as the target image. Optionally, the above instructions may also be executed by the processor 620 of the electronic device 600 to perform the other steps involved in the above-described exemplary embodiments. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
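To make the two-stage screening concrete, here is a self-contained Python sketch. The person mask, the choice of mean brightness as the color parameter, the Laplacian-variance sharpness proxy, and all thresholds are illustrative assumptions standing in for the segmentation and object-analysis models the method would actually use:

    import numpy as np

    def background_region(img: np.ndarray, person_mask: np.ndarray) -> np.ndarray:
        # Zero out the user region, keeping only the background area image.
        return img * (~person_mask)[..., None]

    def sharpness(gray: np.ndarray) -> float:
        # Variance of a simple Laplacian as a sharpness proxy.
        lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
               + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
        return float(lap.var())

    def passes_first_condition(bg, min_brightness=60.0, min_sharp=20.0) -> bool:
        # First screen: a color parameter (mean brightness here) plus sharpness.
        gray = bg.mean(axis=2)
        return gray[gray > 0].mean() > min_brightness and sharpness(gray) > min_sharp

    def passes_second_condition(bg, max_clutter=0.25) -> bool:
        # Second screen, standing in for object analysis: reject backgrounds
        # where too many pixels are high-contrast "clutter".
        gray = bg.mean(axis=2)
        edges = np.abs(np.diff(gray, axis=1)) > 30
        return edges.mean() < max_clutter

    rng = np.random.default_rng(0)
    img = rng.integers(100, 140, size=(64, 64, 3)).astype(float)
    mask = np.zeros((64, 64), dtype=bool)
    mask[16:48, 24:40] = True                 # stand-in person mask
    bg = background_region(img, mask)
    keep = passes_first_condition(bg) and passes_second_condition(bg)
    print("kept as target image" if keep else "rejected")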
In an exemplary embodiment, there is also provided an application/computer program product comprising one or more instructions executable by the processor 620 of the electronic device 600 to perform the image analysis method described above, the method comprising: selecting at least one target image based on the face recognition result of at least one original image, wherein the target image comprises a human body area; performing semantic segmentation on the target image and extracting a background area in the target image; and extracting a first feature corresponding to the background area and calculating a first analysis result of the background area based on the first feature. Optionally, the above instructions may also be executed by the processor 620 of the electronic device 600 to perform the other steps involved in the above-described exemplary embodiments. Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
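A compact sketch of this analysis flow follows. Face detection and semantic segmentation are stubbed with a fixed mask, the first and third features are mean-color vectors, and the matching degree is a cosine similarity; none of these concrete choices is prescribed by the application, they merely make the steps executable:

    import numpy as np

    def mean_color(img: np.ndarray, mask: np.ndarray) -> np.ndarray:
        # Per-channel mean color over a region (the color parameter).
        return img[mask].mean(axis=0)

    def matching_degree(f1: np.ndarray, f3: np.ndarray) -> float:
        # Cosine similarity between the background (first) feature and
        # the human-body (third) feature.
        return float(f1 @ f3 / (np.linalg.norm(f1) * np.linalg.norm(f3) + 1e-9))

    def first_analysis_result(img, body_mask, threshold=0.9) -> bool:
        bg_mask = ~body_mask                  # background = everything but the body
        f1 = mean_color(img, bg_mask)         # first feature
        f3 = mean_color(img, body_mask)       # third feature
        # Assumed convention: a background whose color is too close to the
        # body color fails the preset target-image condition.
        return matching_degree(f1, f3) < threshold

    rng = np.random.default_rng(1)
    img = rng.integers(0, 255, size=(32, 32, 3)).astype(float)
    body = np.zeros((32, 32), dtype=bool)
    body[8:24, 10:22] = True                  # stand-in segmentation mask
    print("meets preset condition:", first_analysis_result(img, body))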
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (7)

1. A method of image analysis, the method comprising:
selecting at least one target image based on the face recognition result of at least one original image, wherein the target image comprises a human body area;
performing semantic segmentation on the target image, and extracting a background area in the target image;
extracting a first feature corresponding to the background area, and calculating a first analysis result of the background area based on the first feature, wherein the first analysis result is used for representing whether the color information of the original image corresponding to the background area meets the standard of a preset target image;
the calculating a first analysis result of the background area based on the first feature comprises:
calculating the matching degree of the first feature and a third feature corresponding to the human body area, wherein the first feature and the third feature comprise color parameters; and determining the first analysis result based on the matching degree;
the method further comprising:
extracting at least one object area in the target image;
extracting a second feature corresponding to the object area, and determining type information and/or attribute information of the object based on the second feature;
determining a second analysis result based on the type information and/or the attribute information, wherein the second analysis result is used for representing whether the original image corresponding to the object area meets the standard of the preset target image; and
determining a third analysis result of the target image based on the first analysis result and the second analysis result, wherein the third analysis result is used for representing whether the original image corresponding to the background area and the object area meets the standard of the preset target image.
2. The method of claim 1, wherein the attribute information includes at least one of color information, quantity information, and size information.
3. The method of claim 1, wherein the calculating the first analysis result of the background area based on the first feature comprises:
determining, based on the first feature, the color categories and the number of colors corresponding to the background area; and
determining the first analysis result based on the color categories and the number of colors.
4. The method of claim 1, wherein the selecting at least one target image based on the face recognition result of at least one original image comprises:
detecting whether the original image contains a face image by using a preset face recognition model;
when it is determined that the original image contains a face image, acquiring the size ratio of the face image to the corresponding original image and the position of the face image in the original image; and
screening an original image meeting preset conditions as the target image based on the size ratio and the position.
5. An image analysis apparatus, the apparatus comprising:
a recognition module, configured to select at least one target image based on the face recognition result of at least one original image, wherein the target image comprises a human body area;
an extraction module, configured to perform semantic segmentation on the target image and extract a background area in the target image;
a computing module, configured to extract a first feature corresponding to the background area and calculate a first analysis result of the background area based on the first feature, wherein the first analysis result is used for representing whether the original image corresponding to the background area meets the standard of a preset target image;
wherein the computing module is specifically configured to calculate the matching degree of the first feature and a third feature corresponding to the human body area, the first feature and the third feature comprising color parameters, and
to determine the first analysis result based on the matching degree;
a second module, configured to extract at least one object area in the target image, extract a second feature corresponding to the object area, determine type information and/or attribute information of the object based on the second feature, and determine a second analysis result based on the type information and/or the attribute information; and
a third module, configured to determine a third analysis result of the target image based on the first analysis result and the second analysis result, wherein the third analysis result is used for representing whether the original image corresponding to the background area and the object area meets the standard of the preset target image.
6. An electronic device, comprising:
a memory for storing executable instructions; and
a processor configured to communicate with the memory to execute the executable instructions so as to perform the operations of the image analysis method of any one of claims 1-4.
7. A computer-readable storage medium storing computer-readable instructions, wherein the instructions, when executed, perform the operations of the image analysis method of any one of claims 1-4.
CN202010121619.3A 2020-02-26 2020-02-26 Image analysis method, device, electronic equipment and medium Active CN111445439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010121619.3A CN111445439B (en) 2020-02-26 2020-02-26 Image analysis method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111445439A (en) 2020-07-24
CN111445439B (en) 2024-05-07

Family

ID=71648814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010121619.3A Active CN111445439B (en) 2020-02-26 2020-02-26 Image analysis method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111445439B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269141B (en) * 2021-06-18 2023-09-22 浙江机电职业技术学院 Image processing method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887518A (en) * 2010-06-17 2010-11-17 北京交通大学 Human detecting device and method
CN103973977A (en) * 2014-04-15 2014-08-06 联想(北京)有限公司 Blurring processing method and device for preview interface and electronic equipment
CN105894458A (en) * 2015-12-08 2016-08-24 乐视移动智能信息技术(北京)有限公司 Processing method and device of image with human face
CN105981368A (en) * 2014-02-13 2016-09-28 谷歌公司 Photo composition and position guidance in an imaging device
CN106331492A (en) * 2016-08-29 2017-01-11 广东欧珀移动通信有限公司 Image processing method and terminal
CN107590461A (en) * 2017-09-12 2018-01-16 广东欧珀移动通信有限公司 Face identification method and Related product
CN108616689A (en) * 2018-04-12 2018-10-02 Oppo广东移动通信有限公司 High-dynamic-range image acquisition method, device based on portrait and equipment

Also Published As

Publication number Publication date
CN111445439A (en) 2020-07-24

Similar Documents

Publication Publication Date Title
CN111541907B (en) Article display method, apparatus, device and storage medium
CN112870707B (en) Virtual object display method in virtual scene, computer device and storage medium
CN111506758B (en) Method, device, computer equipment and storage medium for determining article name
CN112749613B (en) Video data processing method, device, computer equipment and storage medium
CN110796005A (en) Method, device, electronic equipment and medium for online teaching monitoring
CN110290426B (en) Method, device and equipment for displaying resources and storage medium
CN110765525B (en) Method, device, electronic equipment and medium for generating scene picture
CN112269559B (en) Volume adjustment method and device, electronic equipment and storage medium
CN110675473B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN111105474B (en) Font drawing method, font drawing device, computer device and computer readable storage medium
CN112308103B (en) Method and device for generating training samples
CN111327819A (en) Method, device, electronic equipment and medium for selecting image
CN109754439B (en) Calibration method, calibration device, electronic equipment and medium
CN112860046B (en) Method, device, electronic equipment and medium for selecting operation mode
CN111445439B (en) Image analysis method, device, electronic equipment and medium
CN111353946A (en) Image restoration method, device, equipment and storage medium
CN110853124B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN112135191A (en) Video editing method, device, terminal and storage medium
CN111539795A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112989198B (en) Push content determination method, device, equipment and computer-readable storage medium
CN115798417A (en) Backlight brightness determination method, device, equipment and computer readable storage medium
CN113209610B (en) Virtual scene picture display method and device, computer equipment and storage medium
CN114925667A (en) Content classification method, device, equipment and computer readable storage medium
CN112560472B (en) Method and device for identifying sensitive information
CN112907702A (en) Image processing method, image processing device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant