CN114638817A - Image segmentation method and device, electronic equipment and storage medium

Info

Publication number
CN114638817A
CN114638817A (application CN202210322862.0A)
Authority
CN
China
Prior art keywords
image
human body
target
movement
body frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210322862.0A
Other languages
Chinese (zh)
Inventor
陈如婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202210322862.0A
Publication of CN114638817A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image segmentation method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a movement parameter corresponding to each human body frame in a first global image, wherein the movement parameter is used for representing a movement rule or a movement trend corresponding to the human body frame; scaling, for each human body frame, the human body frame according to the movement parameter to obtain a target frame corresponding to the human body frame; determining a first target image according to the target frame corresponding to each human body frame and the first global image; and performing image segmentation on the first target image to obtain a human body segmentation result of the first global image. The embodiments of the present disclosure improve the image segmentation effect when the human body moves.

Description

Image segmentation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image segmentation method and apparatus, an electronic device, and a storage medium.
Background
Background segmentation is an important problem in the fields of computer vision and smart home. Background segmentation models can be used in many applications. For example, in a home entertainment scene where multiple persons interact, the person images and the background can be separated based on a background segmentation model, so that a monotonous background can be replaced. In background segmentation, a region of interest (e.g., a region including a human image) is generally selected from a global image as a target image, and the target image is then segmented.
In the image segmentation process, under the influence of human body motion, the portrait may leave the target image or the detail of the portrait segmentation may be poor, so that the image segmentation effect is degraded. How to improve the image segmentation effect while the human body is moving is therefore a problem to be solved urgently.
Disclosure of Invention
The disclosure provides an image segmentation method and device, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided an image segmentation method including: acquiring a movement parameter corresponding to each human body frame in a first global image, wherein the movement parameter is used for representing a movement rule or a movement trend corresponding to the human body frame; scaling, for each human body frame, the human body frame according to the movement parameter to obtain a target frame corresponding to the human body frame; determining a first target image according to the target frame corresponding to each human body frame and the first global image; and performing image segmentation on the first target image to obtain a human body segmentation result of the first global image.
The image segmentation method provided by the embodiments of the present disclosure can be applied to image segmentation in both single-person and multi-person scenes. In the embodiments of the present disclosure, the size of the target image can be adjusted in real time according to the movement rule or movement trend of the human body, so that the target image follows the movement of the human body in time. This reduces both the probability that the portrait leaves the target image and the probability that the proportion of pixels occupied by the portrait in the target image is too small during the movement of the human body, effectively improving the image segmentation effect when the human body moves.
In a possible implementation manner, the acquiring a movement parameter corresponding to each human body frame in the first global image includes: performing target detection on the first global image to obtain a target detection result of the first global image, wherein the target detection result is used for indicating the position of a human body frame included in the first global image; determining scene information of the first global image and distance information corresponding to each human body frame in the first global image according to a target detection result of the first global image, wherein the scene information is used for indicating whether the first global image is a global image in a single-person scene or a global image in a multi-person scene, the distance information is used for indicating a distance between a human body in the human body frame and first image acquisition equipment, and the first image acquisition equipment is used for acquiring the first global image; and for each human body frame, obtaining the movement parameters corresponding to the human body frame according to the scene information, the distance information corresponding to the human body frame and a first preset mapping relation, wherein the first preset mapping relation is used for indicating the movement parameters corresponding to the human body frames with different distances in different scenes.
In the embodiment of the disclosure, the movement parameters corresponding to the human body frame in the first global image are obtained based on the scene information of the first global image and the distance information corresponding to the human body frame in the first global image, so that a condition is provided for scaling the human body frame, and the improvement of the image segmentation effect when the human body moves is facilitated.
In one possible implementation, the method further includes: obtaining a first video, wherein the first video corresponds to a first scene, the first scene is a single-person scene or a multi-person scene, the first video is used for recording the movement of a single person or multiple persons within a first movement range, the first movement range is at a first distance from a second image acquisition device, and the second image acquisition device is used for acquiring the first video; performing limb tracking on a target person in the first video to obtain the position of a human body frame corresponding to the target person in each frame image of the first video; determining the movement speed, the movement amplitude, and a second distance of the target person according to the position of the human body frame corresponding to the target person in each frame image of the first video, wherein the second distance is used for indicating the distance between the target person and a reference position of the first video; obtaining a first movement parameter according to the movement speed and the movement amplitude of the target person; and establishing the first preset mapping relation based on the first scene, the second distance, and the first movement parameter.
In the embodiment of the disclosure, the historical images are analyzed through limb tracking, so that the mapping relation between the positions of the scene and the target person and the movement parameters is obtained, and a basis is provided for determining the movement parameters corresponding to the human body frame in the first global image.
In one possible implementation, the method further includes: in the case that the first scene is a single-person scene, setting the movement frequency of the person in the first video to be greater than a first frequency threshold and the length of the projection of the first movement range in the x-axis direction of the camera coordinate system of the second image acquisition device to be greater than a first movement threshold; and in the case that the first scene is a multi-person scene, setting the movement frequency of the persons in the first video to be less than or equal to a second frequency threshold and the length of the projection of the first movement range in the x-axis direction of the camera coordinate system of the second image acquisition device to be less than or equal to a second movement threshold; wherein the second frequency threshold is less than or equal to the first frequency threshold, and the second movement threshold is less than or equal to the first movement threshold.
In the embodiment of the disclosure, a larger movement range and a larger movement frequency are set for a single-person scene, and a smaller movement range and a smaller movement frequency are set for a multi-person scene, so that the movement rule of a person is fitted with an actual scene, the accuracy of the movement parameters corresponding to the human body frame is improved, and the image segmentation effect when the human body moves is favorably improved.
In a possible implementation manner, the scaling of the human body frame according to the movement parameter corresponding to the human body frame to obtain the target frame corresponding to the human body frame includes: determining a scaling coefficient of the human body frame according to the movement parameter corresponding to the human body frame and a second preset mapping relation; and scaling the human body frame according to the scaling coefficient to obtain the target frame corresponding to the human body frame, wherein the second preset mapping relation is used for indicating the scaling coefficients corresponding to different movement parameters.
In a possible implementation manner, the determining a first target image according to the target frame corresponding to each human body frame and the first global image includes: merging the target frames corresponding to the human body frames to obtain merged frames; obtaining the first target image according to the merging frame and the first global image; the first target image corresponds to a merging frame with the smallest area in merging frames capable of covering target frames corresponding to all human body frames.
In the embodiment of the disclosure, the first global image is acquired based on the merging frame with the smallest area in the merging frames capable of covering the target frames corresponding to all the human body frames, so as to obtain the first target image. Therefore, the possibility that the proportion of the portrait in the first target image is too low can be reduced, and the image segmentation effect is favorably improved.
In a possible implementation manner, the performing image segmentation on the first target image to obtain a human body segmentation result of the first global image includes: acquiring a second target image corresponding to a second global image, wherein the second global image is a previous frame image of the first global image in the video, and the second target image represents a target image adopted when a human body segmentation result of the second global image is acquired; determining a magnitude of movement of the first target image relative to the second target image; and under the condition that the movement amplitude of the first target image relative to the second target image is larger than a first amplitude threshold value, carrying out image segmentation on the first target image to obtain a human body segmentation result of the first global image.
Therefore, the position of the target image is kept relatively stable, and the stability of the segmentation effect is improved.
In one possible implementation, the method further includes: and under the condition that the movement amplitude of the first target image relative to the second target image is smaller than or equal to the first amplitude threshold value, performing image segmentation by using the second target image to obtain a human body segmentation result of the first global image.
In a possible implementation manner, the performing image segmentation on the first target image to obtain a human body segmentation result of the first global image includes: acquiring a second target image corresponding to a second global image, wherein the second global image and the first global image belong to the same video, the second global image is a previous frame image of the first global image, and the second target image represents a target image adopted when a human body segmentation result of the second global image is acquired; determining a coverage of the first target image relative to the second target image; and under the condition that the coverage rate is smaller than a second amplitude threshold value, carrying out image segmentation on the first target image to obtain a human body segmentation result of the first global image.
Therefore, the target image can be updated in time, and the image segmentation effect is improved.
In one possible implementation, the method further includes:
and under the condition that the coverage rate is greater than or equal to the second amplitude threshold value, performing image segmentation by using the second target image to obtain a human body segmentation result of the first global image.
According to an aspect of the present disclosure, there is provided an image segmentation apparatus including:
the first obtaining module is used for obtaining a movement parameter corresponding to each human body frame in a first global image, and the movement parameter is used for representing a movement rule or a movement trend corresponding to the human body frame;
the scaling module is used for scaling, for each human body frame, the human body frame according to the movement parameter acquired by the first obtaining module, to obtain a target frame corresponding to the human body frame;
the first determining module is used for determining a first target image according to the target frame corresponding to each human body frame obtained by the scaling module and the first global image;
and the first segmentation module is used for carrying out image segmentation on the first target image determined by the first determination module to obtain a human body segmentation result of the first global image.
In a possible implementation manner, the first obtaining module is further configured to:
performing target detection on the first global image to obtain a target detection result of the first global image, wherein the target detection result is used for indicating the position of a human body frame included in the first global image;
determining scene information of the first global image and distance information corresponding to each human body frame in the first global image according to a target detection result of the first global image, wherein the scene information is used for indicating whether the first global image is a global image in a single-person scene or a global image in a multi-person scene, the distance information is used for indicating a distance between a human body in the human body frame and first image acquisition equipment, and the first image acquisition equipment is used for acquiring the first global image;
and aiming at each human body frame, obtaining the movement parameters corresponding to the human body frame according to the scene information, the distance information corresponding to the human body frame and a first preset mapping relation, wherein the first preset mapping relation is used for indicating the movement parameters corresponding to the human body frames with different distances in different scenes.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring a first video, wherein the first video corresponds to a first scene, the first scene is a single-person scene or a multi-person scene, the first video is used for recording the movement of a single person or multiple persons within a first movement range, the first movement range is at a first distance from a second image acquisition device, and the second image acquisition device is used for acquiring the first video;
the tracking module is used for carrying out limb tracking on a target person in the first video to obtain the position of a human body frame corresponding to the target person in each frame of image of the first video;
a second determining module, configured to determine, according to a position of a human frame corresponding to the target person in each frame of image of the first video, a moving speed, a moving amplitude, and a second distance of the target person, where the second distance is used to indicate a distance between the target person and a reference position of the first video;
the third acquisition module is used for acquiring a first movement parameter according to the movement speed and the movement amplitude of the target person;
and the establishing module is used for establishing the first preset mapping relation based on the first scene, the second distance and the first moving parameter.
In one possible implementation, the apparatus further includes:
the first setting module is used for setting that the movement frequency of a person in a first video is greater than a first frequency threshold value and the length of the projection of the first movement range in the x-axis direction of the camera coordinate system of the second image acquisition device is greater than a first movement threshold value under the condition that the first scene is a single-person scene;
a second setting module, configured to set, when the first scene is a multi-person scene, a movement frequency of a person in a first video to be less than or equal to a second frequency threshold, and a length of a projection of the first movement range in an x-axis direction of a camera coordinate system of the second image capturing device to be less than or equal to a second movement threshold;
wherein the second frequency threshold is less than or equal to the first frequency threshold, and the second movement threshold is less than or equal to the first movement threshold.
In one possible implementation, the scaling module is further configured to:
determining a scaling coefficient of the human body frame according to the movement parameter corresponding to the human body frame and a second preset mapping relation;
and scaling the human body frame according to the scaling coefficient to obtain a target frame corresponding to the human body frame, wherein the second preset mapping relation is used for indicating the scaling coefficients corresponding to different movement parameters.
In one possible implementation manner, the first determining module is further configured to:
merging the target frames corresponding to the human body frames to obtain merged frames;
obtaining the first target image according to the merging frame and the first global image;
the first target image corresponds to a merging frame with the smallest area in merging frames capable of covering target frames corresponding to all human body frames.
In one possible implementation, the first segmentation module is further configured to:
acquiring a second target image corresponding to a second global image, wherein the second global image is a previous frame image of the first global image in the video, and the second target image represents a target image adopted when a human body segmentation result of the second global image is acquired;
determining a magnitude of movement of the first target image relative to the second target image;
and under the condition that the movement amplitude of the first target image relative to the second target image is larger than a first amplitude threshold value, performing image segmentation on the first target image to obtain a human body segmentation result of the first global image.
In one possible implementation, the apparatus further includes:
and the second segmentation module is used for performing image segmentation by adopting the second target image under the condition that the movement amplitude of the first target image relative to the second target image is smaller than or equal to the first amplitude threshold value, so as to obtain a human body segmentation result of the first global image.
In one possible implementation, the first segmentation module is further configured to:
acquiring a second target image corresponding to a second global image, wherein the second global image and the first global image belong to the same video, the second global image is a previous frame image of the first global image, and the second target image represents a target image adopted when a human body segmentation result of the second global image is acquired;
determining a coverage of the first target image relative to the second target image;
and under the condition that the coverage rate is smaller than a second amplitude threshold value, carrying out image segmentation on the first target image to obtain a human body segmentation result of the first global image.
In one possible implementation, the apparatus further includes:
and the third segmentation module is used for performing image segmentation by adopting the second target image under the condition that the coverage rate is greater than or equal to the second amplitude threshold value to obtain a human body segmentation result of the first global image.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of an image segmentation method according to an embodiment of the present disclosure;
FIG. 2 illustrates an exemplary diagram of a human body box and a target box in an embodiment of the disclosure;
FIG. 3 illustrates an exemplary diagram of a target box and a merge box in an embodiment of the disclosure;
FIG. 4 shows a block diagram of an image segmentation apparatus according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure;
FIG. 6 shows a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In scenes such as home entertainment, singing with a microphone, online classes, and online conferences, a background matching the scene needs to be set to improve the sense of immersion, which requires segmenting the human body and the background from a global image acquired by an image acquisition device (for example, a device with photographing or video recording functions, such as a camera, a video camera, a mobile phone, or a tablet). During segmentation, a target image including a portrait (i.e., an image region corresponding to a human body) is generally selected from the global image, and the target image is then input into a background segmentation model to separate the portrait from the background. The segmentation effect is related to the proportion of pixels occupied by the portrait in the target image. If the selected target image is too small, the portrait easily leaves the target image when the human body moves; if the selected target image is too large, the proportion of the portrait in the target image is too low and the foreground segmentation details are easily poor.
The image segmentation method provided by the embodiments of the present disclosure can be applied to image segmentation in both single-person and multi-person scenes. In the embodiments of the present disclosure, the size of the target image can be adjusted in real time according to the movement rule or movement trend of the human body, so that the target image follows the movement of the human body in time. This reduces both the probability that the portrait leaves the target image and the probability that the proportion of pixels occupied by the portrait in the target image is too small during the movement of the human body, effectively improving the image segmentation effect when the human body moves.
Fig. 1 shows a flow chart of an image segmentation method according to an embodiment of the present disclosure. The image segmentation method may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling a computer readable instruction stored in a memory. Alternatively, the method may be performed by a server. As shown in fig. 1, the image segmentation method includes:
in step S11, a movement parameter corresponding to each human body frame in the first global image is acquired.
The global image is an image containing a person. The global image may include one person or a plurality of persons, which is not limited by the embodiments of the present disclosure. The global image may be obtained by image capture equipment capturing an image of a person in a certain spatial range, may also be an image frame including the person obtained from a video, and may also be obtained by other ways, which is not specifically limited in this embodiment of the present disclosure.
The first global image in step S11 may be used to represent a global image currently to be subjected to image segmentation. In the embodiment of the present disclosure, after the first global image is input into the target detection model in the related art, the target detection model may output a target detection result of the first global image. The target detection result may be used to indicate a position of a human body frame included in the first global image. The target detection model may be a convolutional neural network model, and the structure and the training process of the target detection model are not limited in the embodiment of the present disclosure.
It can be understood that the human bodies included in the first global image correspond one-to-one to the human body frames indicated by the target detection result. Therefore, when one human body is included in the first global image, the movement parameter corresponding to one human body frame may be acquired in step S11. When a plurality of human bodies are included in the first global image, the movement parameters corresponding to the plurality of human body frames may be acquired in step S11.
The movement parameter corresponding to the body frame can be used for representing a movement rule or a movement trend corresponding to the body frame. The movement rule corresponding to the human body frame can be used for reflecting the movement condition of the human body obtained based on historical image analysis. The corresponding movement trend of the human body frame can be used for reflecting the possible future movement situation of the human body.
In one possible implementation manner, the movement parameters of the human body frame include, but are not limited to, parameters reflecting the movement of the human body, such as the movement speed, the movement amplitude, and the movement direction of the human body frame. A specific process of obtaining the movement parameter corresponding to each human body frame will be described in detail below in combination with possible implementations of the embodiments of the present disclosure.
In step S12, for each human body frame, scaling the human body frame according to the movement parameter to obtain a target frame corresponding to the human body frame.
The movement of the human body relative to the image acquisition device can be decomposed into left-right movement and/or front-back movement. When the human body moves left and right relative to the image acquisition device, if the size of the target image is kept unchanged, the situation that the human figure leaves the target image may occur. When a human body approaches the image acquisition device, the human body may leave the shooting view of the image acquisition device, and at this time, if the size of the target image is kept unchanged, a situation that the target image does not include a portrait or includes a portrait incompletely occurs. When the human body is far away from the image acquisition equipment, if the size of the target image is kept unchanged, the proportion of the pixels occupied by the human image in the target image is too small. It is contemplated that the movement parameters of the body frame may represent the movement of the body. Therefore, in the embodiment of the present disclosure, the human body frame may be scaled according to the movement parameter corresponding to the human body frame to obtain the target frame corresponding to the human body frame, so that on one hand, the possibility that the human body leaves the target frame when moving is reduced, and on the other hand, the possibility that the ratio of the portrait in the target frame is too small is reduced.
Scaling the body frame in the embodiments of the present disclosure includes contracting the body frame or expanding the body frame. The target frame is obtained by contracting the human body frame, so that the occupation ratio of the portrait in the target frame can be improved, the accuracy of the edge in the subsequent image segmentation can be improved, and the image segmentation effect can be improved; the human body frame is expanded to obtain the target frame, so that the possibility that the human body moves out of the target frame can be reduced, and the human body is still in the target frame even if the human body moves correspondingly within a period of time, and the image segmentation effect is improved.
In a possible implementation manner, the scaling of the human body frame according to the movement parameter corresponding to the human body frame in step S12 to obtain the target frame corresponding to the human body frame includes: determining a scaling coefficient of the human body frame according to the movement parameter corresponding to the human body frame and a second preset mapping relation; and scaling the human body frame according to the scaling coefficient to obtain the target frame corresponding to the human body frame.
The second preset mapping relationship may be used to indicate the scaling coefficients corresponding to different movement parameters, and may be set as needed or empirically. For example, the larger the movement amplitude or the movement speed, the larger the corresponding scaling coefficient. When the moving direction is away from the image acquisition device, the scaling coefficient is smaller than 1, and the larger the receding amplitude, the smaller the scaling coefficient; when the moving direction is toward the image acquisition device, the scaling coefficient is larger than 1, and the larger the approaching amplitude, the larger the scaling coefficient. Taking movement away from the image acquisition device as an example: as the human body gradually moves away, the proportion of the portrait in the global image decreases, so the human body frame needs to be correspondingly reduced; and as the human body moves from near to far, the reduction of the human body frame becomes larger, that is, the scaling coefficient representing the reduction of the human body frame becomes smaller. Correspondingly, when the moving direction is toward the image acquisition device, the proportion of the human body in the global image increases, the human body frame is enlarged, and the scaling coefficient increases.
The image coordinate system of the global image takes the center of the global image as the coordinate origin, the x axis of the image coordinate system is parallel to the upper side and the lower side of the global image, and the y axis of the image coordinate system is parallel to the left side and the right side of the global image. In one possible implementation, the scaling factor includes a scaling factor in an x-axis direction of an image coordinate system of the global image and a scaling factor in a y-axis direction of the image coordinate system of the global image. For example, in a single-person scene, a human body moves more left and right and moves less front and back, that is, the moving range of the human body frame in the x-axis direction of the image coordinate system of the global image is larger, and the moving range in the y-axis direction of the image coordinate system of the global image is smaller, so that the scaling factor in the x-axis direction of the image coordinate system of the global image set for the human body frame is larger, and the scaling factor in the y-axis direction is smaller. In one example, a person is jumping left and right facing the image capture device, where the zoom factor in the x-axis direction of the image coordinate system of the global image is 1.2 and the zoom factor in the y-axis direction is 1.0.
FIG. 2 shows an exemplary diagram of a human body box and a target box in an embodiment of the disclosure. As shown in fig. 2, the scaling factor in the x-axis direction of the image coordinate system of the global image is 2, the scaling factor in the y-axis direction of the image coordinate system of the global image is 1.5, and after scaling, the length of the target frame is 2 times the length of the human body frame in the x-axis direction of the image coordinate system of the global image, and the width of the target frame is 1.5 times the width of the human body frame in the y-axis direction of the image coordinate system of the global image.
In one possible implementation, the scaling factor includes an expansion factor and a contraction factor. When the scaling factor is greater than or equal to 1, the scaling factor may be referred to as an expansion factor, and the target frame may be obtained by expanding the human body frame at this time, that is, the area of the target frame is greater than or equal to the area of the human body frame. When the scaling factor is smaller than 1, the scaling factor may be referred to as a contraction factor, and the target frame may be obtained by contracting the human body frame at this time, that is, the area of the target frame is smaller than the area of the human body frame.
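To make the scaling step concrete, the following is a minimal Python sketch, not part of the disclosure: the (x_min, y_min, x_max, y_max) box format, the helper names, and the example mapping from movement parameters to scaling coefficients are all assumptions for illustration.

```python
# Minimal sketch of the box-scaling step, assuming boxes are
# (x_min, y_min, x_max, y_max) tuples in image coordinates.

def scale_box(box, sx, sy):
    """Scale a box about its center; sx/sy >= 1 expands the box
    (expansion factor), sx/sy < 1 contracts it (contraction factor)."""
    x_min, y_min, x_max, y_max = box
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) / 2.0 * sx
    half_h = (y_max - y_min) / 2.0 * sy
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

def lookup_scale_factors(move_amplitude, moving_away):
    # Hypothetical "second preset mapping": larger lateral movement
    # gives a larger x-axis coefficient; moving away from the camera
    # gives coefficients below 1. Values here are illustrative only.
    sx = 1.2 if move_amplitude > 100 else 1.0
    sy = 1.0
    if moving_away:
        sx, sy = sx * 0.8, sy * 0.8
    return sx, sy

body_box = (300, 200, 420, 560)
sx, sy = lookup_scale_factors(move_amplitude=120, moving_away=False)
target_box = scale_box(body_box, sx, sy)  # expanded mainly along x
```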
In step S13, a first target image is determined according to the target frame corresponding to each human body frame and the first global image.
The first target image may represent an image that is subsequently used for image segmentation. In the embodiment of the present disclosure, the first global image may be cropped according to the position of the target frame corresponding to each human body frame, so as to obtain the first target image.
In a possible implementation manner, the step S13 may include merging the target frames corresponding to the human body frames to obtain a merged frame; and obtaining the first target image according to the merging frame and the first global image.
The merging box represents the merging result of the target box corresponding to each human body box. The first target image corresponds to a merged box having the smallest area among merged boxes capable of covering target boxes corresponding to all human body boxes.
Step S13 will be described below with reference to a single-person scenario and a multi-person scenario, respectively.
In a single-person scene, the movement parameter corresponding to one human body frame may be acquired in step S11, and the target frame corresponding to that human body frame may be obtained in step S12, so that in step S13, the first target image may be cropped from the first global image according to the position of that target frame in the first global image.
In a multi-person scene, the movement parameters corresponding to a plurality of human body frames may be obtained in step S11, and the target frames corresponding to the plurality of human body frames may be obtained in step S12, so that in step S13, the target frames need to be merged to obtain a merged frame, and the first target image is then cropped from the first global image according to the position of the merged frame. Considering that a too-low proportion of portrait pixels in the first target image may cause poor foreground segmentation detail, the area of the merge box should not be too large. Therefore, in the embodiment of the present disclosure, the first global image is cropped based on the merge box with the smallest area among the merge boxes that can cover the target boxes corresponding to all the human body boxes, so as to obtain the first target image. This reduces the possibility that the proportion of the portrait in the first target image is too low, which is beneficial to improving the image segmentation effect.
It should be noted that, in a multi-user scenario, it is optional to merge multiple target boxes to obtain a merged box. That is, in a multi-person scene, after target frames corresponding to a plurality of human body frames are obtained, target images may be obtained based on the target frames, and then the target images are subjected to image segmentation, so that a human body segmentation result of the first global image may also be obtained.
FIG. 3 shows an exemplary diagram of a target box and a merge box in an embodiment of the disclosure. As shown in fig. 3, three target frames are obtained based on the first global image, each target frame corresponding to a human body. And combining the three target frames to obtain a combined frame, wherein the combined frame can cover all the target frames. Taking the two merge boxes shown in fig. 3 as an example, the merge box with the smallest area is selected from all the merge boxes for image cropping, so that the first target image can be obtained.
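Since the merge boxes are axis-aligned, the merge box with the smallest area that covers all target boxes is simply their common bounding rectangle. A minimal sketch follows, under the same assumed box format as above:

```python
# Sketch of the merging step: the smallest axis-aligned box covering
# all target boxes is their common bounding rectangle.

def merge_boxes(boxes):
    x_mins, y_mins, x_maxs, y_maxs = zip(*boxes)
    return (min(x_mins), min(y_mins), max(x_maxs), max(y_maxs))

target_boxes = [(50, 80, 200, 400), (220, 100, 350, 420), (400, 90, 520, 410)]
merge_box = merge_boxes(target_boxes)  # (50, 80, 520, 420), covers all three
```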
In the related art, for a multi-person scene, a target image corresponding to each human body needs to be acquired based on the target frame corresponding to that human body, and image segmentation is then performed on each of these target images. The image processing in a multi-person scene is therefore multiplied, which puts pressure on the chip performing the segmentation: the computing power of the chip becomes insufficient, the processing speed drops, and the user cannot run other functional modules on the chip in parallel, greatly degrading the user experience.
In the embodiment of the present disclosure, all the target frames are combined in a multi-person scene, so as to obtain one target image, and then only one target image needs to be subjected to image segmentation processing. Therefore, the image segmentation processing required to be carried out for multiple times in the multi-person scene is converted into single image segmentation processing, so that the resources and time consumed in the image segmentation processing in the multi-person scene and the single-person scene are equivalent, the efficiency is improved, the resources are saved, and the user experience is improved.
In step S14, image segmentation is performed on the first target image to obtain a human body segmentation result of the first global image.
In the embodiment of the present disclosure, after the first target image is input into the background segmentation model, a human body segmentation result of the first target image may be obtained, which indicates whether each pixel point in the first target image is human body or non-human body. According to the human body segmentation result of the first target image and the position of the first target image in the first global image, the human body segmentation result of the first global image can be obtained, which indicates whether each pixel point in the first global image is human body or non-human body. For the background segmentation model, reference may be made to the related art; it may be, for example, a neural network model, and details are not repeated here.
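As an illustrative sketch of this last step, a mask predicted on the cropped target image can be pasted back at the crop offset to obtain a full-size mask. The NumPy mask representation and the segment_fn callable are assumptions for illustration, not the disclosure's background segmentation model.

```python
import numpy as np

def segment_global(global_img, crop_box, segment_fn):
    """Run segmentation on the cropped target image and map the
    per-pixel result back into a mask for the whole global image."""
    x_min, y_min, x_max, y_max = crop_box
    target_img = global_img[y_min:y_max, x_min:x_max]
    target_mask = segment_fn(target_img)  # assumed: 1 = human, 0 = background
    global_mask = np.zeros(global_img.shape[:2], dtype=np.uint8)
    global_mask[y_min:y_max, x_min:x_max] = target_mask
    return global_mask
```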
In the embodiment of the disclosure, the size of the target image can be adjusted in real time according to the movement parameters of the human body, so that the target image can timely follow the movement of the human body, the probability that the human body leaves the target image in the movement process of the human body and the probability that the pixel proportion occupied by the portrait in the target image is too small are reduced, and the image segmentation effect when the human body moves is effectively improved.
Whether in a single-person scene or a multi-person scene, if a new target image is adopted for image segmentation every time the human body moves, the segmentation effect becomes unstable. In order to keep the position of the target image relatively stable, in the embodiment of the present disclosure, the target image may be smoothed. The specific procedure of the smoothing process is described in detail below.
In one possible implementation, step S14 may include: acquiring a second target image corresponding to the second global image; determining a magnitude of movement of the first target image relative to the second target image; and under the condition that the movement amplitude of the first target image relative to the second target image is larger than a first amplitude threshold value, carrying out image segmentation on the first target image to obtain a human body segmentation result of the first global image.
The second global image and the first global image belong to the same video, and the second global image is a previous frame image of the first global image. It will be appreciated that the first and second global images are of the same size, resolution, etc.
The second target image represents a target image adopted when the human body segmentation result of the second global image is obtained. And performing image segmentation processing on the second target image to obtain a human body segmentation result of the second global image. The acquisition process of the second target image may refer to the acquisition process of the first target image (step S11 to step S13), and will not be described herein again.
In a possible implementation manner, the moving amplitude of the first target image relative to the second target image may be determined according to a coordinate difference between a preset position (e.g., a lower left corner vertex, an upper right corner vertex, or a central point) of the first target image and a preset position of the second target image. In one example, the coordinates of the vertex at the lower left corner of the first target image in the first global image are (100 ), the coordinates of the vertex at the lower left corner of the second target image in the second global image are (200, 100), and the movement amplitude of the first target image relative to the second target image is 100 pixels.
The first amplitude threshold may be set as needed, for example, the first amplitude threshold may be set to 50 pixels or 150 pixels, etc. When the moving amplitude of the first target image relative to the second target image is greater than the first amplitude threshold, it indicates that the human body has moved by a relatively large amplitude, and at this time, in order to improve the image segmentation effect, the first target image may be subjected to image segmentation to obtain a human body segmentation result of the first global image.
In one possible implementation, the method may further include: and under the condition that the movement amplitude of the first target image relative to the second target image is smaller than or equal to the first amplitude threshold value, performing image segmentation by using the second target image to obtain a human body segmentation result of the first global image.
When the moving amplitude of the first target image relative to the second target image is smaller than or equal to the first amplitude threshold, it indicates that the moving amplitude of the human body is smaller, and at this time, in order to improve the stability of the image subjected to image segmentation, image segmentation may be performed on the second target image to obtain a human body segmentation result of the first global image.
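A minimal sketch of this smoothing decision follows; the lower-left-corner convention and the 50-pixel threshold are example values from above, and the box format is an assumption:

```python
def choose_crop(prev_box, new_box, amplitude_threshold=50):
    """Keep the previous target image unless the new one has moved
    by more than the first amplitude threshold (in pixels)."""
    dx = abs(new_box[0] - prev_box[0])   # lower-left corner x offset
    dy = abs(new_box[1] - prev_box[1])   # lower-left corner y offset
    movement_amplitude = max(dx, dy)
    return new_box if movement_amplitude > amplitude_threshold else prev_box
```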
Considering that the relative movement amplitude between the target images corresponding to the adjacent global images is small under the condition that the human body moves slowly, this may result in that the target images adopted in the image segmentation are not updated in time. In order to update the target image in time, in the embodiment of the present disclosure, the target image may be subjected to update processing. The following describes in detail a specific procedure of the update processing.
In one possible implementation, step S14 may include: acquiring a second target image corresponding to the second global image; determining a coverage of the first target image relative to the second target image; and under the condition that the coverage rate is smaller than a second amplitude threshold value, carrying out image segmentation on the first target image to obtain a human body segmentation result of the first global image.
In one example, a ratio of an overlapping area of the first target image and the second target image to an area of the second target image may be determined as a coverage ratio of the first target image with respect to the second target image.
The second amplitude threshold may be set as desired, for example, the second amplitude threshold may be 40% or 50%, etc. When the coverage rate of the first target image relative to the second target image is smaller than the second amplitude threshold, it indicates that the human body has moved to a larger extent, and the target image for image segmentation needs to be updated.
In one possible implementation, the method may further include: and under the condition that the coverage rate is greater than or equal to the second amplitude threshold value, performing image segmentation by using the second target image to obtain a human body segmentation result of the first global image.
When the coverage rate of the first target image relative to the second target image is greater than or equal to the second amplitude threshold, it indicates that the movement amplitude is small. At this time, in order to maintain the stability of the portrait, image segmentation may be performed on the second target image to obtain the human body segmentation result of the first global image. In this way, the target image used when segmenting the previous frame image is also used as the target image when segmenting the current frame image, so that the target image used for image segmentation is the same image, the corresponding image segmentation result does not change, and the obtained portrait does not change. Shaking of the portrait is thus avoided, the stability of the portrait is maintained, and the user experience is improved.
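A sketch of the coverage test described above, computing the overlap area of the two target images divided by the area of the second (previous) one; the 50% threshold is one of the example values given earlier, and the box format is an assumption:

```python
def coverage(new_box, prev_box):
    """Overlap area of new_box and prev_box divided by prev_box area."""
    overlap_w = max(0, min(new_box[2], prev_box[2]) - max(new_box[0], prev_box[0]))
    overlap_h = max(0, min(new_box[3], prev_box[3]) - max(new_box[1], prev_box[1]))
    prev_area = (prev_box[2] - prev_box[0]) * (prev_box[3] - prev_box[1])
    return overlap_w * overlap_h / prev_area

def should_update_target(new_box, prev_box, second_amplitude_threshold=0.5):
    # Update the target image only when coverage drops below the threshold.
    return coverage(new_box, prev_box) < second_amplitude_threshold
```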
The following describes in detail a specific process of obtaining the movement parameter corresponding to each human body frame in the first global image. In consideration of the fact that the first preset mapping relation used for indicating the movement parameters corresponding to the human body frames with different distances in different scenes needs to be used in the process, a process of obtaining the first preset mapping relation is described first.
The first preset mapping relationship comprises a scene, a distance between a person in the global image and a reference position of the video and a movement parameter. In one possible implementation, the method further includes: acquiring a first video; performing limb tracking on a target person in the first video to obtain the position of a human body frame corresponding to the target person in each frame of image of the first video; determining the moving speed, the moving amplitude and the second distance of the target person according to the position of the human body frame corresponding to the target person in each frame of image of the first video; obtaining a first movement parameter according to the movement speed and the movement amplitude of the target person; and establishing the first preset mapping relation based on the first scene, the second distance and the first movement parameter.
The first video corresponds to a first scene, which may be a single-person scene or a multi-person scene. The first video records the movement of a single person or of multiple persons within a first movement range, where the first movement range is located at a first distance from a second image acquisition device, and the second image acquisition device is the device that captures the first video.
In one example, after the second image acquisition device is set up, a person may move within a first movement range at a first distance (for example, 1 meter, 3 meters, or 5 meters) from the device. The second image acquisition device captures a video of the person moving, which serves as the first video. The target person in the first video is then tracked using a limb tracking technique to obtain the position of the corresponding human body frame in each frame of image of the first video. From these positions, the moving speed, the moving amplitude, and the second distance of the target person are determined, and the first movement parameter is obtained. Finally, the first preset mapping relation is established based on the first scene, the second distance, and the first movement parameter.
The second distance indicates the distance between the target person and a reference position of the first video. Specifically, the distance between a preset position of the target person's human body frame (for example, the lower-left vertex, the upper-right vertex, or the center point) and the reference position of the first video may be determined as the second distance corresponding to the target person. The reference position may be a position specified in advance in the first video, such as its lower boundary line or its upper boundary line; correspondingly, the second distance is the distance between the target person and the lower boundary line, or between the target person and the upper boundary line. The magnitude of the second distance characterizes the distance between the target person and the second image acquisition device. When the second distance is the distance to the lower boundary line, a larger second distance indicates that the target person is closer to the second image acquisition device, and a smaller second distance indicates that the target person is farther away. When the second distance is the distance to the upper boundary line, a larger second distance indicates that the target person is farther from the second image acquisition device, and a smaller second distance indicates that the target person is closer.
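By way of illustration, one entry of the first preset mapping relation might be computed from the tracked body-frame positions as sketched below. The (x1, y1, x2, y2) box format, the use of the mean y-coordinate of the center point as the second distance, and the product of speed and amplitude as the movement parameter are assumptions for illustration; the disclosure does not fix these choices.

```python
import numpy as np

# Hedged sketch: one entry of the first preset mapping relation computed
# from tracked body frames. Box format, the distance proxy, and the
# speed-times-amplitude combination are illustrative assumptions.

def build_mapping_entry(scene, boxes, fps):
    """boxes: body frames of the target person across the first video,
    shape (T, 4), each row (x1, y1, x2, y2) in image coordinates."""
    boxes = np.asarray(boxes, dtype=float)
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    steps = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    speed = steps.mean() * fps            # mean displacement per second
    amplitude = np.ptp(centers[:, 0])     # span of horizontal movement
    # Second distance: mean y-coordinate of the center point, standing in
    # for the distance between the target person and the reference line.
    second_distance = centers[:, 1].mean()
    movement_param = speed * amplitude    # one way to combine the two
    return (scene, round(second_distance)), movement_param

# Usage, with tracked_boxes coming from the limb-tracking step:
#   key, param = build_mapping_entry("single", tracked_boxes, fps=30)
#   first_preset_mapping[key] = param
```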
In one possible implementation, the method further includes: when the first scene is a single-person scene, setting the movement frequency of the person in the first video to be greater than a first frequency threshold, and setting the length of the projection of the first movement range in the x-axis direction of the camera coordinate system of the second image acquisition device to be greater than a first movement threshold; and when the first scene is a multi-person scene, setting the movement frequency of the persons in the first video to be less than or equal to a second frequency threshold, and setting the length of the projection of the first movement range in the x-axis direction of the camera coordinate system of the second image acquisition device to be less than or equal to a second movement threshold; wherein the second frequency threshold is less than or equal to the first frequency threshold, and the second movement threshold is less than or equal to the first movement threshold. In one example, the first movement range may be a rectangle, and the shortest distance between its lower edge and the second image acquisition device may be taken as the first distance. In yet another example, the first movement range may be a sector of an annulus (i.e., a portion of a circular ring) centered at the second image acquisition device, in which case the inner radius of the first movement range may be taken as the first distance.
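The collection settings above only fix the inequalities between the thresholds; a minimal sketch with invented numbers might look as follows.

```python
# Illustrative collection settings; every number here is an assumption,
# since the disclosure only fixes the inequalities between thresholds.
FIRST_FREQUENCY_THRESHOLD = 2.0    # movements per second, single-person
FIRST_MOVEMENT_THRESHOLD = 3.0     # x-axis projection length, meters
SECOND_FREQUENCY_THRESHOLD = 1.0   # multi-person
SECOND_MOVEMENT_THRESHOLD = 1.5

assert SECOND_FREQUENCY_THRESHOLD <= FIRST_FREQUENCY_THRESHOLD
assert SECOND_MOVEMENT_THRESHOLD <= FIRST_MOVEMENT_THRESHOLD

def collection_constraints(scene):
    """Bounds on movement frequency and on the x-axis projection length
    of the first movement range, per scene."""
    if scene == "single":
        return dict(freq_min=FIRST_FREQUENCY_THRESHOLD,
                    x_span_min=FIRST_MOVEMENT_THRESHOLD)
    return dict(freq_max=SECOND_FREQUENCY_THRESHOLD,
                x_span_max=SECOND_MOVEMENT_THRESHOLD)
```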
It can be understood that in a single-person scene there are few restrictions on a person's movement and ample movement space, so the movement range is large, whereas in a multi-person scene the movement range is small. In addition, the smaller the first distance, the larger the moving amplitude and moving speed; the larger the first distance, the smaller the moving amplitude and moving speed. Therefore, the movement frequency and movement range set for the person in a first video recorded for a single-person scene are larger than those set for a first video recorded for a multi-person scene. The target person in a multi-person scene may be any one or more of the plurality of persons.
It should be noted that the first frequency threshold, the first movement threshold, the second frequency threshold, and the second movement threshold may be set as needed, and it is only necessary to set the second frequency threshold to be less than or equal to the first frequency threshold, and the second movement threshold to be less than or equal to the first movement threshold.
Thus, a first preset mapping relationship is obtained. On this basis, a process of acquiring the movement parameter corresponding to each human body frame in the first global image is described.
In a possible implementation manner, the step S11 of acquiring the movement parameter corresponding to each human body frame in the first global image may include: carrying out target detection on the first global image to obtain a target detection result of the first global image; determining scene information of the first global image and distance information corresponding to each human body frame in the first global image according to a target detection result of the first global image; and for each human body frame, determining a movement parameter corresponding to the human body frame according to the scene information, the distance information corresponding to the human body frame and a first preset mapping relation.
The target detection result can be used for indicating the position of a human body frame included in the first global image; the scene information may be used to indicate whether the first global image is a global image in a single-person scene or a global image in a multi-person scene; the distance information may be used to indicate a distance between the human body in the human body frame and a first image capturing device, the first image capturing device representing an image capturing device that captures a first global image.
When the target detection result indicates the position of a single human body frame, it may be determined that the first global image is a global image in a single-person scene. When the target detection result indicates the positions of multiple human body frames, it may be determined that the first global image is a global image in a multi-person scene.
In one example, the distance information corresponding to a human body frame may be determined according to the position of the human body frame in the first global image. Specifically, the coordinate of a preset position of the human body frame (for example, the lower-left vertex, the upper-right vertex, or the center point) in the y-axis direction of the first global image may be used as the distance information corresponding to the human body frame. Taking the coordinate system shown in fig. 2 as an example, the smaller the y-axis coordinate of the center point of the human body frame, the closer the human body is to the first image acquisition device; the larger that coordinate, the farther the human body is from the first image acquisition device.
In the embodiment of the present disclosure, the entry of the first preset mapping relation that matches the scene information and the distance information corresponding to the human body frame may be looked up, and the movement parameter in the matched entry may be determined as the movement parameter corresponding to the human body frame.
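Putting step S11 together under the same illustrative assumptions as above (scene decided by the number of detected frames, distance information taken from the center point's y-coordinate, nearest-distance matching into the first preset mapping relation), a sketch follows.

```python
# Sketch of step S11 under the assumptions used in these examples; the
# matching strategy (nearest distance key) is an illustrative choice.

def movement_parameters(detections, first_preset_mapping):
    """detections: list of (x1, y1, x2, y2) human body frames.
    first_preset_mapping: {(scene, distance): movement_parameter}.
    Assumes the mapping contains at least one entry for the scene."""
    scene = "single" if len(detections) == 1 else "multi"
    params = []
    for x1, y1, x2, y2 in detections:
        distance_info = (y1 + y2) / 2   # center-point y-coordinate
        key = min((k for k in first_preset_mapping if k[0] == scene),
                  key=lambda k: abs(k[1] - distance_info))
        params.append(first_preset_mapping[key])
    return params
```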
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic; for brevity, these combinations are not described in detail here. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their function and possible inherent logic.
In addition, the present disclosure also provides an image segmentation apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any image segmentation method provided by the present disclosure; for the corresponding technical solutions, refer to the descriptions in the methods section, which are not repeated here.
Fig. 4 illustrates a block diagram of an image segmentation apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus 40 includes:
a first obtaining module 41, configured to obtain a movement parameter corresponding to each human body frame in a first global image, where the movement parameter is used to represent a movement rule or a movement trend corresponding to the human body frame;
a scaling module 42, configured to scale, for each human body frame, the human body frame according to the movement parameter acquired by the first acquisition module 41, so as to obtain a target frame corresponding to the human body frame;
a first determining module 43, configured to determine a first target image according to the target frame corresponding to each human body frame obtained by scaling by the scaling module 42 and the first global image;
a first segmentation module 44, configured to perform image segmentation on the first target image determined by the first determination module 43 to obtain a human body segmentation result of the first global image.
In the embodiment of the disclosure, the size of the target image can be adjusted in real time according to the movement rule or the movement trend of the human body, so that the target image can timely follow the movement of the human body, the probability that the portrait leaves the target image and the probability that the pixel proportion occupied by the portrait in the target image is too small are reduced in the movement process of the human body, and the image segmentation effect when the human body moves is effectively improved.
In a possible implementation manner, the first obtaining module is further configured to:
performing target detection on the first global image to obtain a target detection result of the first global image, wherein the target detection result is used for indicating the position of a human body frame included in the first global image;
determining scene information of the first global image and distance information corresponding to each human body frame in the first global image according to a target detection result of the first global image, wherein the scene information is used for indicating whether the first global image is a global image in a single-person scene or a global image in a multi-person scene, the distance information is used for indicating a distance between a human body in the human body frame and first image acquisition equipment, and the first image acquisition equipment is used for acquiring the first global image;
and for each human body frame, obtaining the movement parameter corresponding to the human body frame according to the scene information, the distance information corresponding to the human body frame, and a first preset mapping relation, wherein the first preset mapping relation is used for indicating the movement parameters corresponding to human body frames at different distances in different scenes.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring a first video, wherein the first video corresponds to a first scene, the first scene is a single-person scene or a multi-person scene, the first video is used for recording the movement of a single person or multiple persons within a first movement range, the first movement range is located at a first distance from a second image acquisition device, and the second image acquisition device is used for acquiring the first video;
the tracking module is used for carrying out limb tracking on a target person in the first video to obtain the position of a human body frame corresponding to the target person in each frame of image of the first video;
a second determining module, configured to determine, according to a position of a human frame corresponding to the target person in each frame of image of the first video, a moving speed, a moving amplitude, and a second distance of the target person, where the second distance is used to indicate a distance between the target person and a reference position of the first video;
the third acquisition module is used for acquiring a first movement parameter according to the movement speed and the movement amplitude of the target person;
and the establishing module is used for establishing the first preset mapping relation based on the first scene, the second distance and the first moving parameter.
In one possible implementation, the apparatus further includes:
the first setting module is used for setting that the movement frequency of a person in a first video is greater than a first frequency threshold value and the length of the projection of the first movement range in the x-axis direction of the camera coordinate system of the second image acquisition device is greater than a first movement threshold value under the condition that the first scene is a single scene;
the second setting module is used for setting the moving frequency of people in the first video to be less than or equal to a second frequency threshold value and the length of the projection of the first moving range in the x-axis direction of the camera coordinate system of the second image acquisition device to be less than or equal to the second moving threshold value under the condition that the first scene is a multi-person scene;
wherein the second frequency threshold is less than or equal to the first frequency threshold, and the second movement threshold is less than or equal to the first movement threshold.
In one possible implementation, the scaling module is further configured to:
determining a scaling coefficient of the human body frame according to the movement parameter corresponding to the human body frame and a second preset mapping relation;
and scaling the human body frame according to the scaling coefficient to obtain the target frame corresponding to the human body frame, wherein the second preset mapping relation is used for indicating the scaling coefficients corresponding to different movement parameters.
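A hedged sketch of this scaling step follows; the breakpoints and coefficients of the second preset mapping relation, and the choice to enlarge the frame about its center, are assumptions for illustration.

```python
# Sketch of the scaling step; the (min movement parameter, coefficient)
# breakpoints and the center-anchored enlargement are assumptions.
SECOND_PRESET_MAPPING = [(0.0, 1.1), (10.0, 1.3), (50.0, 1.6)]

def scale_body_frame(box, movement_param):
    """Look up the scaling coefficient for the movement parameter and
    enlarge the body frame about its center, yielding the target frame."""
    coeff = max((c for lo, c in SECOND_PRESET_MAPPING if movement_param >= lo),
                default=1.0)
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    hw, hh = (x2 - x1) / 2 * coeff, (y2 - y1) / 2 * coeff
    return (cx - hw, cy - hh, cx + hw, cy + hh)
```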
In one possible implementation manner, the first determining module is further configured to:
merging the target frames corresponding to the human body frames to obtain merged frames;
obtaining the first target image according to the merging frame and the first global image;
the first target image corresponds to a merging frame with the smallest area in merging frames capable of covering target frames corresponding to all human body frames.
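By way of illustration, merging and cropping can be sketched as below, under the assumptions that the merged frame is the smallest axis-aligned box covering all target frames and that the global image is a numpy-style H x W x C array.

```python
# Sketch of the merging step: the merged frame is the smallest
# axis-aligned box covering all target frames (box format assumed).

def merge_frames(target_boxes):
    xs1, ys1, xs2, ys2 = zip(*target_boxes)
    return (min(xs1), min(ys1), max(xs2), max(ys2))

def crop_target_image(global_image, merged_box):
    x1, y1, x2, y2 = (int(round(v)) for v in merged_box)
    return global_image[y1:y2, x1:x2]   # numpy H x W x C image assumed
```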
In one possible implementation, the first segmentation module is further configured to:
acquiring a second target image corresponding to a second global image, wherein the second global image is a previous frame image of the first global image in the video, and the second target image represents a target image adopted when a human body segmentation result of the second global image is acquired;
determining a magnitude of movement of the first target image relative to the second target image;
and under the condition that the movement amplitude of the first target image relative to the second target image is larger than a first amplitude threshold value, carrying out image segmentation on the first target image to obtain a human body segmentation result of the first global image.
In one possible implementation, the apparatus further includes:
and the second segmentation module is used for performing image segmentation by adopting the second target image under the condition that the movement amplitude of the first target image relative to the second target image is less than or equal to the first amplitude threshold value, so as to obtain a human body segmentation result of the first global image.
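A sketch of this amplitude-based variant follows; taking the movement amplitude as the distance between the centers of the two target images is one possible definition that the disclosure leaves open, and the threshold value is an assumption.

```python
import math

# Sketch of the amplitude-based variant: the movement amplitude is taken
# here as the distance between the centers of the two target images.
FIRST_AMPLITUDE_THRESHOLD = 20.0   # pixels; illustrative

def movement_amplitude(first_box, second_box):
    cx1, cy1 = (first_box[0] + first_box[2]) / 2, (first_box[1] + first_box[3]) / 2
    cx2, cy2 = (second_box[0] + second_box[2]) / 2, (second_box[1] + second_box[3]) / 2
    return math.hypot(cx1 - cx2, cy1 - cy2)

def pick_target_box_by_amplitude(first_box, second_box):
    if movement_amplitude(first_box, second_box) > FIRST_AMPLITUDE_THRESHOLD:
        return first_box    # large movement: segment the new target image
    return second_box       # small movement: reuse the previous one
```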
In one possible implementation manner, the first segmentation module is further configured to:
acquiring a second target image corresponding to a second global image, wherein the second global image and the first global image belong to the same video, the second global image is a previous frame image of the first global image, and the second target image represents a target image adopted when a human body segmentation result of the second global image is acquired;
determining a coverage of the first target image relative to the second target image;
and under the condition that the coverage rate is smaller than a second amplitude threshold value, carrying out image segmentation on the first target image to obtain a human body segmentation result of the first global image.
In one possible implementation, the apparatus further includes:
and the third segmentation module is used for performing image segmentation by adopting the second target image under the condition that the coverage rate is greater than or equal to the second amplitude threshold value to obtain a human body segmentation result of the first global image.
The method has a specific technical relationship with the internal structure of a computer system and can solve technical problems of how to improve hardware operation efficiency or execution effect (including reducing data storage, reducing data transmission, and increasing hardware processing speed), thereby achieving, in accordance with the laws of nature, the technical effect of improving the internal performance of the computer system.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product, including computer-readable code or a non-transitory computer-readable storage medium carrying computer-readable code; when the code runs in a processor of an electronic device, the processor performs the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 5 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or other terminal device.
Referring to fig. 5, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a complementary metal oxide semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second generation mobile communication technology (2G), a third generation mobile communication technology (3G), a fourth generation mobile communication technology (4G), a long term evolution of universal mobile communication technology (LTE), a fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states, and attributes of a target object by means of various vision-related algorithms, by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining virtuality and reality matched with specific applications. For example, the target object may relate to a face, a limb, a gesture, an action, etc. associated with a human body; a marker or sign associated with an object; or a sand table, a display area, a display item, etc. associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application can relate not only to interactive scenarios such as navigation, explanation, reconstruction, and virtual effect overlay display related to real scenes or articles, but also to special effect processing related to people, such as interactive scenarios including makeup beautification, limb beautification, special effect display, and virtual model display. The detection or identification of the relevant features, states, and attributes of the target object can be realized through a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
Fig. 6 illustrates a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server or terminal device. Referring to fig. 6, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Microsoft's server operating system (Windows Server™), Apple's graphical-user-interface-based operating system (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized with state information of the computer-readable program instructions, and this electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
The foregoing description of the various embodiments is intended to highlight different aspects of the various embodiments that are the same or similar, which can be referenced with one another and therefore are not repeated herein for brevity.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
If the technical solution of the present application involves personal information, a product applying the technical solution clearly informs users of the personal information processing rules and obtains their separate consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying the technical solution obtains the individual's consent before processing the sensitive personal information and at the same time meets the requirement of "express consent". For example, at a personal information collection device such as a camera, a clear and prominent sign is set up to inform people that they are entering a personal information collection range and that personal information will be collected; if a person voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, personal authorization is obtained, with the personal information processing rules conveyed by prominent signs or notices, by means such as pop-up messages or asking the person to upload his or her own personal information. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information processed.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A method of image segmentation, the method comprising:
acquiring a movement parameter corresponding to each human body frame in a first global image, wherein the movement parameter is used for representing a movement rule or a movement trend corresponding to the human body frame;
for each human body frame, scaling the human body frame according to the movement parameter to obtain a target frame corresponding to the human body frame;
determining a first target image according to the target frame corresponding to each human body frame and the first global image;
and carrying out image segmentation on the first target image to obtain a human body segmentation result of the first global image.
2. The method according to claim 1, wherein the obtaining of the movement parameter corresponding to each human body frame in the first global image comprises:
performing target detection on the first global image to obtain a target detection result of the first global image, wherein the target detection result is used for indicating the position of a human body frame included in the first global image;
determining scene information of the first global image and distance information corresponding to each human body frame in the first global image according to a target detection result of the first global image, wherein the scene information is used for indicating whether the first global image is a global image in a single-person scene or a global image in a multi-person scene, the distance information is used for indicating a distance between a human body in the human body frame and first image acquisition equipment, and the first image acquisition equipment is used for acquiring the first global image;
and for each human body frame, obtaining the movement parameters corresponding to the human body frame according to the scene information, the distance information corresponding to the human body frame and a first preset mapping relation, wherein the first preset mapping relation is used for indicating the movement parameters corresponding to the human body frames with different distances in different scenes.
3. The method of claim 2, further comprising:
the method comprises the steps of obtaining a first video, wherein the first video corresponds to a first scene, the first scene is a single-person scene or a multi-person scene, the first video is used for recording the movement condition of a single person or a plurality of persons in a first movement range, the first movement range is a first distance away from a second image acquisition device, and the second image acquisition device is used for acquiring the first video;
performing limb tracking on a target person in the first video to obtain the position of a human body frame corresponding to the target person in each frame of image of the first video;
determining the moving speed, the moving amplitude and a second distance of the target person according to the position of a human body frame corresponding to the target person in each frame of image of the first video, wherein the second distance is used for indicating the distance between the target person and the reference position of the first video;
obtaining a first movement parameter according to the movement speed and the movement amplitude of the target person;
and establishing the first preset mapping relation based on the first scene, the second distance and the first movement parameter.
4. The method of claim 3, further comprising:
setting the moving frequency of a person in a first video to be greater than a first frequency threshold value and the length of the projection of the first moving range in the x-axis direction of the camera coordinate system of the second image acquisition device to be greater than a first moving threshold value under the condition that the first scene is a single-person scene;
setting the moving frequency of people in a first video to be less than or equal to a second frequency threshold value and the length of the projection of the first moving range in the x-axis direction of the camera coordinate system of the second image acquisition equipment to be less than or equal to a second moving threshold value under the condition that the first scene is a multi-person scene;
wherein the second frequency threshold is less than or equal to the first frequency threshold, and the second movement threshold is less than or equal to the first movement threshold.
5. The method according to any one of claims 1 to 4, wherein the scaling the human body frame according to the movement parameter corresponding to the human body frame to obtain the target frame corresponding to the human body frame includes:
determining a scaling coefficient of the human body frame according to the movement parameter corresponding to the human body frame and a second preset mapping relation;
and scaling the human body frame according to the scaling coefficient to obtain a target frame corresponding to the human body frame, wherein the second preset mapping relation is used for indicating the scaling coefficients corresponding to different movement parameters.
6. The method according to any one of claims 1 to 5, wherein the determining a first target image according to the target frame corresponding to each human body frame and the first global image comprises:
merging the target frames corresponding to the human body frames to obtain merged frames;
obtaining the first target image according to the merging frame and the first global image;
the first target image corresponds to a merging frame with the smallest area in merging frames capable of covering target frames corresponding to all human body frames.
7. The method according to any one of claims 1 to 6, wherein the image segmentation on the first target image to obtain the human body segmentation result of the first global image comprises:
acquiring a second target image corresponding to a second global image, wherein the second global image is a previous frame image of the first global image in the video, and the second target image represents a target image adopted when a human body segmentation result of the second global image is acquired;
determining a magnitude of movement of the first target image relative to the second target image;
and under the condition that the movement amplitude of the first target image relative to the second target image is larger than a first amplitude threshold value, carrying out image segmentation on the first target image to obtain a human body segmentation result of the first global image.
8. The method of claim 7, further comprising:
and under the condition that the movement amplitude of the first target image relative to the second target image is smaller than or equal to the first amplitude threshold value, performing image segmentation by using the second target image to obtain a human body segmentation result of the first global image.
9. The method according to any one of claims 1 to 6, wherein the image segmentation on the first target image to obtain the human body segmentation result of the first global image comprises:
acquiring a second target image corresponding to a second global image, wherein the second global image and the first global image belong to the same video, the second global image is a previous frame image of the first global image, and the second target image represents a target image adopted when a human body segmentation result of the second global image is acquired;
determining a coverage of the first target image relative to the second target image;
and under the condition that the coverage rate is smaller than a second amplitude threshold value, carrying out image segmentation on the first target image to obtain a human body segmentation result of the first global image.
10. The method of claim 9, further comprising:
and under the condition that the coverage rate is greater than or equal to the second amplitude threshold value, performing image segmentation by using the second target image to obtain a human body segmentation result of the first global image.
11. An image segmentation apparatus, characterized in that the apparatus comprises:
the first obtaining module is used for obtaining a movement parameter corresponding to each human body frame in a first global image, and the movement parameter is used for representing a movement rule or a movement trend corresponding to the human body frame;
the scaling module is used for scaling, for each human body frame, the human body frame according to the movement parameter acquired by the first acquisition module to obtain a target frame corresponding to the human body frame;
the first determining module is used for determining a first target image according to the target frame corresponding to each human body frame obtained by the scaling of the scaling module and the first global image;
and the first segmentation module is used for carrying out image segmentation on the first target image determined by the first determination module to obtain a human body segmentation result of the first global image.
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 10.
13. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 10.
CN202210322862.0A 2022-03-29 2022-03-29 Image segmentation method and device, electronic equipment and storage medium Pending CN114638817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210322862.0A CN114638817A (en) 2022-03-29 2022-03-29 Image segmentation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210322862.0A CN114638817A (en) 2022-03-29 2022-03-29 Image segmentation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114638817A true CN114638817A (en) 2022-06-17

Family

ID=81951284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210322862.0A Pending CN114638817A (en) 2022-03-29 2022-03-29 Image segmentation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114638817A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination