CN108337427B - Image processing method and electronic equipment - Google Patents

Image processing method and electronic equipment

Info

Publication number
CN108337427B
CN108337427B (application CN201810048346.7A)
Authority
CN
China
Prior art keywords
image
target object
guide frame
contour
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810048346.7A
Other languages
Chinese (zh)
Other versions
CN108337427A (en)
Inventor
董培
徐宗
王东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201810048346.7A
Publication of CN108337427A
Application granted
Publication of CN108337427B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method and electronic equipment, wherein the method comprises the following steps: acquiring a first image of a target object; acquiring contour information of the target object based on the first image; generating a first guide frame matched with the contour of the target object based on the acquired contour information; the first guide frame is used for guiding the target object to perform posture adjustment during image acquisition; and displaying the first guide frame.

Description

Image processing method and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and an electronic device.
Background
With self-photography now integrated into users' daily life and social activities, value-added services based on captured user portraits are increasing, for example, self-shot identification photographs, or combining diversified background templates or Augmented Reality (AR) technology with portraits to generate new photographs with different backgrounds or decorations. However, when a user takes a self-portrait, there is no guidance information for adjusting the user's position, so the user can only rely on his or her own photographing experience, and the position of the captured portrait is often not ideal. Providing an image processing scheme that can give position guidance when the user takes a self-portrait therefore becomes an urgent problem to be solved.
Disclosure of Invention
The embodiment of the invention provides an image processing method and electronic equipment, which can generate a guide frame for guiding a user to perform posture adjustment during image acquisition, thereby improving the user's shooting experience.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides an image processing method, which comprises the following steps:
acquiring a first image of a target object;
acquiring contour information of the target object based on the first image;
generating a first guide frame matched with the contour of the target object based on the acquired contour information; the first guide frame is used for guiding the target object to perform posture adjustment during image acquisition;
and displaying the first guide frame.
In the foregoing solution, before the obtaining the first image of the target object, the method further includes:
in response to the electronic device being in an image capture preview state, determining a guide position for presenting a second guide frame; the second guide frame is a guide frame with a preset fixed size;
displaying the second guide frame at the determined guide position.
In the above scheme, the method further comprises:
acquiring a second image of the target object;
determining that the outline of the target object imaged in the second image matches the displayed first guide box;
presenting the first guide frame in the second image with a specific display effect.
In the above scheme, the method further comprises:
receiving a confirmation instruction corresponding to the second image, or determining that a preset time condition is met;
acquiring image information of the target object in the second image within the first guide frame.
In the above scheme, the method further comprises:
performing graph cutting processing on the second image to obtain an image of the target object in the first guide frame;
acquiring a background image for synthesizing a target image;
and synthesizing the target image by taking the acquired background image as background information and taking the image of the target object in the first guide frame as foreground information.
In the foregoing solution, the acquiring the contour information of the target object includes:
performing face detection on a first acquired image of the target object to locate a face region of the target object in response to the target object being a human body object;
based on the determined face region of the target object, positioning the shoulder contour of the target object to obtain contour information of the target object.
In the foregoing solution, the acquiring the contour information of the target object includes:
acquiring depth information of the target object;
and calculating to obtain the contour information of the target object based on the depth information of the target object.
In the foregoing solution, the acquiring a first image of a target object includes:
collecting a plurality of third images of the target object in response to the change of the image collecting posture to obtain an image sequence of the target object;
at least one image is selected from the sequence of images as the first image.
In the foregoing solution, the displaying the first guide frame includes:
in response to the first image being an image selected from the sequence of images of the target object,
determining images in the image sequence, which have an association relation with the first image, according to the time sequence of image acquisition of the target object;
displaying the first guide frame in an image having an association relationship with the first image.
In the foregoing solution, the displaying the first guide frame includes:
acquiring the size of a display area for displaying an image;
determining a first guide frame area in the display area for displaying the first guide frame based on the acquired size of the display area;
displaying the first guide frame in the determined first guide frame area.
An embodiment of the present invention further provides an electronic device, where the electronic device includes:
a memory for storing an executable program;
a processor for implementing, by executing the executable program stored in the memory:
acquiring a first image of a target object;
acquiring contour information of the target object based on the first image;
generating a first guide frame matched with the contour of the target object based on the acquired contour information; the first guide frame is used for guiding the target object to perform posture adjustment during image acquisition;
and displaying the first guide frame.
The image processing method and the electronic equipment provided by the embodiment of the invention have the following beneficial effects:
when the target object is subjected to image acquisition, the guide frame matched with the target object is generated through the acquired contour information of the target object, so that the target object can be guided to perform posture adjustment when the target object is subjected to image acquisition, the shooting experience of a user is improved, and the image acquisition quality of the user can be improved.
Drawings
Fig. 1 is a schematic flow chart of an alternative image processing method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a first guiding box displaying a contour match with a target object according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a request for image confirmation from a user according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of an alternative image processing method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a fixed-size guide frame according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a user's identification photo obtained after target image synthesis is performed according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device as a hardware entity according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings; it should be understood that the embodiments provided herein are only for explaining the present invention and are not intended to limit it. In addition, the following embodiments are some, rather than all, of the embodiments for implementing the invention; other embodiments obtained by those skilled in the art by recombining the technical solutions of the following embodiments without creative effort all belong to the protection scope of the invention.
It should be noted that, in the embodiments of the present invention, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, so that a method or apparatus including a series of elements includes not only the explicitly recited elements but also other elements not explicitly listed or inherent to the method or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of additional related elements (e.g., steps in a method) in the method or apparatus that comprises the element.
It should be noted that the terms "first/second/third" in the embodiments of the present invention merely distinguish similar objects and do not represent a specific ordering of those objects; where permitted, the specific order or sequence denoted by "first/second/third" may be interchanged, so that the embodiments of the invention described herein can be carried out in orders other than those illustrated or described herein.
Fig. 1 is a schematic flow chart of an alternative image processing method according to an embodiment of the present invention, and as shown in fig. 1, the image processing method according to the embodiment of the present invention is applied to an electronic device, and relates to steps 101 to 104, which are described below.
Step 101: a first image of a target object is acquired.
In an embodiment, the image processing method of the embodiment of the present invention is executed when the electronic device detects that it is in an image capturing state of a specific mode; for example, when the electronic device detects that it is currently in a self-timer (selfie) mode, it executes the operation of step 101.
In an embodiment, before acquiring the first image of the target object, the method further comprises:
in response to the electronic device being in an image capture preview state, determining a guide position for presenting a second guide frame; the second guide frame is a guide frame with a preset fixed size; displaying the second guide frame at the determined guide position. That is, when the electronic device is in the image capture preview state, before the first guide frame matching the contour of the target object is displayed, a second guide frame of a fixed size may also be presented at a preset fixed position (i.e., a guide position) for preliminary position guidance of image capture of the target object.
In an embodiment, the target object may be a user, and the electronic device may acquire the first image of the target object by: collecting a plurality of third images of the target object in response to the change of the image collecting posture to obtain an image sequence of the target object; and selecting at least one image from the sequence of images as the first image.
Here, in actual implementation, the cause of the change in the image capturing posture may be a posture (position) change of the target object, such as a head twist of the user, an expression change, or the like; or a change in the image capturing posture caused by a spatial position change of the electronic apparatus itself.
Step 102: and acquiring the contour information of the target object based on the first image.
The first image may be one image selected from the image sequence, with the contour information of the target object obtained by image analysis of that selected image; alternatively, the contour information may be obtained from a plurality of images selected from the image sequence, for example by inputting the selected images into a pre-trained machine model that outputs the contour information of the target object. The plurality of images may be selected randomly or according to a predetermined rule (for example, 20 adjacent images in the image sequence are selected).
In an embodiment, when the number of the first images is one, the contour information of the target object may be acquired by: performing face detection on a first acquired image of the target object to locate a face region of the target object in response to the target object being a human body object; based on the determined face region of the target object, positioning the shoulder contour of the target object to obtain contour information of the target object. In practical implementation, after the face area of the user is located through face detection, the shoulder position of the user can be further determined through a preset head-shoulder proportional relation, and further approximate contour information of the target object is obtained.
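As an illustration only (not the patented implementation), the following Python sketch estimates a coarse head-and-shoulder region from a detected face box; the OpenCV Haar cascade used as the face detector and the shoulder-width and shoulder-drop factors are assumptions introduced here to stand in for the "preset head-shoulder proportional relation".

```python
import cv2

def rough_head_shoulder_region(image_bgr, shoulder_scale=1.9, shoulder_drop=1.4):
    """Estimate a coarse head-and-shoulder bounding region from a detected face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    # Shoulders assumed wider than the face and a fixed fraction below it.
    cx = x + w // 2
    half_shoulder = int(w * shoulder_scale) // 2
    left = max(cx - half_shoulder, 0)
    right = min(cx + half_shoulder, image_bgr.shape[1] - 1)
    bottom = min(int(y + h * (1 + shoulder_drop)), image_bgr.shape[0] - 1)
    return (left, y, right, bottom)
```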
In an embodiment, when the number of first images is one, the contour information of the target object may also be acquired as follows: in response to the target object being a human body object, performing face detection on the acquired first image of the target object to locate a face region of the target object; acquiring red, green and blue (RGB) information in the first image; determining the head and neck contours of the target object based on the acquired RGB information and the located face region; performing image segmentation processing on the first image based on the determined head and neck contours to obtain a trunk contour of the target object in the first image; and obtaining the contour information of the target object based on the recognition results of the head contour, the neck contour and the trunk contour. In practical implementation, the face region of the user is first obtained through face detection; RGB detection then gives the color information of each part of the first image, from which the RGB values of the user's face, i.e. the user's skin, are known, and the neck can be located according to these face RGB values. After the contours of the head and neck are determined, the trunk contour of the user in the first image can be further obtained through a Graph Cut technique, and finally the contour information of the whole user is obtained by combining the processing results.
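A minimal sketch of this segmentation path, assuming OpenCV's GrabCut as the graph-cut implementation and using the located face box to seed the foreground rectangle; the padding and torso-extension factors are illustrative, not values from the patent.

```python
import cv2
import numpy as np

def person_contour_via_grabcut(image_bgr, face_box, pad=1.5, iters=5):
    """Seed GrabCut with a rectangle grown from the face box, return the main contour."""
    x, y, w, h = face_box
    H, W = image_bgr.shape[:2]
    x0 = max(int(x - w * pad / 2), 0)
    y0 = max(int(y - 0.3 * h), 0)
    x1 = min(int(x + w * (1 + pad / 2)), W - 1)
    y1 = min(int(y + h * 4.0), H - 1)          # extend downward to cover the torso
    mask = np.zeros((H, W), np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, (x0, y0, x1 - x0, y1 - y0),
                bgd, fgd, iters, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```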
In an embodiment, the contour information of the target object may also be obtained as follows: acquiring depth information of the target object, and calculating the contour information of the target object based on the depth information. That is, image information of the user can be acquired through a depth camera, and the contour information of the user is derived from the obtained depth information.
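A sketch of this depth-based path, assuming the depth camera delivers a per-pixel depth map in millimetres; the foreground-distance threshold is an assumed value.

```python
import cv2
import numpy as np

def contour_from_depth(depth_mm, max_subject_depth_mm=1500):
    """Treat near pixels as the subject and return the largest contour of that region."""
    fg = ((depth_mm > 0) & (depth_mm < max_subject_depth_mm)).astype(np.uint8) * 255
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove speckle
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```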
Step 103: generating a first guide frame matched with the contour of the target object based on the acquired contour information; the first guide frame is used for guiding the target object to perform posture adjustment during image acquisition.
In practical application, a first guide frame matched with the contour of the target object is generated based on the acquired contour information. Fig. 2 is a schematic diagram showing the first guide frame matched with the contour of the target object according to an embodiment of the present invention; in fig. 2, reference numeral 21 indicates the first guide frame matched with the contour of the user, that is, the first guide frame displayed on the screen of the electronic device matches the contour of the user's image on the screen, so that accurate position guidance can be given to the user when taking a self-portrait.
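One hedged way to render such a contour-matched guide frame on the preview is to simplify the recovered contour and draw it as an overlay; the simplification tolerance and drawing style below are illustrative choices.

```python
import cv2

def draw_guide_frame(preview_bgr, contour, color=(0, 255, 255), thickness=3):
    """Overlay a simplified, closed outline of the subject contour on the preview frame."""
    eps = 0.01 * cv2.arcLength(contour, True)          # simplification tolerance
    simplified = cv2.approxPolyDP(contour, eps, True)
    overlay = preview_bgr.copy()
    cv2.polylines(overlay, [simplified], True, color, thickness)
    return overlay
```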
Step 104: and displaying the first guide frame.
In an embodiment, displaying the first guide frame may include: acquiring the size of a display area for displaying an image; determining a first guide frame area in the display area for displaying the first guide frame based on the acquired size of the display area; and displaying the first guide frame in the determined first guide frame area. That is, in practical applications, a specific guide frame area for displaying the first guide frame may be set, and the first guide frame may be displayed in that area, so as to guide the user to obtain a user image at the optimal position.
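A small sketch of how a guide-frame area could be derived from the display size and the contour scaled into it; the centred 70% area is an assumption for illustration.

```python
import numpy as np

def fit_guide_frame_to_area(contour, display_w, display_h, area_fraction=0.7):
    """Scale/translate a contour-matched guide frame into a centred guide-frame area."""
    area_w, area_h = int(display_w * area_fraction), int(display_h * area_fraction)
    x0, y0 = (display_w - area_w) // 2, (display_h - area_h) // 2
    pts = contour.reshape(-1, 2).astype(np.float32)
    min_xy, max_xy = pts.min(axis=0), pts.max(axis=0)
    span = np.maximum(max_xy - min_xy, 1e-6)
    scale = min(area_w / span[0], area_h / span[1])
    fitted = (pts - min_xy) * scale + np.array([x0, y0], np.float32)
    return fitted.astype(np.int32).reshape(-1, 1, 2)
```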
In an embodiment, when the first image is an image selected from the image sequence of the target object, the first guide frame may be displayed by: determining images in the image sequence, which have an association relation with the first image, according to the time sequence of image acquisition of the target object; displaying the first guide frame in an image having an association relationship with the first image. Here, the image associated with the first image may be a next image adjacent to the first image, or an image separated from the first image by a fixed number of images in the image sequence.
In an embodiment, the method further comprises: acquiring a second image of the target object; determining that the outline of the target object imaged in the second image matches the displayed first guide box; presenting the first guide frame in the second image with a specific display effect. That is, after the first guide frame is displayed, the user changes the position of the user or the electronic device according to the guidance of the guide frame, so that the image of the user is located in the first guide frame, and at this time, the first guide frame can be displayed with a specific display effect (such as flashing, color changing, highlighting, and the like) to prompt the user that the current shooting position is a proper position, and image shooting can be performed.
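An illustrative way to decide that the imaged outline "matches" the displayed guide frame is an intersection-over-union test between the subject mask and the guide-frame polygon (an int32 point array); the 0.85 threshold is an assumed value, not one stated in the patent.

```python
import cv2
import numpy as np

def outline_matches_guide(subject_mask, guide_polygon, iou_threshold=0.85):
    """Return True when the subject mask overlaps the guide-frame polygon closely enough."""
    guide_mask = np.zeros(subject_mask.shape[:2], np.uint8)
    cv2.fillPoly(guide_mask, [guide_polygon], 255)
    inter = np.logical_and(subject_mask > 0, guide_mask > 0).sum()
    union = np.logical_or(subject_mask > 0, guide_mask > 0).sum()
    return union > 0 and inter / union >= iou_threshold
```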
In one embodiment, when the electronic device determines that the outline of the target object imaged in the second image matches the displayed first guide frame, a shooting request may be sent to the user. Fig. 3 is a schematic diagram of requesting the user to confirm an image according to an embodiment of the present invention. As shown in fig. 3, reference numeral 32 indicates the first guide frame presented with a specific display effect when the outline of the target object imaged in the second image matches the displayed first guide frame, and reference numeral 31 indicates an "OK" key displayed on the screen. When the user clicks the "OK" key, a confirmation instruction is sent to the electronic device; after receiving the confirmation instruction corresponding to the second image, the electronic device acquires the image information of the target object within the first guide frame in the second image, and may further perform target image synthesis with the acquired image information, such as generation of an identification photograph. In practical applications, if the user does not click the "OK" key within a set time (e.g., one minute), the electronic device may automatically acquire the image information of the target object within the first guide frame in the second image after the set time is reached.
In another embodiment, after the electronic device determines that the outline of the target object imaged in the second image matches the displayed first guide frame, it determines whether a preset time condition is met, i.e., whether a preset time has been reached; if the preset time is reached, the image information of the target object within the first guide frame in the second image is automatically acquired.
In one embodiment, after the image information of the target object within the first guide frame is acquired, the acquired image may be displayed together with "confirm" and "delete" options, so that the user can confirm whether to retain the acquired image.
By applying the embodiment of the invention, when the user shoots with the electronic device, the electronic device can generate and display a guide frame matched with the user's contour based on the acquired contour information, guiding the user to adjust the shooting posture or the spatial position of the electronic device. When the image of the user enters the guide frame, the guide frame is displayed with a specific display effect to prompt the user that the current shooting posture is suitable for further shooting. This improves the user's shooting experience while obtaining a high-quality user image.
Fig. 4 is a schematic flow chart of an alternative of the image processing method according to the embodiment of the present invention, where the image processing method according to the embodiment of the present invention is applied to an electronic device, and as shown in fig. 4, the image processing method according to the embodiment of the present invention includes:
step 201: and displaying the second guide frame in response to the electronic equipment being in the image acquisition preview state.
Here, in practical implementation, fig. 5 is a schematic diagram of the display of a fixed-size guide frame provided by an embodiment of the present invention. Referring to fig. 5, reference numeral 51 indicates a second guide frame, which may be a guide frame of fixed size; its shape may be set according to actual needs, such as a square or a circle, and it is used to guide the user to perform a preliminary posture adjustment. The guide position at which the second guide frame is presented may be a preset position in the display area of the electronic device. Specifically, the second guide frame may be displayed in the following manner: determining a guide position for presenting the second guide frame, the second guide frame being a guide frame with a preset fixed size; and displaying the second guide frame at the determined guide position. In this way, the user can perform a preliminary posture adjustment according to the displayed second guide frame.
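A minimal sketch of such a preset, fixed-size second guide frame centred in the preview area; the size fractions are assumptions introduced here.

```python
def second_guide_frame_rect(display_w, display_h, w_frac=0.4, h_frac=0.6):
    """Return (x, y, w, h) of a fixed-size guide frame centred in the display area."""
    w, h = int(display_w * w_frac), int(display_h * h_frac)
    return ((display_w - w) // 2, (display_h - h) // 2, w, h)
```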
Step 202: in response to a change in the image acquisition pose, a sequence of images of the target object is acquired.
In practical application, the target object may be a user who performs image shooting, and when the user performs image shooting (self-timer shooting) using the electronic device, the electronic device performs continuous image acquisition on the user to obtain a plurality of images of the user, so as to form an image sequence of the user.
Step 203: and obtaining the contour information of the target object based on the image sequence of the target object.
In one embodiment, the obtained image sequence of the user can be input into a machine learning model obtained by pre-training, which outputs the contour information of the user; alternatively, a plurality of images of the user may be selected from the obtained image sequence, input into the machine learning model, and the contour information of the user output.
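Purely as an illustration of this learned-model path, the sketch below assumes some pretrained portrait-segmentation callable `predict_person_mask` (a hypothetical name, not defined by the patent) and averages its masks over several frames of the sequence to stabilise the contour.

```python
import cv2
import numpy as np

def contour_from_sequence(frames_bgr, predict_person_mask, vote_threshold=0.5):
    """Average per-frame person masks, then extract the dominant contour."""
    # predict_person_mask is a placeholder for any pretrained segmentation model
    # returning a uint8 mask (0/255) per frame.
    masks = [predict_person_mask(f).astype(np.float32) / 255.0 for f in frames_bgr]
    mean_mask = np.mean(masks, axis=0)
    fg = (mean_mask >= vote_threshold).astype(np.uint8) * 255
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```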
In another embodiment, the contour information of the user can also be obtained as follows: selecting (e.g., randomly selecting) an image from the obtained image sequence as a first image, and performing face detection on the first image of the user so as to locate the face region of the user; and locating the shoulder contour of the user based on the determined face region, so as to obtain the contour information of the user. In practical implementation, after the face region of the user is located through face detection, the shoulder position of the user can be further determined through a preset head-shoulder proportional relation, thereby obtaining approximate contour information of the user.
In an embodiment, the contour information of the target object may also be obtained as follows: selecting (e.g., randomly selecting) an image from the obtained image sequence as a first image, and performing face detection on the first image so as to locate the face region of the user; acquiring RGB information in the first image; determining the head and neck contours of the user based on the obtained RGB information and the located face region; performing image segmentation processing on the first image based on the determined head and neck contours to obtain the trunk contour of the user in the first image; and obtaining the contour information of the user based on the recognition results of the head contour, the neck contour and the trunk contour. In practical implementation, the face region of the user is first obtained through face detection; RGB detection gives the color information of each part of the first image, from which the RGB values of the user's face, i.e. the user's skin, are known, and the neck can be located according to these values. After the contours of the head and neck are determined, the trunk contour of the user in the first image can be further obtained through a Graph Cut technique, and finally the contour information of the whole user is obtained by combining the processing results.
Step 204: based on the acquired outline information, a first guide frame matching the outline of the target object is generated and displayed.
Here, after acquiring the contour information of the user, the electronic device may display the contour of the user in the display area based on that information, and further generate a first guide frame matched with the contour of the user in the display area to guide the user to perform posture adjustment during image acquisition, for example, giving accurate position guidance when the user takes a self-portrait.
In one embodiment, the first guide frame may be displayed in a specific display area, for example: acquiring the size of a display area for displaying an image; determining a first guide frame area in the display area for displaying the first guide frame based on the acquired size of the display area; displaying the first guide frame in the determined first guide frame area.
Step 205: acquiring a second image of the target object, judging whether a preset display condition is met, and if so, executing a step 206; if not, step 205 is performed.
Here, in practical implementation, after the first guide frame is displayed, image capturing of the user is continued to obtain a second image, and whether a preset display condition is satisfied is determined based on the captured image. In an embodiment, the determination may be: judging whether the outline of the target object imaged in the second image matches the displayed first guide frame, that is, whether the image of the user in the display area of the electronic device is located within the first guide frame. If the outline imaged in the second image matches the displayed first guide frame, i.e., the image of the user is located within the first guide frame, it is determined that the preset display condition is met; otherwise, it is determined that the preset display condition is not met, and image acquisition of the user continues.
Step 206: presenting the first guide frame in the second image with a specific display effect.
In practical applications, the specific display effect may be any display effect that is prominent or easily noticed by the user, such as flashing, color changing or highlighting, to prompt the user that the current image is already located within the guide frame, i.e., the current shooting position is optimal, and image shooting can be performed.
Step 207: and performing image segmentation processing on the second image to obtain an image of the target object in the first guide frame.
Here, in an embodiment, before the second image is subjected to graph cutting, a confirmation request may be further sent to the user so that the user can confirm that the current user image meets his or her requirements; for example, a "confirm" button is presented, and after the user clicks the "confirm" button, the electronic device receives the confirmation instruction and then performs graph cutting processing on the second image to obtain the image of the user within the first guide frame.
Step 208: and acquiring a background image for synthesizing the target image, and synthesizing the target image by taking the acquired background image as background information and taking the image of the target object in the first guide frame as foreground information.
Here, after the image information of the user is obtained based on the generated guide frame, the target image may be further synthesized. In an embodiment, a background image library may be preset; a background image meeting the requirements is selected from the library, and the target image is then synthesized with the obtained user image as the foreground. Referring to fig. 6, fig. 6 is a schematic diagram of the user's identification photograph obtained after target image synthesis; in fig. 6, reference numeral 61 indicates a background image of a specific color selected from the background image library, and reference numeral 62 indicates the image of the user obtained by image segmentation, displayed as the foreground.
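A hedged sketch of this compositing step, assuming the segmentation produced a binary foreground mask; the solid ID-photo background colour and the edge-feathering width are illustrative choices.

```python
import cv2
import numpy as np

def compose_id_photo(subject_bgr, fg_mask, bg_color=(219, 142, 67), feather=5):
    """Paste the segmented subject over a solid background using a feathered alpha mask."""
    background = np.zeros_like(subject_bgr)
    background[:] = bg_color                       # e.g. an ID-photo blue, in BGR order
    k = 2 * feather + 1
    alpha = cv2.GaussianBlur(fg_mask, (k, k), 0).astype(np.float32)[..., None] / 255.0
    out = subject_bgr.astype(np.float32) * alpha + background.astype(np.float32) * (1.0 - alpha)
    return out.astype(np.uint8)
```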
By applying the embodiment of the invention, when the user shoots with the electronic device, the electronic device can generate and display a guide frame matched with the user's contour based on the acquired contour information, guiding the user to adjust the shooting posture or the spatial position of the electronic device. When the image of the user enters the guide frame, the guide frame is displayed with a specific display effect to prompt the user that the current shooting posture is suitable, and shooting proceeds. The image of the user within the guide frame is then obtained through image segmentation processing, a background image for target image synthesis is acquired, and the target image is synthesized with the user image as the foreground, which improves both the user's shooting experience and the quality of the synthesized image.
Fig. 7 is an example of an electronic device provided as a hardware entity according to an embodiment of the present invention. As shown in fig. 7, the electronic device includes a processor 71, a memory 72, and at least one external communication interface 73; the processor 71, the memory 72 and the external communication interface 73 are all connected through a bus 74, wherein:
a memory 72 for storing an executable program 721;
a processor 71, configured to implement, by executing the executable program stored in the memory 72:
acquiring a first image of a target object;
acquiring contour information of the target object based on the first image;
generating a first guide frame matched with the contour of the target object based on the acquired contour information; the first guide frame is used for guiding the target object to perform posture adjustment during image acquisition;
and displaying the first guide frame.
The processor 71 is further configured to implement, by executing the executable program stored in the memory:
in response to the electronic device being in an image capture preview state, determining a guide position for presenting a second guide frame; the second guide frame is a guide frame with a preset fixed size;
displaying the second guide frame at the determined guide position.
The processor 71 is further configured to implement, by executing the executable program stored in the memory:
acquiring a second image of the target object;
determining that the outline of the target object imaged in the second image matches the displayed first guide box;
presenting the first guide frame in the second image with a specific display effect.
The processor 71 is further configured to implement, by executing the executable program stored in the memory:
receiving a confirmation instruction corresponding to the second image, or determining that a preset time condition is met;
acquiring image information of the target object in the second image within the first guide frame.
The processor 71 is further configured to implement, by executing the executable program stored in the memory:
performing graph cutting processing on the second image to obtain an image of the target object in the first guide frame;
acquiring a background image for synthesizing a target image;
and synthesizing the target image by taking the acquired background image as background information and taking the image of the target object in the first guide frame as foreground information.
The processor 71 is further configured to implement, by executing the executable program stored in the memory:
performing face detection on a first acquired image of the target object to locate a face region of the target object in response to the target object being a human body object;
based on the determined face region of the target object, positioning the shoulder contour of the target object to obtain contour information of the target object.
The processor 71 is further configured to implement, by executing the executable program stored in the memory:
acquiring depth information of the target object;
and calculating to obtain the contour information of the target object based on the depth information of the target object.
The processor 71 is further configured to implement, by executing the executable program stored in the memory:
collecting a plurality of third images of the target object in response to the change of the image collecting posture to obtain an image sequence of the target object;
at least one image is selected from the sequence of images as the first image.
The processor 71 is further configured to implement, by executing the executable program stored in the memory:
in response to the first image being an image selected from the sequence of images of the target object,
determining images in the image sequence, which have an association relation with the first image, according to the time sequence of image acquisition of the target object;
displaying the first guide frame in an image having an association relationship with the first image.
The processor 71 is further configured to implement, by executing the executable program stored in the memory:
acquiring the size of a display area for displaying an image;
determining a first guide frame area in the display area for displaying the first guide frame based on the acquired size of the display area;
displaying the first guide frame in the determined first guide frame area.
It should be noted that: the electronic device and the image processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again. For technical details not disclosed in the embodiments of the electronic device of the present invention, refer to the description of the embodiments of the method of the present invention.
Embodiments of the present invention also provide a storage medium having computer instructions stored thereon, where the instructions, when executed by a processor, implement:
acquiring a first image of a target object;
acquiring contour information of the target object based on the first image;
generating a first guide frame matched with the contour of the target object based on the acquired contour information; the first guide frame is used for guiding the target object to perform posture adjustment during image acquisition;
and displaying the first guide frame.
The instructions when executed by the processor further implement:
in response to the electronic device being in an image capture preview state, determining a guide position for presenting a second guide frame; the second guide frame is a guide frame with a preset fixed size;
displaying the second guide frame at the determined guide position.
The instructions when executed by the processor further implement:
acquiring a second image of the target object;
determining that the outline of the target object imaged in the second image matches the displayed first guide box;
presenting the first guide frame in the second image with a specific display effect.
The instructions when executed by the processor further implement:
receiving a confirmation instruction corresponding to the second image, or determining that a preset time condition is met;
acquiring image information of the target object in the second image within the first guide frame.
The instructions when executed by the processor further implement:
performing graph cutting processing on the second image to obtain an image of the target object in the first guide frame;
acquiring a background image for synthesizing a target image;
and synthesizing the target image by taking the acquired background image as background information and taking the image of the target object in the first guide frame as foreground information.
The instructions when executed by the processor further implement:
performing face detection on a first acquired image of the target object to locate a face region of the target object in response to the target object being a human body object;
based on the determined face region of the target object, positioning the shoulder contour of the target object to obtain contour information of the target object.
The instructions when executed by the processor further implement:
acquiring depth information of the target object;
and calculating to obtain the contour information of the target object based on the depth information of the target object.
The instructions when executed by the processor further implement:
collecting a plurality of third images of the target object in response to the change of the image collecting posture to obtain an image sequence of the target object;
at least one image is selected from the sequence of images as the first image.
The instructions when executed by the processor further implement:
in response to the first image being an image selected from the sequence of images of the target object,
determining images in the image sequence, which have an association relation with the first image, according to the time sequence of image acquisition of the target object;
displaying the first guide frame in an image having an association relationship with the first image.
The instructions when executed by the processor further implement:
acquiring the size of a display area for displaying an image;
determining a first guide frame area in the display area for displaying the first guide frame based on the acquired size of the display area;
displaying the first guide frame in the determined first guide frame area.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (14)

1. An image processing method applied to an electronic device, the method comprising:
collecting a plurality of third images of a target object in response to the change of the image collecting posture to obtain an image sequence of the target object;
selecting an image from the image sequence as a first image;
performing face detection on the acquired first image of the target object to locate a face region of the target object in response to the target object being a human body object based on the first image; acquiring red, green and blue (RGB) information in the first image; determining the head and neck contours of the target object based on the acquired RGB information and the positioned face area of the target object; performing image segmentation processing on the first image based on the determined head and neck contours of the target object to obtain a trunk contour of the target object in the first image; obtaining contour information of the target object based on recognition results of the head contour, the neck contour and the trunk contour of the target object;
generating a first guide frame matched with the contour of the target object based on the acquired contour information; the first guide frame is used for guiding the target object to perform posture adjustment during the acquisition of the identification photo image;
and displaying the first guide frame.
2. The method of claim 1, wherein prior to acquiring a plurality of third images of the target object in response to the change in the image acquisition pose resulting in the sequence of images of the target object, the method further comprises:
in response to the electronic device being in an image capture preview state, determining a guide position for presenting a second guide frame; the second guide frame is a guide frame with a preset fixed size;
displaying the second guide frame at the determined guide position.
3. The method of claim 1, wherein the method further comprises:
acquiring a second image of the target object;
determining that the outline of the target object imaged in the second image matches the displayed first guide box;
presenting the first guide frame in the second image with a specific display effect.
4. The method of claim 3, wherein the method further comprises:
performing graph cutting processing on the second image to obtain an image of the target object in the first guide frame;
acquiring a background image for synthesizing a target image;
and synthesizing the target image by taking the acquired background image as background information and taking the image of the target object in the first guide frame as foreground information.
5. The method of claim 1 or 2, wherein the displaying the first guide frame comprises:
in response to the first image being an image selected from the sequence of images of the target object,
determining images in the image sequence, which have an association relation with the first image, according to the time sequence of image acquisition of the target object;
displaying the first guide frame in an image having an association relationship with the first image.
6. The method of claim 1 or 2, wherein the displaying the first guide frame comprises:
acquiring the size of a display area for displaying an image;
determining a first guide frame area in the display area for displaying the first guide frame based on the acquired size of the display area;
displaying the first guide frame in the determined first guide frame area.
7. An image processing method applied to an electronic device, the method comprising:
collecting a plurality of third images of a target object in response to the change of the image collecting posture to obtain an image sequence of the target object;
selecting at least two images from the image sequence as a first image;
predicting the first image by adopting a machine model to obtain the contour information of the target object;
generating a first guide frame matched with the contour of the target object based on the acquired contour information; the first guide frame is used for guiding the target object to perform posture adjustment during the acquisition of the identification photo image;
and displaying the first guide frame.
8. The method of claim 7, wherein prior to acquiring a plurality of third images of the target object in response to the change in the image acquisition pose resulting in the sequence of images of the target object, the method further comprises:
in response to the electronic device being in an image capture preview state, determining a guide position for presenting a second guide frame; the second guide frame is a guide frame with a preset fixed size;
displaying the second guide frame at the determined guide position.
9. The method of claim 7, wherein the method further comprises:
acquiring a second image of the target object;
determining that the outline of the target object imaged in the second image matches the displayed first guide box;
presenting the first guide frame in the second image with a specific display effect.
10. The method of claim 9, wherein the method further comprises:
performing graph cutting processing on the second image to obtain an image of the target object in the first guide frame;
acquiring a background image for synthesizing a target image;
and synthesizing the target image by taking the acquired background image as background information and taking the image of the target object in the first guide frame as foreground information.
11. The method of claim 7 or 8, wherein the displaying the first guide frame comprises:
in response to the first image being an image selected from the sequence of images of the target object,
determining images in the image sequence, which have an association relation with the first image, according to the time sequence of image acquisition of the target object;
displaying the first guide frame in an image having an association relationship with the first image.
12. The method of claim 7 or 8, wherein the displaying the first guide frame comprises:
acquiring the size of a display area for displaying an image;
determining a first guide frame area in the display area for displaying the first guide frame based on the acquired size of the display area;
displaying the first guide frame in the determined first guide frame area.
13. An electronic device, characterized in that the electronic device comprises:
a memory for storing an executable program;
a processor for implementing, by executing the executable program stored in the memory:
collecting a plurality of third images of a target object in response to the change of the image collecting posture to obtain an image sequence of the target object;
selecting an image from the image sequence as a first image;
performing face detection on the acquired first image of the target object to locate a face region of the target object in response to the target object being a human body object based on the first image; acquiring red, green and blue (RGB) information in the first image; determining the head and neck contours of the target object based on the acquired RGB information and the positioned face area of the target object; performing image segmentation processing on the first image based on the determined head and neck contours of the target object to obtain a trunk contour of the target object in the first image; obtaining contour information of the target object based on recognition results of the head contour, the neck contour and the trunk contour of the target object;
generating a first guide frame matched with the contour of the target object based on the acquired contour information; the first guide frame is used for guiding the target object to perform posture adjustment during the acquisition of the identification photo image;
and displaying the first guide frame.
14. An electronic device, characterized in that the electronic device comprises:
a memory for storing an executable program;
a processor for implementing, by executing the executable program stored in the memory:
collecting a plurality of third images of a target object in response to the change of the image collecting posture to obtain an image sequence of the target object;
selecting at least two images from the image sequence as a first image;
predicting the first image by adopting a machine model to obtain the contour information of the target object;
generating a first guide frame matched with the contour of the target object based on the acquired contour information; the first guide frame is used for guiding the target object to perform posture adjustment during the acquisition of the identification photo image;
and displaying the first guide frame.
CN201810048346.7A 2018-01-18 2018-01-18 Image processing method and electronic equipment Active CN108337427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810048346.7A CN108337427B (en) 2018-01-18 2018-01-18 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810048346.7A CN108337427B (en) 2018-01-18 2018-01-18 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN108337427A CN108337427A (en) 2018-07-27
CN108337427B true CN108337427B (en) 2021-05-18

Family

ID=62925358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810048346.7A Active CN108337427B (en) 2018-01-18 2018-01-18 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN108337427B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111045575A (en) * 2018-10-11 2020-04-21 阿里健康信息技术有限公司 Diagnosis and treatment interaction method and diagnosis and treatment terminal equipment
CN113159973A (en) * 2021-02-25 2021-07-23 华夏方圆信用评估有限公司 Intelligent medical insurance fund dynamic supervision method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103391361A (en) * 2013-07-05 2013-11-13 中科创达软件股份有限公司 Automatic reminding method and device for self-timer composition of intelligent terminal
CN103401994A (en) * 2013-07-11 2013-11-20 广东欧珀移动通信有限公司 Method for guiding to photograph and mobile terminal
US9378601B2 (en) * 2012-03-14 2016-06-28 Autoconnect Holdings Llc Providing home automation information via communication with a vehicle
CN105791796A (en) * 2014-12-25 2016-07-20 联想(北京)有限公司 Image processing method and image processing apparatus
CN106603901A (en) * 2015-10-14 2017-04-26 诺基亚技术有限公司 Method and device used for scene matching

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4904243B2 (en) * 2007-10-17 2012-03-28 富士フイルム株式会社 Imaging apparatus and imaging control method
CN101639610B (en) * 2008-08-01 2011-03-23 鸿富锦精密工业(深圳)有限公司 Digital photographic device and self-shoot guidance method
CN201311536Y (en) * 2008-11-10 2009-09-16 赵之瑰 Portable multifunctional identification picture photographing box
CN102375979A (en) * 2010-08-23 2012-03-14 北京汉林信通信息技术有限公司 Intelligent portrait acquisition equipment and method
CN104243787B (en) * 2013-06-06 2017-09-05 华为技术有限公司 Photographic method, photo management method and equipment
CN105447047B (en) * 2014-09-02 2019-03-15 阿里巴巴集团控股有限公司 It establishes template database of taking pictures, the method and device for recommendation information of taking pictures is provided
CN105046246B (en) * 2015-08-31 2018-10-09 广州市幸福网络技术有限公司 It can carry out the license camera and portrait pose detection method of the shooting prompt of portrait posture
CN106791204A (en) * 2017-02-27 2017-05-31 努比亚技术有限公司 Mobile terminal and its image pickup method
CN107580180B (en) * 2017-08-24 2020-07-10 深圳市华盛昌科技实业股份有限公司 Display method, device and equipment of view frame and computer readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9378601B2 (en) * 2012-03-14 2016-06-28 Autoconnect Holdings Llc Providing home automation information via communication with a vehicle
CN103391361A (en) * 2013-07-05 2013-11-13 中科创达软件股份有限公司 Automatic reminding method and device for self-timer composition of intelligent terminal
CN103401994A (en) * 2013-07-11 2013-11-20 广东欧珀移动通信有限公司 Method for guiding to photograph and mobile terminal
CN105791796A (en) * 2014-12-25 2016-07-20 联想(北京)有限公司 Image processing method and image processing apparatus
CN106603901A (en) * 2015-10-14 2017-04-26 诺基亚技术有限公司 Method and device used for scene matching

Also Published As

Publication number Publication date
CN108337427A (en) 2018-07-27

Similar Documents

Publication Publication Date Title
CN108377334B (en) Short video shooting method and device and electronic terminal
CN107730444B (en) Image processing method, image processing device, readable storage medium and computer equipment
CN107945135B (en) Image processing method, image processing apparatus, storage medium, and electronic device
US20170223265A1 (en) Methods and devices for establishing photographing template database and providing photographing recommendation information
JP4218348B2 (en) Imaging device
CN107862653B (en) Image display method, image display device, storage medium and electronic equipment
CN105373929B (en) Method and device for providing photographing recommendation information
US20090251484A1 (en) Avatar for a portable device
CN108810406B (en) Portrait light effect processing method, device, terminal and computer readable storage medium
WO2017016069A1 (en) Photographing method and terminal
CN111597938B (en) Living body detection and model training method and device
CN113194255A (en) Shooting method and device and electronic equipment
US11403789B2 (en) Method and electronic device for processing images
CN107424117B (en) Image beautifying method and device, computer readable storage medium and computer equipment
CN108337427B (en) Image processing method and electronic equipment
CN112036209A (en) Portrait photo processing method and terminal
CN110611768B (en) Multiple exposure photographic method and device
CN106327588B (en) Intelligent terminal and image processing method and device thereof
CN110047115B (en) Star image shooting method and device, computer equipment and storage medium
JP6497030B2 (en) Imaging system, information processing apparatus, imaging method, program, storage medium
CN111800574B (en) Imaging method and device and electronic equipment
CN112714251A (en) Shooting method and shooting terminal
CN112991157A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112702520A (en) Object photo-combination method and device, electronic equipment and computer-readable storage medium
CN110798614A (en) Photo shooting method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant