WO2018040510A1 - 一种图像生成方法、装置及终端设备 - Google Patents

一种图像生成方法、装置及终端设备 (Image generation method, apparatus, and terminal device)

Info

Publication number
WO2018040510A1
WO2018040510A1 (PCT/CN2017/073511)
Authority
WO
WIPO (PCT)
Prior art keywords
face
captured image
person
imaging area
image
Prior art date
Application number
PCT/CN2017/073511
Other languages
English (en)
French (fr)
Inventor
杜辉天
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2018040510A1 publication Critical patent/WO2018040510A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Definitions

  • the present disclosure relates to the field of electronic technologies, and in particular, to an image generation method, apparatus, and terminal device.
  • using the camera of a terminal device to take photos and record video has become a common form of entertainment for users.
  • existing terminal devices offer more and more photographing and recording functions, such as self-portrait and panoramic shooting, but these functions are fairly conventional and do not offer the user much fun.
  • the present disclosure provides an image generation method, apparatus, and terminal device to solve the problem that the related art cannot offer the user an engaging, entertaining experience.
  • an embodiment of the present disclosure provides an image generation method, where the method includes: when the camera is started, acquiring a plurality of face images of a face located in the imaging area at different times within a predetermined time period; stitching the acquired plurality of face images according to a preset stitching template to form captured image data; and
  • encoding the captured image data into a captured image.
  • the step of acquiring, when the camera is started, a plurality of face images of a face located in the imaging area at different times within a predetermined time period includes:
  • acquiring the face images when the camera is started to take a photo, or acquiring the face images when the camera is started to record video.
  • the step of acquiring a plurality of face images of a face located in the imaging area at different times within a predetermined time period includes:
  • determining a person in the imaging area whose face images need to be acquired, and acquiring, by face recognition technology, a face image of the person at intervals of a first time within the predetermined time period.
  • the step of acquiring a plurality of face images of a face located in the imaging area at different times within a predetermined time period includes:
  • determining a person in the imaging area whose face images need to be acquired, and acquiring, by face recognition technology, face images of the person that match a plurality of preset expression samples.
  • the step of determining a person in the imaging area whose face images need to be acquired includes:
  • when only one person is present in the imaging area, determining that person as the person whose face images need to be acquired; when a plurality of people are present in the imaging area, upon detecting a touch operation on the touch screen for selecting one of them, determining the person selected by the touch operation as the person whose face images need to be acquired.
  • the step of acquiring a plurality of face images of a face located in the imaging area at different times within a predetermined time period includes:
  • detecting whether a grab button is triggered, and, when it is detected that the grab button is triggered, acquiring the face image currently presented by the face in the imaging area.
  • after the step of stitching the acquired plurality of face images according to the preset stitching template to form the captured image data, the method further includes:
  • displaying the stitched captured image data on the display interface.
  • the step of stitching the acquired plurality of face images according to the preset stitching template to form the captured image data includes:
  • each time a face image is acquired, filling the face image, in a preset order, into one of the blank tiles of the preset stitching template, where the preset stitching template includes a plurality of blank tiles to be filled with face images;
  • after every blank tile of the preset stitching template has been filled with a face image, the step of encoding the captured image data into the captured image is performed.
  • the step of encoding the captured image data into the captured image includes: detecting whether a stop button is triggered, and, when it is detected that the stop button is triggered, encoding the captured image data into a captured image.
  • an embodiment of the present disclosure provides an image generation apparatus, where the apparatus includes:
  • an acquisition module, configured to acquire, when the camera is started, a plurality of face images of a face located in the imaging area at different times within a predetermined time period;
  • a stitching module, configured to stitch the acquired plurality of face images according to a preset stitching template to form captured image data;
  • an image generation module, configured to encode the captured image data into a captured image.
  • the acquisition module includes:
  • a first acquiring unit, configured to acquire, when the camera is started to take a photo, a plurality of face images of a face located in the imaging area at different times within a predetermined time period; or
  • a second acquiring unit, configured to acquire, when the camera is started to record video, a plurality of face images of a face located in the imaging area at different times within a predetermined time period.
  • the acquisition module includes:
  • a first determining unit, configured to determine a person in the imaging area whose face images need to be acquired;
  • a third acquiring unit, configured to acquire, by face recognition technology, a face image of the person at intervals of a first time within the predetermined time period.
  • the acquisition module includes:
  • a second determining unit, configured to determine a person in the imaging area whose face images need to be acquired;
  • a fourth acquiring unit, configured to acquire, by face recognition technology, face images of the person that match a plurality of preset expression samples.
  • the acquisition module further includes:
  • a third determining unit, configured to determine, when only one person is present in the imaging area, that person as the person whose face images need to be acquired;
  • a fourth determining unit, configured to determine, when a plurality of people are present in the imaging area and a touch operation on the touch screen for selecting one of them is detected, the person selected by the touch operation as the person whose face images need to be acquired.
  • the acquisition module includes:
  • a first detecting unit, configured to detect whether a grab button is triggered;
  • a fifth acquiring unit, configured to acquire, when it is detected that the grab button is triggered, the face image currently presented by the face in the imaging area.
  • the apparatus further includes:
  • a display module, configured to display the stitched captured image data on the display interface.
  • the stitching module includes:
  • a filling unit, configured to fill each acquired face image, in a preset order, into one of the blank tiles of the preset stitching template, where the preset stitching template includes a plurality of blank tiles to be filled with face images;
  • after every blank tile of the preset stitching template has been filled with a face image, the image generation module is triggered to encode the captured image data into a captured image.
  • the image generation module includes:
  • a second detecting unit, configured to detect whether a stop button is triggered;
  • an image generating unit, configured to encode the captured image data into a captured image when it is detected that the stop button is triggered.
  • an embodiment of the present disclosure provides a terminal device, where the terminal device includes the foregoing image generation apparatus.
  • with the image generation method, apparatus, and terminal device provided by the embodiments of the present disclosure, when the camera is started, a plurality of face images of a face located in the imaging area are acquired at different times within a predetermined time period;
  • the plurality of face images are stitched according to a preset stitching template to form captured image data;
  • the captured image data is encoded into a captured image, so that a plurality of face images of a specific face at different times within a predetermined time period can be stitched to form a captured image.
  • the resulting image is fun and enhances the user experience.
  • FIG. 1 is a schematic flowchart of an image generation method according to an embodiment of the present disclosure;
  • FIG. 2a is a schematic diagram of an example of a preset stitching template according to an embodiment of the present disclosure;
  • FIG. 2b is a schematic diagram of another example of a preset stitching template according to an embodiment of the present disclosure;
  • FIG. 3 is another schematic flowchart of an image generation method according to an embodiment of the present disclosure;
  • FIG. 4a is a first schematic diagram of the display interface of a mobile phone in a specific example of the present disclosure;
  • FIG. 4b is a second schematic diagram of the display interface of the mobile phone in the specific example of the present disclosure;
  • FIG. 4c is a third schematic diagram of the display interface of the mobile phone in the specific example of the present disclosure;
  • FIG. 4d is a fourth schematic diagram of the display interface of the mobile phone in the specific example of the present disclosure;
  • FIG. 5 is a schematic structural diagram of an image generation apparatus according to an embodiment of the present disclosure;
  • FIG. 6 is another schematic structural diagram of an image generation apparatus according to an embodiment of the present disclosure.
  • An embodiment of the present disclosure provides an image generation method, which can be applied to a terminal device having a camera function, such as a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a car computer, a desktop computer, a set-top box, a smart TV, or a wearable device.
  • FIG. 1 is a schematic flowchart of the image generation method provided by an embodiment of the present disclosure; the method may include the following steps:
  • Step 101: when the camera is started, acquire a plurality of face images of a face located in the imaging area at different times within a predetermined time period.
  • in the above step, when the camera is started, a plurality of face images of a specific face located in the camera's imaging area are acquired at different times within a predetermined time period. The predetermined time period is a time limit set for the specific application scenario.
  • the predetermined time period may be a default value determined by the terminal device based on the actual application scenario, or may be set by the user according to actual needs; this is not limited in this embodiment.
  • in an example, taking a mobile phone with a front camera and a rear camera as an example, this embodiment can be applied to the case where the front camera is started, that is, when the front camera is started, a plurality of face images of a specific face located in the imaging area of the front camera are acquired at different times within a predetermined time period; this embodiment can also be applied to the case where the rear camera is started, that is, when the rear camera is started, a plurality of face images of a specific face located in the imaging area of the rear camera are acquired at different times within a predetermined time period.
  • in addition, this embodiment can also be applied to the case where the front camera and the rear camera are started simultaneously.
  • in that case, in one application, face images of one specific face located in the imaging areas of the front and rear cameras can be acquired; in another application, face images of one specific face in the front camera's imaging area and face images of another specific face in the rear camera's imaging area can be acquired separately.
  • in this step, the face image can be acquired by face recognition technology, by invoking the face recognition module on the terminal device.
  • the face recognition module is integrated in the HAL (Hardware Abstraction Layer) and can generally be implemented in hard-coded form or in software.
  • the face recognition module can perform face recognition processing on the YUV video data collected in the camera's imaging area and, according to preset coordinates, obtain the region where the face is located in the imaging area, so as to obtain the face image of the specific face; the specific processing method can be implemented according to the related art and is not limited in this embodiment.
  • in addition, to ensure color reproduction and fidelity of the acquired face images, the face image may use a bitmap data format.
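Purely as an illustrative sketch of this face-extraction step, the Python/OpenCV snippet below detects a face in a decoded camera frame and crops it. The Haar-cascade detector, the BGR input format, and the largest-face rule are assumptions; the disclosure only requires generic face recognition on YUV data from the camera, and a HAL-level implementation would convert the YUV frame first.

```python
import cv2

# Haar-cascade face detector bundled with OpenCV (an assumption; the patent
# only requires "face recognition technology", not a specific algorithm).
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(frame_bgr):
    """Return the cropped region of the largest detected face, or None.

    `frame_bgr` stands in for one decoded camera frame; a real HAL-level
    implementation would start from YUV data and convert it first.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest face, mirroring the single-person case of "determine
    # the person whose face images need to be acquired".
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return frame_bgr[y:y + h, x:x + w].copy()
```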
  • Step 102: stitch the acquired plurality of face images according to a preset stitching template to form captured image data.
  • in this step, the preset stitching template includes a plurality of blank tiles to be filled with face images.
  • the preset stitching template can be set according to actual design requirements; it may be a template generated by default or a template defined by the user.
  • for example, the preset stitching template can be as shown in FIG. 2a and FIG. 2b. It should be understood that the preset stitching templates shown in FIG. 2a and FIG. 2b are only examples; a person skilled in the art may adjust or modify the preset stitching template as needed, which is not limited in this disclosure.
  • here, when stitching the plurality of face images according to the preset stitching template, the sizes of the acquired face images can be adaptively adjusted to meet the stitching requirements of the preset stitching template.
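As a rough illustration of such template-based stitching, the sketch below fills face crops into a simple 2x2 grid of equally sized blank tiles, resizing each crop to its tile. The grid shape, tile size, and row-major fill order are assumptions, since the disclosure leaves the concrete template layout to the designer.

```python
import numpy as np
import cv2

def make_template(rows=2, cols=2, tile_h=240, tile_w=240):
    """Create an all-black canvas standing in for a preset stitching template
    whose blank tiles are laid out in a rows x cols grid (layout assumed)."""
    return np.zeros((rows * tile_h, cols * tile_w, 3), dtype=np.uint8)

def fill_tile(canvas, face_bgr, index, rows=2, cols=2):
    """Fill the index-th blank tile (row-major preset order) with a face crop,
    adaptively resizing the crop to the tile size."""
    tile_h = canvas.shape[0] // rows
    tile_w = canvas.shape[1] // cols
    r, c = divmod(index, cols)
    resized = cv2.resize(face_bgr, (tile_w, tile_h))
    canvas[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w] = resized
    return canvas
```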
  • Step 103: encode the captured image data into a captured image.
  • in this step, the captured image data formed by stitching is encoded into a captured image.
  • optionally, to ensure image compression efficiency and wide applicability, the captured image data may be encoded into a captured image in JPG format.
  • in addition, when the captured image is generated, it is named according to a predetermined naming rule and saved.
  • the predetermined naming rule can be implemented according to the related art, which is not limited in this embodiment.
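For instance, this encoding and naming step might be sketched as below, writing the stitched canvas as a JPEG under a timestamp-based file name. The quality setting and naming pattern are illustrative assumptions, as the disclosure defers the naming rule to the related art.

```python
import time
import cv2

def save_captured_image(canvas, out_dir="."):
    """Encode the stitched captured image data as a JPG and save it under a
    timestamp-based name (naming convention assumed for illustration)."""
    name = time.strftime("IMG_%Y%m%d_%H%M%S.jpg")
    path = f"{out_dir}/{name}"
    ok = cv2.imwrite(path, canvas, [cv2.IMWRITE_JPEG_QUALITY, 95])
    if not ok:
        raise IOError("failed to encode captured image")
    return path
```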
  • in a specific application scenario, this embodiment is applicable both to the case where the camera is started to take a photo and to the case where the camera is started to record video. That is, step 101 may include: when the camera is started to take a photo, acquiring a plurality of face images of a face located in the imaging area at different times within a predetermined time period; or, when the camera is started to record video, acquiring a plurality of face images of a face located in the imaging area at different times within a predetermined time period.
  • in addition, there are multiple ways to acquire the face images of the face at different times within the predetermined time period. For example, a time interval can be set, and the terminal device acquires the face images on its own according to that interval.
  • in some optional implementations, in step 101, the step of acquiring a plurality of face images of a face located in the imaging area at different times within a predetermined time period may include: determining a person in the imaging area whose face images need to be acquired; and acquiring, by face recognition technology, a face image of the person at intervals of a first time within the predetermined time period.
  • the first time is a preset interval time and can be set according to actual design requirements; for example, the first time may be 1 second or 2 seconds.
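A timer-driven acquisition of this kind could look like the sketch below, which collects one face crop per interval until the predetermined period elapses or enough tiles are filled. The frame source, the 2-second interval, and the tile count are assumptions, and the snippet reuses the hypothetical `crop_face` helper from the earlier sketch.

```python
import time

def collect_faces_by_interval(read_frame, interval_s=2.0, period_s=20.0, max_faces=4):
    """Collect up to `max_faces` face crops, one every `interval_s` seconds,
    within a predetermined period of `period_s` seconds.

    `read_frame` is a hypothetical callable returning the latest camera frame.
    """
    faces = []
    deadline = time.monotonic() + period_s
    next_shot = time.monotonic()
    while time.monotonic() < deadline and len(faces) < max_faces:
        if time.monotonic() >= next_shot:
            face = crop_face(read_frame())  # crop_face from the earlier sketch
            if face is not None:
                faces.append(face)
            next_shot += interval_s
        time.sleep(0.05)  # avoid busy-waiting between shots
    return faces
```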
  • as another example, the face images can be acquired by having the terminal device recognize the expression features of the face, which adds to the fun of use.
  • in some optional implementations, in step 101, the step of acquiring a plurality of face images of a face located in the imaging area at different times within a predetermined time period may include: determining a person in the imaging area whose face images need to be acquired; and acquiring, by face recognition technology, face images of the person that match a plurality of preset expression samples.
  • here, when only one person is present in the imaging area, that person is determined as the person whose face images need to be acquired; when a plurality of people are present in the imaging area, upon detecting a touch operation on the touch screen for selecting one of them, the person selected by the touch operation is determined as the person whose face images need to be acquired.
  • a plurality of preset expression samples are configured in advance; the preset expression samples may be expression samples with large changes in facial features, such as laughter, anger, or a grimace.
  • when the face image of the determined person obtained by the terminal device through face recognition technology matches one of the plurality of preset expression samples, that face image is acquired.
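Purely to illustrate matching a live face crop against preset expression samples, the sketch below scores the crop against a small set of sample images using normalized template matching and keeps it when the best score clears a threshold. The disclosure does not specify a matching algorithm, sample set, or threshold, so all of these are assumptions; a real implementation would likely use a proper expression classifier.

```python
import cv2

def matches_preset_expression(face_bgr, expression_samples, threshold=0.6):
    """Return True if the face crop matches any preset expression sample.

    `expression_samples` is a hypothetical list of BGR sample images (e.g.
    laughter, anger, grimace); normalized cross-correlation is only a stand-in
    for whatever expression matcher a real implementation would use.
    """
    face_gray = cv2.cvtColor(cv2.resize(face_bgr, (128, 128)), cv2.COLOR_BGR2GRAY)
    for sample in expression_samples:
        sample_gray = cv2.cvtColor(cv2.resize(sample, (128, 128)), cv2.COLOR_BGR2GRAY)
        # Equal-sized inputs give a single correlation score.
        score = cv2.matchTemplate(face_gray, sample_gray, cv2.TM_CCOEFF_NORMED)[0][0]
        if score >= threshold:
            return True
    return False
```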
  • as yet another example, the face images can be acquired through the user's own subjective selection.
  • in some optional implementations, in step 101, the step of acquiring a plurality of face images of a face located in the imaging area at different times within a predetermined time period may include: detecting whether a grab button is triggered; and, when it is detected that the grab button is triggered, acquiring the face image currently presented by the face in the imaging area. Here, when it is detected that the grab button is triggered, the face image currently presented by the specific face in the imaging area is acquired by face recognition technology.
  • the grab button can be a virtual button or a physical button.
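An event-driven variant of this grab-button flow might be sketched as below, where a keyboard key stands in for the grab button and each press grabs the face currently shown in the frame. The key bindings and the preview loop are illustrative assumptions, and the snippet reuses the hypothetical `crop_face` and `fill_tile` helpers from the earlier sketches.

```python
import cv2

def grab_loop(read_frame, canvas, max_faces=4):
    """Preview frames and grab a face crop each time the 'g' key (standing in
    for the grab button) is pressed; stop on 's' (stop button) or when full."""
    count = 0
    while count < max_faces:
        frame = read_frame()
        cv2.imshow("preview", frame)
        key = cv2.waitKey(30) & 0xFF
        if key == ord('g'):
            face = crop_face(frame)
            if face is not None:
                canvas = fill_tile(canvas, face, count)
                count += 1
        elif key == ord('s'):
            break
    cv2.destroyAllWindows()
    return canvas, count
```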
  • in addition, in some optional implementations, in step 102, the step of stitching the acquired plurality of face images according to the preset stitching template to form the captured image data may include:
  • each time a face image is acquired, filling the face image, in a preset order, into one of the blank tiles of the preset stitching template, where the preset stitching template includes a plurality of blank tiles to be filled with face images;
  • where, after every blank tile of the preset stitching template has been filled with a face image, the step of encoding the captured image data into the captured image is performed.
  • here, each time a face image is acquired, the face image is filled, in the preset order, into one blank tile of the preset stitching template; after every blank tile of the preset stitching template has been filled with a face image, step 103 is executed to encode the captured image data into a captured image.
  • in this case, once every blank tile of the preset stitching template has been filled, the captured image data is automatically encoded into a captured image.
  • of course, as a variation of the above implementation, the plurality of face images may instead be filled into the blank tiles of the preset stitching template simultaneously after all of them have been acquired.
  • in addition, to meet the user's operation needs in practical application scenarios, in some optional implementations, in step 103, the step of encoding the captured image data into the captured image may include: detecting whether a stop button is triggered; and, when it is detected that the stop button is triggered, encoding the captured image data into a captured image.
  • here, the stop button may be a virtual button or a physical button.
  • when the user clicks the stop button, the stop button is triggered, and at this time the captured image data is encoded into a captured image.
  • it should be understood that, in this embodiment, the captured image data may be incomplete image data, that is, the blank tiles in the preset stitching template may not all have been filled.
  • FIG. 3 is another schematic flowchart of the image generation method provided by an embodiment of the present disclosure.
  • in the embodiment of the present disclosure, to help the user intuitively view the effect of stitching the face images according to the preset stitching template, after step 102 of stitching the acquired plurality of face images according to the preset stitching template to form the captured image data, the method may further include:
  • Step 104: display the stitched captured image data on the display interface.
  • in an optional implementation, the stitched captured image data may be displayed on the display interface with a predetermined transparency, so that the user can intuitively view the stitched captured image data without affecting the user's view of the image in the imaging area.
  • in addition, in an optional implementation, the display interface may be divided into a plurality of display areas that do not overlap each other; in this case, the stitched captured image data and the image in the imaging area may be displayed in different display areas.
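The semi-transparent preview described above could be approximated as in the sketch below, which alpha-blends a scaled-down copy of the stitched canvas onto a corner of the live frame. The placement, scale, and transparency value are assumptions made only for illustration.

```python
import cv2

def overlay_preview(frame, canvas, alpha=0.6, scale=0.3):
    """Blend a scaled-down stitched preview onto the top-left corner of the
    live frame with a predetermined transparency `alpha`."""
    h = int(frame.shape[0] * scale)
    w = int(frame.shape[1] * scale)
    small = cv2.resize(canvas, (w, h))
    roi = frame[:h, :w]
    frame[:h, :w] = cv2.addWeighted(small, alpha, roi, 1.0 - alpha, 0)
    return frame
```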
  • referring to FIG. 4a and FIG. 4b, in a specific example, a mobile phone 400 is taken as an example; in the mobile phone 400, the touch screen is integrated with the display interface.
  • as shown in FIG. 4a, a "Start Photographing" button 410 is displayed on the display interface of the mobile phone 400, and the "Start Photographing" button 410 is a virtual button.
  • when the user clicks the "Start Photographing" button 410, that is, when the mobile phone detects that the "Start Photographing" button 410 is triggered, the rear camera (not shown) of the mobile phone 400 is started and takes photos.
  • within a predetermined time period, every 2 seconds, the face image of the person located in the imaging area of the rear camera (for ease of explanation, it is assumed that only one person is in the imaging area) is acquired by face recognition technology and cached, the face image being in bitmap data format. Each time a face image is acquired, it is filled, in the preset order, into a blank tile of the preset stitching template (here, the preset stitching template shown in FIG. 2a is used) to form the captured image data, and the formed captured image data is displayed on the display interface at the same time.
  • as shown in FIG. 4b, the display interface then shows the captured image data stitched from three face images, and a "Stop Photographing" button 420 (i.e., the stop button) is also displayed on the display interface; the "Stop Photographing" button 420 is a virtual button.
  • if the user clicks the "Stop Photographing" button 420 at this point, the mobile phone 400 detects that the "Stop Photographing" button 420 is triggered and encodes the captured image data into a captured image in JPG format. If the user does not click the "Stop Photographing" button 420, the captured image data is encoded into a captured image in JPG format after every blank tile of the preset stitching template has been filled with a face image; this specific example does not consider cases such as exiting the photographing function or shutting down that would end the image generation method.
  • referring to FIG. 4a, FIG. 4c, and FIG. 4d, in another specific example, the mobile phone 400 is again taken as an example.
  • in the mobile phone 400, the touch screen is integrated with the display interface.
  • as shown in FIG. 4a, a "Start Photographing" button 410 is displayed on the display interface of the mobile phone 400, and the "Start Photographing" button 410 is a virtual button.
  • when the user clicks the "Start Photographing" button 410, that is, when the mobile phone detects that the "Start Photographing" button 410 is triggered, the rear camera (not shown) of the mobile phone 400 is started and takes photos; at this time, as shown in FIG. 4c, the display interface shows the image in the imaging area of the rear camera and displays a "Stop Photographing" button 420 (i.e., the stop button) and a "Grab" button 430 (i.e., the grab button).
  • the "Stop Photographing" button 420 and the "Grab" button 430 are both virtual buttons.
  • when the user clicks the "Grab" button 430 (assuming only one person is in the imaging area), the mobile phone 400 detects that the "Grab" button 430 is triggered and acquires, by face recognition technology, the face image currently presented by the face in the imaging area and caches it.
  • as shown in FIG. 4d, the display interface then shows the captured image data stitched from three face images, together with the "Stop Photographing" button 420 and the "Grab" button 430. If the user clicks the "Stop Photographing" button 420 at this point, the mobile phone 400 detects that the "Stop Photographing" button 420 is triggered and encodes the captured image data into a captured image in JPG format. If the user does not click the "Stop Photographing" button 420, the captured image data is encoded into a captured image in JPG format after every blank tile of the preset stitching template has been filled with a face image; this specific example does not consider cases such as exiting the photographing function or shutting down that would end the image generation method.
  • with the image generation method provided by the embodiments of the present disclosure, when the camera is started, a plurality of face images of a face located in the imaging area are acquired at different times within a predetermined time period; the acquired plurality of face images are stitched according to a preset stitching template to form captured image data; and the captured image data is encoded into a captured image, so that a plurality of face images of a specific face at different times within a predetermined time period can be stitched to form a captured image, which is fun and improves the user experience.
  • based on the above method provided by the embodiments of the present disclosure, an embodiment of the present disclosure further provides an apparatus for implementing the foregoing method.
  • referring to FIG. 5, which is a schematic structural diagram of the image generation apparatus provided by an embodiment of the present disclosure, an embodiment of the present disclosure provides an image generation apparatus, which may include: an acquisition module 510, a stitching module 520, and an image generation module 530.
  • the acquisition module 510 is configured to acquire, when the camera is started, a plurality of face images of a face located in the imaging area at different times within a predetermined time period;
  • the stitching module 520 is configured to stitch the acquired plurality of face images according to a preset stitching template to form captured image data;
  • the image generation module 530 is configured to encode the captured image data into a captured image.
  • in some optional implementations, the acquisition module 510 may include: a first acquiring unit and a second acquiring unit.
  • the first acquiring unit is configured to acquire, when the camera is started to take a photo, a plurality of face images of a face located in the imaging area at different times within a predetermined time period; or
  • the second acquiring unit is configured to acquire, when the camera is started to record video, a plurality of face images of a face located in the imaging area at different times within a predetermined time period.
  • in some optional implementations, the acquisition module 510 may include: a first determining unit and a third acquiring unit.
  • the first determining unit is configured to determine a person in the imaging area whose face images need to be acquired;
  • the third acquiring unit is configured to acquire, by face recognition technology, a face image of the person at intervals of a first time within the predetermined time period.
  • in some optional implementations, the acquisition module 510 may include: a second determining unit and a fourth acquiring unit.
  • the second determining unit is configured to determine a person in the imaging area whose face images need to be acquired;
  • the fourth acquiring unit is configured to acquire, by face recognition technology, face images of the person that match a plurality of preset expression samples.
  • in addition, the acquisition module 510 may further include: a third determining unit and a fourth determining unit.
  • the third determining unit is configured to determine, when only one person is present in the imaging area, that person as the person whose face images need to be acquired;
  • the fourth determining unit is configured to determine, when a plurality of people are present in the imaging area and a touch operation on the touch screen for selecting one of them is detected, the person selected by the touch operation as the person whose face images need to be acquired.
  • in some optional implementations, the acquisition module 510 includes: a first detecting unit and a fifth acquiring unit.
  • the first detecting unit is configured to detect whether a grab button is triggered;
  • the fifth acquiring unit is configured to acquire, when it is detected that the grab button is triggered, the face image currently presented by the face in the imaging area.
  • FIG. 6 is another schematic structural diagram of the image generation apparatus provided by an embodiment of the present disclosure.
  • the apparatus may further include: a display module 540.
  • the display module 540 is configured to display the stitched captured image data on the display interface.
  • in some implementations, the stitching module 520 may include: a filling unit.
  • the filling unit is configured to fill each acquired face image, in a preset order, into one of the blank tiles of the preset stitching template, where the preset stitching template includes a plurality of blank tiles to be filled with face images;
  • where, after every blank tile of the preset stitching template has been filled with a face image, the image generation module is triggered to encode the captured image data into a captured image.
  • in some implementations, the image generation module 530 may include: a second detecting unit and an image generating unit.
  • the second detecting unit is configured to detect whether a stop button is triggered;
  • the image generating unit is configured to encode the captured image data into a captured image when it is detected that the stop button is triggered.
  • the image generation apparatus provided in this embodiment is based on the same concept as the image generation method provided in the foregoing embodiments.
  • the specific implementation process is described in detail in the method embodiments and, to avoid repetition, is not described here again.
  • in addition, an embodiment of the present disclosure further provides a terminal device, where the terminal device includes the above image generation apparatus.
  • the terminal device is a terminal device with a camera function, such as a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a car computer, a desktop computer, a set-top box, a smart TV, or a wearable device.
  • since the above image generation apparatus has the above technical effects, the terminal device having the image generation apparatus should also have corresponding technical effects.
  • the specific implementation process is similar to that of the above embodiments and is not described again.
  • with the image generation apparatus and terminal device provided by the embodiments of the present disclosure, when the camera is started, a plurality of face images of a face located in the imaging area are acquired at different times within a predetermined time period; the acquired plurality of face images are stitched according to a preset
  • stitching template to form captured image data; and the captured image data is encoded into a captured image, so that a plurality of face images of a specific face at different times within a predetermined time period can be stitched to form a captured image, which is fun and improves the user experience.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions may be stored in a computer-readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product. Based on such an understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the related art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the various embodiments of the present disclosure.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
  • the present disclosure is applicable to the field of electronic technology and is used to stitch a plurality of face images of a specific face at different times within a predetermined time period to form a captured image, which provides good entertainment value and improves the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides an image generation method, apparatus, and terminal device. The method includes: when the camera is started, acquiring a plurality of face images of a face located in the imaging area at different times within a predetermined time period; stitching the acquired plurality of face images according to a preset stitching template to form captured image data; and encoding the captured image data into a captured image. With the image generation method, apparatus, and terminal device provided by the embodiments of the present disclosure, a plurality of face images of a specific face at different times within a predetermined time period can be stitched to form a captured image, which is fun and improves the user experience.

Description

一种图像生成方法、装置及终端设备 技术领域
本公开涉及电子技术领域,特别是指一种图像生成方法、装置及终端设备。
背景技术
如今,使用终端设备的摄像头进行拍照和录像已成为用户的常见娱乐方式。在现有的终端设备中,摄像头拍照和录像功能也越来越多,例如自拍、全景拍摄等,但这些功能都较为中规中矩,不能给用户带来足够趣味性。
发明内容
本公开提供一种图像生成方法、装置及终端设备,以解决相关技术未能够给用户提供趣味性的问题。
第一方面,本公开实施例提供一种图像生成方法,该方法包括:
当摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;
将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据;
将拍摄图像数据编码生成为拍摄图像。
其中,当摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤包括:
当摄像头启动进行拍照时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;或者,
当摄像头启动进行录像时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像。
其中,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤包括:
确定位于摄像区域内需要获取人脸图像的人员;
在预定时间段内每间隔第一时间,根据人脸识别技术获取人员的人脸图像。
其中,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤包括:
确定位于摄像区域内需要获取人脸图像的一人员;
根据人脸识别技术,获取人员的与多个预设表情样本匹配的人脸图像。
其中,确定位于摄像区域内需要获取人脸图像的一人员的步骤包括:
当摄像区域内仅存在一个人员时,确定人员为需要获取人脸图像的人员;
当摄像区域内存在多个人员时,在检测到作用于触摸屏上用于选定其中一个人员的触摸操作,确定触摸操作选定的人员为需要获取人脸图像的人员。
其中,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤包括:
检测一抓取按键是否被触发;
当检测到抓取按键被触发,获取摄像区域内,人脸当前显示的人脸图像。
其中,将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据的步骤之后,该方法还包括:
在显示界面上显示拼接形成的拍摄图像数据。
其中,将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据的步骤包括:
当每获取到一个人脸图像后,将人脸图像按照预设顺序对应填充至预设拼接模板的其中一空白板块中,其中预设拼接模板包括多个用于填充人脸图像的空白板块;
其中,当预设拼接模板的每一空白板块均填充有人脸图像之后,执行将拍摄图像数据编码生成为拍摄图像的步骤。
其中,将拍摄图像数据编码生成为拍摄图像的步骤包括:
检测一停止按键是否被触发;
当检测到停止按键被触发,将拍摄图像数据编码生成为拍摄图像。
第二方面,本公开实施例提供一种图像生成装置,该装置包括:
获取模块,设置为当摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;
拼接模块,设置为将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据;
图像生成模块,设置为将拍摄图像数据编码生成为拍摄图像。
其中,获取模块包括:
第一获取单元,设置为当摄像头启动进行拍照时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;或者,
第二获取单元,设置为当摄像头启动进行录像时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像。
其中,获取模块包括:
第一确定单元,设置为确定位于摄像区域内需要获取人脸图像的人员;
第三获取单元,设置为在预定时间段内每间隔第一时间,根据人脸识别技术获取人员的人脸图像。
其中,获取模块包括:
第二确定单元,设置为确定位于摄像区域内需要获取人脸图像的一人员;
第四获取单元,设置为根据人脸识别技术,获取人员的与多个预设表情样本匹配的人脸图像。
其中,获取模块还包括:
第三确定单元,设置为当摄像区域内仅存在一个人员时,确定人员为需要获取人脸图像的人员;
第四确定单元,设置为当摄像区域内存在多个人员时,在检测到作用于触摸屏上用于选定其中一个人员的触摸操作,确定触摸操作选定的人员为需要获取人脸图像的人员。
其中,获取模块包括:
第一检测单元,设置为检测一抓取按键是否被触发;
第五获取单元,设置为当检测到抓取按键被触发,获取摄像区域内,人脸当前显示的人脸图像。
其中,该装置还包括:
显示模块,设置为在显示界面上显示拼接形成的拍摄图像数据。
其中,拼接模块包括:
填充单元,设置为当每获取到一个人脸图像后,将人脸图像按照预设顺序对应填充至预设拼接模板的其中一空白板块中,其中预设拼接模板包括多个用于填充人脸图像的空白板块;
其中,当预设拼接模板的每一空白板块均填充有人脸图像之后,触发图像生成模块将拍摄图像数据编码生成为拍摄图像。
其中,图像生成模块包括:
第二检测单元,设置为检测一停止按键是否被触发;
图像生成单元,设置为当检测到停止按键被触发,将拍摄图像数据编码生成为拍摄图像。
第三方面,本公开实施例提供一种终端设备,该终端设备包括上述图像生成装置。
与相关技术相比,本公开实施例提供的图像生成方法、装置及终端设备,在摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据;将拍摄图像数据编码生成为拍摄图像,从而可以将特定的人脸在预定时间段内的不同时间的多个人脸图像进行拼接形成拍摄图像,具备良好的趣味性,提高用户体验。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对本公开实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据 这些附图获得其他的附图。
图1表示本公开实施例提供的图像生成方法的一种流程示意图;
图2a表示本公开实施例提供的预设拼接模板的一种示例示意图;
图2b表示本公开实施例提供的预设拼接模板的另一种示例示意图;
图3表示本公开实施例提供的图像生成方法的另一种流程示意图;
图4a表示本公开具体示例中手机的显示界面的示意图之一;
图4b表示本公开具体示例中手机的显示界面的示意图之二;
图4c表示本公开具体示例中手机的显示界面的示意图之三;
图4d表示本公开具体示例中手机的显示界面的示意图之四;
图5表示本公开实施例提供的图像生成装置的一种结构示意图;
图6表示本公开实施例提供的图像生成装置的另一种结构示意图。
具体实施方式
为使本公开要解决的技术问题、技术方案和优点更加清楚,下面将结合附图及具体实施例进行详细描述。
本公开实施例提供一种图像生成方法,可以应用于具备摄像头功能的终端设备,如手机、平板电脑、电子书阅读器、MP3(动态影像专家压缩标准音频层面3,Moving Picture Experts Group Audio Layer III)播放器、MP4(动态影像专家压缩标准音频层面4,Moving Picture Experts Group Audio Layer IV)播放器、膝上型便携计算机、车载电脑、台式计算机、机顶盒、智能电视机、可穿戴设备等。
请参见图1,其示出的是本公开实施例提供的图像生成方法的流程示意图,该方法可以包括以下步骤:
步骤101,当摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像。
上述步骤中,在摄像头启动时,获取位于摄像头的摄像区域内某一特定的人脸在一预定时间段内的不同时间的多个人脸图像。其中,预定时间段是基于具体应用场景设定的一个时间限定,该预定时间段可以由终端设备基于实际应用场景确定的一个默认值,也可以由用户根据实际需要进行设定,该实施例中不对此进行限定。
在一示例中,以具备前置摄像头和后置摄像头的手机为例,该实施例可以适用于前置摄像头启动的情形,即在前置摄像头启动时,获取位于该前置摄像头的摄像区域内一个特定的人脸在预定时间段内的不同时间的多个人脸图像;该实施例也可以适用于后置摄像头启动的情形,即在后置摄像头启动时,获取位于该后置摄像头的摄像区域内的一个特定人脸在预定时间段内的不同时间的多个人脸图像。此外,该实施例还可以适用于前置摄像头和后置摄像头同时启动的情形,此时,在一种应用中,可以获取位于前置摄像头和后置摄像头的摄像区域内某一个特定的人脸在预定时间段内的 不同时间的多个人脸图像;在另一种应用中,可以分别获取位于前置摄像头的摄像区域内一个特定的人脸在预定时间段内的不同时间的多个人脸图像和位于后置摄像头的摄像区域内的另一个特定的人脸在预定时间段内的不同时间的多个人脸图像。
其中,该步骤中,可以通过调用终端设备上的人脸识别模块采用人脸识别技术获取人脸图像。该人脸识别模块被集成在HAL(Hardware Abstraction Layer,硬件抽象层)层,一般可以通过硬编码实现,也可以通过软件实现。该人脸识别模块可以将摄像头的摄像区域采集的YUV视频数据进行人脸识别处理,根据预设坐标得到摄像区域内人脸所在区域,以获取到特定人脸的人脸图像,其具体处理方法可以根据相关技术实现,本实施例不作限定。
另外,为保证所获取的人脸图像的色彩还原以及保真程度,人脸图像可以采用位图数据格式。
步骤102,将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据。
该步骤中,预设拼接模板包括多个用于填充人脸图像的空白板块,预设拼接模板可根据实际设计需求进行设定,可以为默认生成的模板,也可以为用户自定义生成的模板。例如,该预设拼接模板可以如图2a和图2b所示。应理解,图2a和图2b所示出的预设拼接模板仅为示例,本领域技术人员可根据需要对预设拼接模板进行调整或修改,本公开对此不做限定。这里,在将多个人脸图像按照预设拼接模板拼接时,可以对获取到多个人脸图像的尺寸进行适应性调整,以满足预设拼接模板的拼接要求。
步骤103,将拍摄图像数据编码生成为拍摄图像。
该步骤中,将拼接形成的拍摄图像数据经编码生成为拍摄图像。其中,可选的,为保证图像压缩效率以及广泛运用性,可以将拍摄图像数据编码生成为JPG格式的拍摄图像。并且,在生成为拍摄图像的同时,以预定命名规则命名拍摄图像并保存,这里,预定命名规则可以根据相关技术实现,本实施例不作限定。
其中,在具体应用场景中,该实施例适用于摄像头启动进行拍照的情形或者摄像头启动进行录像的情形。也就是说,步骤101可以包括:当摄像头启动进行拍照时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;或者,当摄像头启动进行录像时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像。
另外,获取人脸在预定时间段内的不同时间的多个人脸图像的方式可以有多种。
例如,可以通过设定时间间隔,由终端设备根据时间间隔自行获取人脸图像。在一些可选的实施方式中,步骤101,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤可以包括:确定位于摄像区域内需要获取人脸图像的人员;在预定时间段内每间隔第一时间,根据人脸识别技术获取人员的人脸图像。
这里,当摄像区域内仅存在一个人员时,确定该人员为需要获取人脸图像的人员;当摄像区域内存在多个人员时,在检测到作用于触摸屏上用于选定其中一个人员的触 摸操作,确定触摸操作选定的人员为需要获取人脸图像的人员。其中,第一时间为预先设定的一个间隔时间,该第一时间可以根据实际设计需求进行设定,例如,该第一时间可以为1秒或2秒。
再例如,可以通过终端设备识别人脸的表情特征来获取人脸图像,以此可以加强使用趣味性。在一些可选的实施方式中,步骤101中,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤可以包括:确定位于摄像区域内需要获取人脸图像的一人员;根据人脸识别技术,获取人员的与多个预设表情样本匹配的人脸图像。
这里,当摄像区域内仅存在一个人员时,确定该人员为需要获取人脸图像的人员;当摄像区域内存在多个人员时,在检测到作用于触摸屏上用于选定其中一个人员的触摸操作,确定触摸操作选定的人员为需要获取人脸图像的人员。其中,预先配置有多个预设表情样本,该预设表情样本可以为例如大笑、愤怒、鬼脸等具备较大人脸特征变化的表情样本,当终端设备根据人脸识别技术获取到的确定人员的人脸图像与该多个预设表情样本中的某一预设表情样本相匹配时,获取该人脸图像。
又例如,可以通过用户主观选择的方式来获取人脸图像。在一些可选的实施方式中,步骤101中,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤可以包括:检测一抓取按键是否被触发;当检测到抓取按键被触发,获取摄像区域内,人脸当前显示的人脸图像。这里,在检测到该抓取按键被触发时,根据人脸识别技术获取摄像区域内,特定的人脸当前显示的人脸图像。其中,该抓取按键可以为虚拟按键,也可以为实体按键。
另外,在一些可选的实施方式中,步骤102,将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据的步骤可以包括:
当每获取到一个人脸图像后,将人脸图像按照预设顺序对应填充至预设拼接模板的其中一空白板块中,其中预设拼接模板包括多个用于填充人脸图像的空白板块;其中,当预设拼接模板的每一空白板块均填充有人脸图像之后,执行将拍摄图像数据编码生成为拍摄图像的步骤。
这里,在每获取到一个人脸图像后,即以预设顺序将该人脸图像填充至预设拼接模板中的一个空白板块中,并在预设拼接模板中每一空白板块均已填充人脸图像后,执行步骤103,将拍摄图像数据编码生成为拍摄图像。此时,在预设拼接模板的每一空白板块填充完成后,自动将拍摄图像数据编码生成为拍摄图像。当然,作为上述实施方式的一种变形,可以在获取到多个人脸图像后再将该多个人脸图像同时分别填充至预设拼接模板的空白板块中。
另外,为满足实际应用场景中用户的操作需求,在一些可选的实施方式中,步骤103,将拍摄图像数据编码生成为拍摄图像的步骤可以包括:检测一停止按键是否被触发;当检测到停止按键被触发,将拍摄图像数据编码生成为拍摄图像。
这里,该停止按键可以为虚拟按键,也可以为实体按键。当用户点击停止按键时,该停止按键被触发,此时将拍摄图像数据编码生成为拍摄图像。应理解,在该实施例中,拍摄图像数据可以为不完整的图像数据,即预设拼接模板中的空白板块并未全部填充的情形。
另外,参见图3,其示出的是本公开实施例提供的图像生成方法的另一种流程示意图,本公开实施例中,为便于用户直观查看人脸图像按照预设拼接模板拼接的效果,在步骤102,将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据的步骤之后,该方法还可以包括:
步骤104,在显示界面上显示拼接形成的拍摄图像数据。
在一可选的实施方式中,可以将拼接形成的拍摄图像数据以预定透明度在显示界面上显示,使得在便于用户直观查看拼接形成的拍摄图像数据的同时,不影响用户查看摄像区域内的图像。
另外,在一可选的实施方式中,可以将显示界面划分为多个互不重叠的显示区域,此时可以在不同显示区域内显示拼接形成的拍摄图像数据和摄像区域内的图像。
参见图4a和图4b,在一具体示例中,以手机400为例,该手机400中,触摸屏与显示界面集成为一体。如图4a,在手机400的显示界面上显示有“开始拍照”按键410,该“开始拍照”按键410为虚拟按键。当用户点击该“开始拍照”按键410,即手机检测到该“开始拍照”按键410被触发,此时,手机400的后置摄像头(图未示出)启动并进行拍照,并在预定时间段内每间隔2秒根据人脸识别技术获取位于后置摄像头的摄像区域内的人员(为便于说明,假定摄像区域内仅存在一个人员)的人脸图像并缓存,该人脸图像为位图数据格式。在每获取到一个人脸图像后,将人脸图像按照预设顺序对应填充至预设拼接模板(这里采用如图2a所示的预设拼接模板)的空白板块中,并形成拍摄图像数据,同时在显示界面上显示形成的拍摄图像数据。如图4b所示,此时显示界面上显示拼接有3个人脸图像的拍摄图像数据,同时显示界面上还显示有“停止拍照”按键420(即停止按键),该“停止拍照”按键420为虚拟按键。此时如果用户点击该“停止拍照”按键420,手机400检测到“停止拍照”按键420被触发,将拍摄图像数据编码生成为JPG格式的拍摄图像。如果用户未点击“停止拍照”按键420,则当预设拼接模板的每一空白板块均填充有人脸图像之后,将拍摄图像数据编码生成为JPG格式的拍摄图像,该具体示例中不考虑退出拍照、关机等结束该图像生成方法的情形。
参见图4a、图4c及图4d,在另一具体示例中,同样以手机400为例,该手机400中,触摸屏与显示界面集成为一体。如图4a,在手机400的显示界面上显示有“开始拍照”按键410,该“开始拍照”按键410为虚拟按键。当用户点击该“开始拍照”按键410,即手机检测到该“开始拍照”按键410被触发,手机400的后置摄像头(图未示出)启动并进行拍照,此时,如图4c所示,显示界面上显示后置摄像头的摄像 区域内的图像(图未示出),并显示有“停止拍照”按键420(即停止按键)以及“抓取”按键430(即抓取按键),该“停止拍照”按键420和“抓取”按键430均为虚拟按键。当用户点击“抓取”按键430时,(假定摄像区域内仅存在一个人员),手机400检测到该“抓取”按键430被触发,根据人脸识别技术获取摄像区域内的人脸当前显示的人脸图像并缓存。如图4d所示,此时显示界面上显示拼接有3个人脸图像的拍摄图像数据,同时显示有“停止拍照”按键420及“抓取”按键430;此时如果用户点击该“停止拍照”按键420,手机400检测到“停止拍照”按键420被触发,将拍摄图像数据编码生成为JPG格式的拍摄图像。如果用户未点击“停止拍照”按键420,则当预设拼接模板的每一空白板块均填充有人脸图像之后,将拍摄图像数据编码生成为JPG格式的拍摄图像,该具体示例中不考虑退出拍照、关机等结束该图像生成方法的情形。
本公开实施例提供的图像生成方法,在摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据;将拍摄图像数据编码生成为拍摄图像,从而可以将特定的人脸在预定时间段内的不同时间的多个人脸图像进行拼接形成拍摄图像,具备良好的趣味性,提高用户体验。
基于本公开实施例提供的上述方法,本公开实施例还提供了一种用以实现上述方法的装置。
参见图5,其示出的是本公开实施例提供的图像生成装置的结构示意图,本公开实施例提供一种图像生成装置,该装置可以包括:获取模块510、拼接模块520以及图像生成模块530。
获取模块510,用于当摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;
拼接模块520,用于将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据;
图像生成模块530,用于将拍摄图像数据编码生成为拍摄图像。
其中,在一些可选的实施方式中,获取模块510可以包括:第一获取单元以及第二获取单元。
第一获取单元,用于当摄像头启动进行拍照时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;或者,
第二获取单元,用于当摄像头启动进行录像时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像。
其中,在一些可选的实施方式中,获取模块510可以包括:第一确定单元以及第三获取单元。
第一确定单元,用于确定位于摄像区域内需要获取人脸图像的人员;
第三获取单元,用于在预定时间段内每间隔第一时间,根据人脸识别技术获取人员的人脸图像。
其中,在一些可选的实施方式中,获取模块510可以包括:第二确定单元以及第四获取单元。
第二确定单元,用于确定位于摄像区域内需要获取人脸图像的一人员;
第四获取单元,用于根据人脸识别技术,获取人员的与多个预设表情样本匹配的人脸图像。
另外,获取模块510还可以包括:第三确定单元以及第四确定单元。
第三确定单元,用于当摄像区域内仅存在一个人员时,确定人员为需要获取人脸图像的人员;
第四确定单元,用于当摄像区域内存在多个人员时,在检测到作用于触摸屏上用于选定其中一个人员的触摸操作,确定触摸操作选定的人员为需要获取人脸图像的人员。
其中,在一些可选的实施方式中,获取模块510包括:第一检测单元以及第五获取单元。
第一检测单元,用于检测一抓取按键是否被触发;
第五获取单元,用于当检测到抓取按键被触发,获取摄像区域内,人脸当前显示的人脸图像。
其中,参见图6,其示出的是本公开实施例提供的图像生成装置的另一种结构示意图,该装置还可以包括:显示模块540。
显示模块540,用于在显示界面上显示拼接形成的拍摄图像数据。
其中,拼接模块520可以包括:填充单元。
填充单元,用于当每获取到一个人脸图像后,将人脸图像按照预设顺序对应填充至预设拼接模板的其中一空白板块中,其中预设拼接模板包括多个用于填充人脸图像的空白板块;
其中,当预设拼接模板的每一空白板块均填充有人脸图像之后,触发图像生成模块将拍摄图像数据编码生成为拍摄图像。
其中,图像生成模块530可以包括:第二检测单元以及图像生成单元。
第二检测单元,用于检测一停止按键是否被触发;
图像生成单元,用于当检测到停止按键被触发,将拍摄图像数据编码生成为拍摄图像。
该实施例提供的图像生成装置与前述实施例提供的图像生成方法属于同一构思,其具体实现过程详见描述方法的实施例,为避免重复,这里不再赘述。
此外,本公开实施例还提供一种终端设备,该终端设备包括上述图像生成装置。该终端设备为具备摄像头功能的终端设备,如手机、平板电脑、电子书阅读器、MP3 (动态影像专家压缩标准音频层面3,Moving Picture Experts Group Audio Layer III)播放器、MP4(动态影像专家压缩标准音频层面4,Moving Picture Experts Group Audio Layer IV)播放器、膝上型便携计算机、车载电脑、台式计算机、机顶盒、智能电视机、可穿戴设备等。
由于上述任一种所述图像生成装置具有前述技术效果,因此,具有该图像生成装置的终端设备也应具备相应的技术效果,其具体实施过程与上述实施例类似,兹不赘述。
本公开实施例提供的图像生成装置和终端设备,在摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;将获取到的多个人脸图像按照预设拼接模板拼接形成拍摄图像数据;将拍摄图像数据编码生成为拍摄图像,从而可以将特定的人脸在预定时间段内的不同时间的多个人脸图像进行拼接形成拍摄图像,具备良好的趣味性,提高用户体验。
对于前述的方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本公开并不受所描述的动作顺序的限制,因为依据本公开,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作并不一定是本公开所必需的。
需要说明的是,在发明实施例中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对相关技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述是本公开的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本公开所述原理的前提下,还可以作出若干改进和润饰,这些改进和 润饰也应视为本公开的保护范围。
工业实用性
本公开适用于电子技术领域,用以将特定的人脸在预定时间段内的不同时间的多个人脸图像进行拼接形成拍摄图像,从而具备良好的趣味性,提高用户体验。

Claims (19)

  1. 一种图像生成方法,包括:
    当摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;
    将获取到的多个所述人脸图像按照预设拼接模板拼接形成拍摄图像数据;
    将所述拍摄图像数据编码生成为拍摄图像。
  2. 根据权利要求1所述的方法,其中所述当摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤包括:
    当所述摄像头启动进行拍照时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;或者,
    当所述摄像头启动进行录像时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像。
  3. 根据权利要求1所述的方法,其中所述获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤包括:
    确定位于所述摄像区域内需要获取人脸图像的人员;
    在预定时间段内每间隔第一时间,根据人脸识别技术获取所述人员的人脸图像。
  4. 根据权利要求1所述的方法,其中所述获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤包括:
    确定位于所述摄像区域内需要获取人脸图像的一人员;
    根据人脸识别技术,获取所述人员的与多个预设表情样本匹配的人脸图像。
  5. 根据权利要求3或4所述的方法,其中所述确定位于所述摄像区域内需要获取人脸图像的一人员的步骤包括:
    当所述摄像区域内仅存在一个人员时,确定所述人员为需要获取人脸图像的人员;
    当所述摄像区域内存在多个人员时,在检测到作用于触摸屏上用于选定其中一个人员的触摸操作,确定所述触摸操作选定的人员为需要获取人脸图像的人员。
  6. 根据权利要求1所述的方法,其中所述获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像的步骤包括:
    检测一抓取按键是否被触发;
    当检测到所述抓取按键被触发时,获取所述摄像区域内,所述人脸当前显示的人脸图像。
  7. 根据权利要求1所述的方法,其中所述将获取到的多个所述人脸图像按照预设拼接模板拼接形成拍摄图像数据的步骤之后,所述方法还包括:
    在显示界面上显示拼接形成的所述拍摄图像数据。
  8. 根据权利要求1所述的方法,其中所述将获取到的多个所述人脸图像按照预设拼接模板拼接形成拍摄图像数据的步骤包括:
    当每获取到一个所述人脸图像后,将所述人脸图像按照预设顺序对应填充至所述预设拼接模板的其中一空白板块中,其中所述预设拼接模板包括多个用于填充所述人脸图像的空白板块;
    其中,当所述预设拼接模板的每一空白板块均填充有所述人脸图像之后,执行所述将所述拍摄图像数据编码生成为拍摄图像的步骤。
  9. 根据权利要求1所述的方法,其中所述将所述拍摄图像数据编码生成为拍摄图像的步骤包括:
    检测一停止按键是否被触发;
    当检测到所述停止按键被触发时,将所述拍摄图像数据编码生成为拍摄图像。
  10. 一种图像生成装置,包括:
    获取模块,用于当摄像头启动时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;
    拼接模块,用于将获取到的多个所述人脸图像按照预设拼接模板拼接形成拍摄图像数据;
    图像生成模块,用于将所述拍摄图像数据编码生成为拍摄图像。
  11. 根据权利要求10所述的装置,其中所述获取模块包括:
    第一获取单元,用于当所述摄像头启动进行拍照时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像;或者,
    第二获取单元,用于当所述摄像头启动进行录像时,获取位于摄像区域内一人脸在预定时间段内的不同时间的多个人脸图像。
  12. 根据权利要求10所述的装置,其中所述获取模块包括:
    第一确定单元,用于确定位于所述摄像区域内需要获取人脸图像的人员;
    第三获取单元,用于在预定时间段内每间隔第一时间,根据人脸识别技术获取所述人员的人脸图像。
  13. 根据权利要求10所述的装置,其中所述获取模块包括:
    第二确定单元,用于确定位于所述摄像区域内需要获取人脸图像的一人员;
    第四获取单元,用于根据人脸识别技术,获取所述人员的与多个预设表情样本匹配的人脸图像。
  14. 根据权利要求12或13所述的装置,其中所述获取模块还包括:
    第三确定单元,用于当所述摄像区域内仅存在一个人员时,确定所述人员为需要获取人脸图像的人员;
    第四确定单元,用于当所述摄像区域内存在多个人员时,在检测到作用于触摸屏上用于选定其中一个人员的触摸操作,确定所述触摸操作选定的人员为需要获取人脸图像的人员。
  15. 根据权利要求10所述的装置,其中所述获取模块包括:
    第一检测单元,用于检测一抓取按键是否被触发;
    第五获取单元,用于当检测到所述抓取按键被触发时,获取所述摄像区域内,所述人脸当前显示的人脸图像。
  16. 根据权利要求10所述的装置,还包括:
    显示模块,用于在显示界面上显示拼接形成的所述拍摄图像数据。
  17. 根据权利要求10所述的装置,其中所述拼接模块包括:
    填充单元,用于当每获取到一个所述人脸图像后,将所述人脸图像按照预设顺序对应填充至所述预设拼接模板的其中一空白板块中,其中所述预设拼接模板包括多个用于填充所述人脸图像的空白板块;
    其中,当所述预设拼接模板的每一空白板块均填充有所述人脸图像之后,触发图像生成模块将所述拍摄图像数据编码生成为拍摄图像。
  18. 根据权利要求10所述的装置,其中所述图像生成模块包括:
    第二检测单元,用于检测一停止按键是否被触发;
    图像生成单元,用于当检测到所述停止按键被触发时,将所述拍摄图像数据编码生成为拍摄图像。
  19. 一种终端设备,其中所述终端设备包括如权利要求10至18任一项所述的图像生成装置。
PCT/CN2017/073511 2016-08-29 2017-02-14 一种图像生成方法、装置及终端设备 WO2018040510A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610752867.1 2016-08-29
CN201610752867.1A CN107786803A (zh) 2016-08-29 2016-08-29 一种图像生成方法、装置及终端设备

Publications (1)

Publication Number Publication Date
WO2018040510A1 true WO2018040510A1 (zh) 2018-03-08

Family

ID=61299955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/073511 WO2018040510A1 (zh) 2016-08-29 2017-02-14 一种图像生成方法、装置及终端设备

Country Status (2)

Country Link
CN (1) CN107786803A (zh)
WO (1) WO2018040510A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064526A (zh) * 2018-06-07 2018-12-21 珠海格力电器股份有限公司 一种生成拼图的方法及装置
CN109064397A (zh) * 2018-07-04 2018-12-21 广州希脉创新科技有限公司 一种基于摄像耳机的图像拼接方法及系统
CN109977850A (zh) * 2019-03-23 2019-07-05 西安电子科技大学 基于人脸识别的课堂姓名提示方法
CN110008797A (zh) * 2018-10-08 2019-07-12 杭州中威电子股份有限公司 一种多摄像机多人脸视频接续采集装置及方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108737729A (zh) * 2018-05-04 2018-11-02 Oppo广东移动通信有限公司 自动拍照方法和装置
CN110580808B (zh) * 2018-06-08 2021-03-23 杭州海康威视数字技术股份有限公司 一种信息处理方法、装置、电子设备及智能交通系统
CN110348898B (zh) * 2019-06-28 2023-06-23 若瑞(上海)文化科技有限公司 一种基于人体识别的信息推送方法及装置
CN115760564A (zh) * 2021-09-03 2023-03-07 北京字跳网络技术有限公司 拼图方法、装置、电子设备、服务器及可读介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101001323A (zh) * 2006-12-30 2007-07-18 北京中星微电子有限公司 一种摄像头系统和在摄像头视频流中获取静态图像的方法
US20120147214A1 (en) * 2010-12-08 2012-06-14 Canon Kabushiki Kaisha Imaging apparatus, control method of the apparatus, and program
US20120162459A1 (en) * 2010-12-23 2012-06-28 Altek Corporation Image capturing apparatus and image patchwork method thereof
CN104243846A (zh) * 2013-06-19 2014-12-24 北京千橡网景科技发展有限公司 一种用于图像拼接的方法及装置
CN104574397A (zh) * 2014-12-31 2015-04-29 广东欧珀移动通信有限公司 一种图像处理的方法及移动终端
CN104618651A (zh) * 2015-01-30 2015-05-13 广东欧珀移动通信有限公司 一种拍照方法及装置
KR101598159B1 (ko) * 2015-03-12 2016-03-07 라인 가부시키가이샤 영상 제공 방법 및 영상 제공 장치
CN105812643A (zh) * 2014-12-30 2016-07-27 中兴通讯股份有限公司 拼图的处理方法及装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4717539B2 (ja) * 2005-07-26 2011-07-06 キヤノン株式会社 撮像装置及び撮像方法
JP4999570B2 (ja) * 2007-06-18 2012-08-15 キヤノン株式会社 表情認識装置及び方法、並びに撮像装置
CN104144289B (zh) * 2013-05-10 2018-02-06 华为技术有限公司 拍照方法及装置
CN104008391A (zh) * 2014-04-30 2014-08-27 首都医科大学 一种基于非线性降维的人脸微表情捕捉及识别方法
CN103971131A (zh) * 2014-05-13 2014-08-06 华为技术有限公司 一种预设表情识别方法和装置
CN103986873B (zh) * 2014-05-28 2017-12-01 广州视源电子科技股份有限公司 一种显示设备拍摄方法及显示设备
CN105373784A (zh) * 2015-11-30 2016-03-02 北京光年无限科技有限公司 智能机器人数据处理方法及装置、智能机器人系统

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101001323A (zh) * 2006-12-30 2007-07-18 北京中星微电子有限公司 一种摄像头系统和在摄像头视频流中获取静态图像的方法
US20120147214A1 (en) * 2010-12-08 2012-06-14 Canon Kabushiki Kaisha Imaging apparatus, control method of the apparatus, and program
US20120162459A1 (en) * 2010-12-23 2012-06-28 Altek Corporation Image capturing apparatus and image patchwork method thereof
CN104243846A (zh) * 2013-06-19 2014-12-24 北京千橡网景科技发展有限公司 一种用于图像拼接的方法及装置
CN105812643A (zh) * 2014-12-30 2016-07-27 中兴通讯股份有限公司 拼图的处理方法及装置
CN104574397A (zh) * 2014-12-31 2015-04-29 广东欧珀移动通信有限公司 一种图像处理的方法及移动终端
CN104618651A (zh) * 2015-01-30 2015-05-13 广东欧珀移动通信有限公司 一种拍照方法及装置
KR101598159B1 (ko) * 2015-03-12 2016-03-07 라인 가부시키가이샤 영상 제공 방법 및 영상 제공 장치

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064526A (zh) * 2018-06-07 2018-12-21 珠海格力电器股份有限公司 一种生成拼图的方法及装置
CN109064397A (zh) * 2018-07-04 2018-12-21 广州希脉创新科技有限公司 一种基于摄像耳机的图像拼接方法及系统
CN109064397B (zh) * 2018-07-04 2023-08-01 广州希脉创新科技有限公司 一种基于摄像耳机的图像拼接方法及系统
CN110008797A (zh) * 2018-10-08 2019-07-12 杭州中威电子股份有限公司 一种多摄像机多人脸视频接续采集装置及方法
CN110008797B (zh) * 2018-10-08 2021-12-14 杭州中威电子股份有限公司 一种多摄像机多人脸视频接续采集方法
CN109977850A (zh) * 2019-03-23 2019-07-05 西安电子科技大学 基于人脸识别的课堂姓名提示方法
CN109977850B (zh) * 2019-03-23 2023-01-06 西安电子科技大学 基于人脸识别的课堂姓名提示方法

Also Published As

Publication number Publication date
CN107786803A (zh) 2018-03-09

Similar Documents

Publication Publication Date Title
WO2018040510A1 (zh) 一种图像生成方法、装置及终端设备
WO2019192351A1 (zh) 短视频拍摄方法、装置及电子终端
US10367997B2 (en) Enriched digital photographs
JP6456593B2 (ja) 動画コンテンツの分析に基づいて触覚フィードバックを生成する方法及び装置
KR101545883B1 (ko) 단말의 카메라 제어 방법 및 그 단말
US9584735B2 (en) Front and back facing cameras
TWI255141B (en) Method and system for real-time interactive video
WO2019237745A1 (zh) 人脸图像处理方法、装置、电子设备及计算机可读存储介质
WO2015196920A1 (zh) 动态影像的拍摄方法和拍摄装置
KR20110043612A (ko) 이미지 처리
JP2012023595A (ja) 会議システム
TW200406122A (en) System and method for remote controlled photography
TWI578782B (zh) 基於場景辨識的影像處理
US9706102B1 (en) Enhanced images associated with display devices
US9137461B2 (en) Real-time camera view through drawn region for image capture
JP6445707B2 (ja) 画像処理方法及び装置
WO2018076939A1 (zh) 视频文件的处理方法和装置
CN106485653B (zh) 用户终端及全景图片动态缩略图的生成方法
WO2014161386A1 (zh) 一种摄像设备及其实现拍照的方法
TW202042060A (zh) 用於控制虛擬實境設備的電腦可實現方法、虛擬實境設備及用於控制第一行動裝置和第二行動裝置之電腦可實現方法
JP2016504828A (ja) 単一のカメラを用いて3d画像を取り込む方法およびシステム
TW201218769A (en) 3D digital image monitor system and method
US20140181745A1 (en) Image capture
CN116939275A (zh) 直播虚拟资源展示方法、装置、电子设备、服务器及介质
WO2023174009A1 (zh) 基于虚拟现实的拍摄处理方法、装置及电子设备

Legal Events

Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17844821; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17844821; Country of ref document: EP; Kind code of ref document: A1)