WO2019218879A1 - Photographing interaction method and apparatus, storage medium and terminal device - Google Patents

Photographing interaction method and apparatus, storage medium and terminal device

Info

Publication number
WO2019218879A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
posture
data
dimensional data
evaluation information
Prior art date
Application number
PCT/CN2019/085459
Other languages
French (fr)
Chinese (zh)
Inventor
刘耀勇
陈岩
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2019218879A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30168: Image quality inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30244: Camera pose

Definitions

  • The embodiments of the present application relate to the technical field of terminal devices, for example, to a photographing interaction method and apparatus, a storage medium, and a terminal device.
  • As taking pictures with terminal devices becomes more and more convenient, people's attitude towards taking pictures becomes more casual.
  • A user may take multiple photos and then select satisfactory ones from them.
  • the photographing interaction method, device, storage medium and terminal device provided by the embodiments of the present application can optimize the photographing operation of the user.
  • An embodiment of the present application provides a method for photographing interaction, including:
  • Corresponding photographing prompt content is determined according to the imaging posture evaluation information.
  • An embodiment of the present application provides a camera interaction device, including:
  • a three-dimensional data acquisition module configured to acquire, by the recognition camera, three-dimensional data of the posture of the imaging portion, in a case where the recognition camera captures an imaging portion of the user;
  • An evaluation determining module configured to identify the posture three-dimensional data by a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion;
  • the prompt determination module is configured to determine a corresponding shooting prompt content according to the imaging posture evaluation information.
  • the embodiment of the present application provides a computer readable storage medium.
  • the computer readable storage medium stores a computer program, and when the program is executed by the processor, the photo interaction method as described in the embodiment of the present application is implemented.
  • The embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and operable on the processor; when the processor executes the computer program, the photographing interaction method described in the embodiments of the present application is implemented.
  • In the photographing interaction scheme provided in the embodiments of the present application, in a case where a recognition camera captures an imaging portion of a user, the posture three-dimensional data of the imaging portion is acquired by the recognition camera;
  • the posture three-dimensional data is identified by a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion, and the corresponding shooting prompt content is determined according to the imaging posture evaluation information.
  • FIG. 1 is a schematic flowchart of a photo interaction method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a scenario of a photo interaction method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of another scenario of a photo interaction method according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of initial three-dimensional data according to an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present application.
  • FIG. 9 is a structural block diagram of a photo interaction device according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of another terminal device according to an embodiment of the present disclosure.
  • When taking pictures, the shooting effect can differ greatly even for the same person because of differences in shooting angle and shooting distance.
  • When an ordinary user takes a picture with a terminal device, the composition and shooting angle may not be considered, so the resulting photos are often not very attractive.
  • The embodiments of the present application can provide optimization prompts for the user's photographing operation, so that the user can take more attractive photos.
  • FIG. 1 is a schematic flowchart of a method for photographing interaction according to an embodiment of the present disclosure.
  • The method may be performed by a photographing interaction device, where the device may be implemented by software and/or hardware and may be integrated into a terminal device or into other devices with an operating system installed. As shown in FIG. 1, the method includes the following steps.
  • The imaging portion is the body part of the user that appears in the picture captured by the recognition camera.
  • For example, if the photo being taken is a half-length portrait, the imaging portion includes the body parts above the user's waist.
  • the identification camera may be a camera on the terminal device, may be a front camera of the terminal device, and/or a rear camera.
  • at least one camera is generally provided in the terminal device, and generally includes a front camera and a rear camera.
  • the recognition camera may be a front camera of the terminal device, so that the user can know the captured picture through the screen of the terminal device.
  • For another example, if the user being photographed is not the current user of the terminal device but another user, the identification camera may be a rear camera of the terminal device; in that case, the current user of the terminal device uses it to capture the imaging portion of the other user.
  • the image acquired by a conventional camera is generally two-dimensional data, that is, a color value (Red Green Blue (RGB) value) or a set of gray values of pixels arranged in a matrix of rows and columns.
  • the three-dimensional data also includes the depth information of the captured imaging portion, that is, the distance between the different spatial points on the body part of the captured user and the camera, so the three-dimensional data can represent the spatial information of the captured object.
  • the recognition camera may be a camera with a distance sensor, and the distance sensor may acquire the distance between the different spatial points on the captured object and the camera, so that the three-dimensional data of the captured imaging portion can be acquired.
  • the posture three-dimensional data includes: three-dimensional data of the imaging portion captured by the recognition camera; and position data of the imaging portion in the captured image.
  • the posture three-dimensional data may be a set of three-dimensional data acquired when the imaging portion is in a stationary posture.
  • the attitude three-dimensional data may also be a plurality of sets of three-dimensional data acquired when the imaging part makes a dynamic posture.
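  • As a rough illustration (not part of the application), the posture three-dimensional data described above can be modeled as a point cloud plus the position of the imaging portion in the frame, with a static posture stored as one frame and a dynamic posture as a sequence of frames; the class and field names in the sketch below are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PosturePointCloud:
    """One frame of posture three-dimensional data (hypothetical container)."""
    points: List[Tuple[float, float, float]]   # (x, y, depth) for each spatial point
    frame_position: Tuple[float, float]        # normalized (u, v) centre of the imaging portion in the picture

@dataclass
class PostureSequence:
    """A static posture is a single frame; a dynamic posture is several frames in order."""
    frames: List[PosturePointCloud] = field(default_factory=list)
```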
  • the imaging site comprises a user's head.
  • Acquiring the posture three-dimensional data of the imaging portion by the recognition camera includes: determining a posture feature portion of the head, and acquiring the posture three-dimensional data of the posture feature portion of the head by the recognition camera.
  • If the imaging portion is the user's head, it indicates that the user is taking a self-portrait through the terminal device or that an ID photo of the user is being taken.
  • For example, when taking a self-portrait, the camera is close to the user and the shooting angle is limited, so a suitable angle must be selected to get a good-looking photo. For another example, an ID photo has stricter requirements on the shooting angle; if the standard is not met, the ID photo cannot be used.
  • the posture feature portion may be a feature portion for determining a posture of a user's head.
  • For example, the posture feature portion may include the two ears, and the posture of the head may be determined according to the positions of the two ears; the posture feature portion may further include the positions of facial features, such as the eyes, the nose, and the mouth, from which the posture of the head can also be determined.
  • the three-dimensional data of the posture characteristic part can accurately determine the specific posture of the head, and the accuracy of the imaging posture evaluation can be further improved.
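  • A minimal geometric sketch of how a posture feature portion can constrain the head posture is shown below; it assumes the two ears are available as (x, y, depth) points and is only an illustration, not the evaluation model described in the application.

```python
import math

def head_orientation_from_ears(left_ear, right_ear):
    """Rough roll/yaw estimate (in degrees) from the 3-D positions (x, y, depth) of the two ears."""
    dx = right_ear[0] - left_ear[0]
    dy = right_ear[1] - left_ear[1]
    dz = right_ear[2] - left_ear[2]
    roll = math.degrees(math.atan2(dy, dx))  # ears at different heights -> head tilted sideways
    yaw = math.degrees(math.atan2(dz, dx))   # one ear farther from the camera -> head turned
    return roll, yaw

# Example: the right ear is slightly farther from the camera, giving a small yaw.
print(head_orientation_from_ears((0.30, 0.50, 0.62), (0.70, 0.52, 0.66)))
```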
  • S1110 Identify the posture three-dimensional data by using a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion.
  • The posture three-dimensional data is actually one or more sets of data; these sets must be further analyzed and recognized to determine which specific posture of the imaging portion they correspond to.
  • The imaging posture evaluation information includes evaluation information of the shooting picture of the user's imaging portion captured by the recognition camera; the evaluation information may be the difference between the shooting picture and an imaging standard, and the imaging standard may be preset by the system.
  • For example, the imaging standard for an ID photo includes the head at the center of the picture, both eyes facing the camera, and the like.
  • The imaging evaluation model may be a recognition system that has been trained to determine imaging posture evaluation information based on the posture three-dimensional data of the imaging portion; the imaging evaluation model may be pre-stored in the terminal device or in a background server. When the posture three-dimensional data needs to be recognized, the pre-stored imaging evaluation model is called to identify the posture three-dimensional data and determine the imaging posture evaluation information of the photographed body part.
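  • The snippet below is a deliberately simplified stand-in for such an imaging evaluation model: it only illustrates the assumed input/output contract (posture point cloud and frame position in, evaluation information out), and its scoring rules are assumptions for illustration rather than the trained model described here.

```python
import numpy as np

class ToyImagingEvaluationModel:
    """Scores a head point cloud against an assumed ID-photo imaging standard:
    the imaging portion should sit at the frame centre, and its depth profile should be
    roughly left/right symmetric (a head facing the camera squarely)."""

    def __init__(self, standard_centre=(0.5, 0.5)):
        self.standard_centre = np.asarray(standard_centre, dtype=float)

    def evaluate(self, points_xyz, frame_position):
        pts = np.asarray(points_xyz, dtype=float)               # shape (N, 3): x, y, depth
        composition_error = float(np.linalg.norm(np.asarray(frame_position, dtype=float) - self.standard_centre))
        mid = np.median(pts[:, 0])
        left_depth = pts[pts[:, 0] < mid][:, 2].mean()          # mean depth of the left half
        right_depth = pts[pts[:, 0] >= mid][:, 2].mean()        # mean depth of the right half
        posture_error = float(abs(left_depth - right_depth))
        return {"composition_error": composition_error, "posture_error": posture_error}
```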
  • S1120 Determine corresponding shooting prompt content according to the imaging posture evaluation information.
  • the shooting prompt content corresponding to the posture evaluation information may be determined according to the preset mapping table.
  • The shooting prompt content may be content that prompts the user with the evaluation information of the currently captured picture; it may also be content that prompts the user to adjust the shooting operation, where the shooting operation may be an operation on the recognition camera or a posture adjustment by the user being photographed.
  • the shooting prompt content may be display data output through a screen of the terminal device, and may be text data, picture data or animation data, etc.; the shooting prompt content may also be sound data output through a speaker of the terminal device.
  • the content of the shooting prompt can be set according to the system preset or the user's selection, or can be set according to the actual application, which is not limited herein.
  • the user can make adjustments according to the content of the shooting prompt, so that the recognition camera can take a better photo.
  • The imaging posture evaluation information includes an error value between the imaging portion and an imaging standard; the error value includes a posture error value and/or a composition error value.
  • Determining the corresponding shooting prompt content according to the imaging posture evaluation information includes: determining the shooting prompt content according to the error value, where the shooting prompt content is used to prompt the user to perform a corresponding movement to reduce the error value.
  • Prompting the user to perform the corresponding movement may include: the user moving the recognition camera and/or the user adjusting the imaging portion; after the user performs the corresponding movement, the error value between the imaging portion and the imaging standard is reduced, and the user can obtain a better photo.
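  • A minimal sketch of turning such error values into shooting prompt content is shown below, reusing the evaluation dictionary from the sketch above; the thresholds and prompt wording are assumptions, and the application describes this step more generally as a lookup in a preset mapping table.

```python
def shooting_prompt(evaluation, composition_threshold=0.1, posture_threshold=0.05):
    """Map imaging posture evaluation information to shooting prompt content (illustrative only)."""
    prompts = []
    if evaluation["composition_error"] > composition_threshold:
        prompts.append("Lower the camera angle or raise your head so it sits at the centre of the frame.")
    if evaluation["posture_error"] > posture_threshold:
        prompts.append("Turn your head so that it faces the camera squarely.")
    return prompts or ["Posture and composition are close to the imaging standard."]
```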
  • For example, the photo currently being taken is an ID photo of the user, and the user is photographed by the recognition camera 113.
  • The imaging posture evaluation information obtained from the acquired posture three-dimensional data of the imaging portion includes: the composition error value between the imaging portion and the imaging standard is large, because in the currently captured picture of FIG. 2 the lens angle is too high and the head is located in the middle-lower part of the picture, while the imaging standard for an ID photo places the head at the center of the picture.
  • The corresponding shooting prompt content may include: prompting the user to lower the shooting angle of the recognition camera, or prompting the user to raise the position of the head.
  • After adjustment, the head of the user photographed by the recognition camera is located at the center of the picture, which is closer to the imaging standard; the recognition camera then captures the adjusted imaging portion and can produce a better ID photo.
  • For another example, the imaging posture evaluation information obtained from the acquired posture three-dimensional data may include: the posture error value between the imaging portion and the imaging standard is large.
  • In this case, the shooting prompt content can prompt the user to adjust the posture and/or to adjust the position of the recognition camera.
  • In the photographing interaction method provided in the embodiments of the present application, in a case where a recognition camera captures an imaging portion of a user, the posture three-dimensional data of the imaging portion is acquired by the recognition camera;
  • the posture three-dimensional data is identified by a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion, and the corresponding shooting prompt content is determined according to the imaging posture evaluation information.
  • FIG. 4 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present application. Based on the technical solution provided by the foregoing embodiment, the operation of determining the corresponding shooting prompt content according to the imaging posture evaluation information is described. As shown in FIG. 4, the method includes the following steps.
  • S1200 When the recognition camera captures the head of the user, determine a posture feature portion of the head, and acquire, by the recognition camera, the posture three-dimensional data of the posture feature portion of the head; acquire illumination information of the head, and determine the position of the light source according to the illumination information.
  • The illumination information of the head is the distribution of illumination values over the spatial points of the head in the picture captured by the recognition camera; the illumination values may be determined according to the pixel values of the captured picture.
  • If the average illumination value of the head in the captured picture is lower than the average illumination value of the overall picture, it may be determined that the current shooting angle is backlit and the head of the user in the captured picture may be too dark. It is therefore necessary to determine the position of the light source, so that the user's face can be turned toward the light source to obtain a clear picture.
  • The position of the light source may be determined according to the distribution of illumination values over the spatial points of the head; for example, if the average illumination value on the left side of the head is greater than that on the right side, it may be determined that the light source is located to the left of the head. After the position of the light source is determined, the user can be further assisted in capturing a clearer picture.
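  • A minimal sketch of this illumination comparison is given below; it assumes a grayscale frame and a bounding box for the head, both of which are illustrative conventions rather than details given in the application.

```python
import numpy as np

def estimate_light_source_side(gray_frame, head_box):
    """Guess the light source side and a backlight flag from the head region of a grayscale frame.

    gray_frame: H x W array of pixel intensities; head_box: (top, bottom, left, right) in pixels.
    """
    top, bottom, left, right = head_box
    frame = np.asarray(gray_frame, dtype=float)
    head = frame[top:bottom, left:right]
    mid = head.shape[1] // 2
    left_mean = head[:, :mid].mean()
    right_mean = head[:, mid:].mean()
    backlit = head.mean() < frame.mean()                 # head darker than the overall picture
    side = "left" if left_mean > right_mean else "right"
    return {"light_source_side": side, "possibly_backlit": bool(backlit)}
```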
  • S1210 Identify the posture three-dimensional data by using a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion.
  • The imaging posture evaluation information includes an evaluation of the posture of the imaging portion in the shooting picture, while the light source position determines how clear the imaging portion appears in the shooting picture; therefore, the picture captured by the recognition camera can be judged comprehensively according to both the imaging posture evaluation information and the position of the light source, which improves the accuracy of the evaluation of the shooting picture.
  • For example, the obtained imaging posture evaluation information includes: in the currently captured picture, the lens angle is too high, causing the face to be located in the middle-lower part of the picture and to deviate from the imaging standard.
  • In addition, the illumination of the user's head is not uniform: the average illumination value on the left side of the head is higher than that on the right side, which does not meet the requirement of uniform facial illumination for an ID photo.
  • Prompting the user according to both pieces of information allows the user to adjust the posture or the recognition camera more accurately.
  • In this way, the accuracy of the evaluation of the picture captured by the recognition camera can be improved, and the accuracy of the shooting prompt content can be further improved.
  • FIG. 5 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present disclosure. Based on the foregoing embodiments, the operation of identifying the posture three-dimensional data by a preset imaging evaluation model to determine the imaging posture evaluation information of the imaging portion is described. As shown in FIG. 5, the method includes the following steps.
  • S1310 Identify the posture three-dimensional data by using a preset imaging evaluation model to determine an imaging category of the imaging portion, and determine imaging posture evaluation information of the imaging portion according to the imaging category.
  • Photos come in various types; for example, portrait photos include head shots, self-portraits, half-length photos, full-length photos, and the like, and different types of photos have different imaging standards.
  • A head shot places higher requirements on the angle and posture of the head, while half-length and full-length photos place higher requirements on picture composition. Therefore, when the user takes a picture with the recognition camera, the imaging category may first be determined according to the posture three-dimensional data of the imaging portion, and then the corresponding imaging standard may be determined according to the imaging category, which improves the accuracy of the imaging posture evaluation information of the imaging portion.
  • In the embodiment of the present application, the posture three-dimensional data is identified by a preset imaging evaluation model to determine the imaging category of the imaging portion, and the imaging posture evaluation information of the imaging portion is determined according to the imaging category. By first determining the imaging category and then selecting the imaging standard corresponding to that category, the accuracy of the imaging posture evaluation information is improved, and the efficiency of the user's photographing operation is further improved.
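  • The sketch below illustrates the idea of first deciding an imaging category and then selecting a category-specific imaging standard; the vertical-extent heuristic and the standard values are assumptions of the sketch, since the application determines the category with the trained evaluation model.

```python
def imaging_category_from_extent(points_xyz):
    """Very rough category guess from the vertical extent of the imaged part in the frame."""
    ys = [p[1] for p in points_xyz]          # normalized vertical coordinates of the spatial points
    extent = max(ys) - min(ys)
    if extent < 0.3:
        return "head_shot"
    if extent < 0.7:
        return "half_length"
    return "full_length"

# Assumed category-specific imaging standards (illustrative values only).
IMAGING_STANDARDS = {
    "head_shot":   {"centre": (0.5, 0.5), "max_head_tilt_deg": 5.0},
    "half_length": {"centre": (0.5, 0.45)},
    "full_length": {"centre": (0.5, 0.5)},
}
```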
  • FIG. 6 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present disclosure. Based on the foregoing embodiments, the operation of acquiring the posture three-dimensional data of the imaging portion by the recognition camera is described. As shown in FIG. 6, the method includes the following steps.
  • In a case where the recognition camera captures an imaging portion of the user, the part depth data and the part infrared data of the imaging portion are acquired by the recognition camera.
  • The recognition camera is a three-dimensional (3D) camera, which includes various hardware and may include an infrared sensor, a distance sensor, a lens, and the like.
  • the part depth data is a set of distance values of the spatial point included in the imaging part from the recognition camera; the part depth data of the imaging part can be acquired by identifying the distance sensor in the camera.
  • The part infrared data is the collection of infrared data reflected by the spatial points of the imaging portion.
  • The infrared sensor emits an infrared signal toward the imaging portion, the imaging portion reflects the infrared signal, and the infrared sensor can image the imaging portion according to the received infrared data.
  • The part depth data includes the distance values of the spatial points of the imaging portion, so the initial three-dimensional data of the imaging portion can be determined according to the part depth data.
  • For example, points a, b, c, and d in FIG. 7 are four spatial points, and the X, Y, and Z axes represent the coordinate space, where the Z axis represents the depth data of a spatial point and the X and Y axes represent the plane coordinates of the spatial point.
  • The depth data of point a is the largest, that is, point a is the farthest from the recognition camera. It can be seen from FIG. 7 that a three-dimensional pyramid can be formed from the plane coordinates and depth data of the four spatial points; thus the depth data and plane coordinates of the spatial points determine the initial three-dimensional data.
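  • A minimal sketch of building such initial three-dimensional data from a depth map is given below; marking missing readings as NaN is a convention of the sketch, not something stated in the application.

```python
import numpy as np

def initial_3d_from_depth(depth_map):
    """Turn a depth map into initial three-dimensional data: each pixel's column/row index
    supplies the plane coordinates (X, Y) and its value supplies the depth (Z), as in FIG. 7.

    Returns the valid (x, y, z) points and the plane coordinates of pixels whose depth is missing.
    """
    depth = np.asarray(depth_map, dtype=float)
    ys, xs = np.indices(depth.shape)
    points = np.stack([xs.ravel(), ys.ravel(), depth.ravel()], axis=1)
    missing = np.isnan(points[:, 2])
    return points[~missing], points[missing][:, :2]
```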
  • However, data may be missing at some detail positions of the imaging portion in the initial three-dimensional data, so the initial three-dimensional data further needs to be corrected according to the part infrared data.
  • S1420 Correct the initial three-dimensional data according to the part infrared data to obtain three-dimensional posture data of the imaging part.
  • the depth data of each spatial point and the infrared data are in one-to-one correspondence.
  • The infrared data corresponding to the depth data of the spatial points can therefore be measured against the overall initial three-dimensional data, and the missing spatial points can then be complemented.
  • The infrared signal is an electromagnetic wave that the human eye cannot see.
  • Infrared light still propagates at night or when the environment is dark and there is no visible light, so clear infrared imaging can be produced even in a dark environment; the initial three-dimensional data can therefore be corrected based on the part infrared data.
  • For example, a fitting relationship function may be established from the depth data and infrared data of the adjacent spatial points, and the missing depth data may be calculated from this fitting relationship function and the infrared data of the missing spatial point, thereby obtaining the corrected posture three-dimensional data, where a missing spatial point is a spatial point whose depth data is missing and the adjacent spatial points are the spatial points adjacent to it.
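  • The sketch below shows one possible fitting-based correction, assuming a linear relationship between depth and infrared intensity among neighbouring points; the application only states that a fitting relationship function is established, so the linear form and window size are assumptions.

```python
import numpy as np

def fill_missing_depth(depth_map, ir_map):
    """Fill missing depth values using the part infrared data.

    For each missing pixel, a linear fit depth ~ slope * ir + intercept is built from its
    valid 3x3 neighbours, then the missing depth is predicted from the pixel's own infrared value.
    """
    depth = np.asarray(depth_map, dtype=float).copy()
    ir = np.asarray(ir_map, dtype=float)
    h, w = depth.shape
    for i, j in zip(*np.where(np.isnan(depth))):
        ys = slice(max(i - 1, 0), min(i + 2, h))
        xs = slice(max(j - 1, 0), min(j + 2, w))
        d, r = depth[ys, xs].ravel(), ir[ys, xs].ravel()
        valid = ~np.isnan(d)
        if valid.sum() >= 2 and np.ptp(r[valid]) > 0:
            slope, intercept = np.polyfit(r[valid], d[valid], 1)
            depth[i, j] = slope * ir[i, j] + intercept
        elif valid.any():
            depth[i, j] = d[valid].mean()    # fall back to the neighbour average
    return depth
```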
  • S1430 Identify the posture three-dimensional data by using a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion.
  • With two-dimensional data, the posture of the imaging portion can be recognized by image processing and recognition techniques.
  • However, two-dimensional data only contains planar image data and places high requirements on lighting. If the user poses the imaging portion in a dark environment, accurate imaging posture evaluation information may not be recognized from the acquired image data, so the accuracy achievable with two-dimensional data is lower.
  • In this embodiment, the initial three-dimensional data of the imaging portion is determined according to the part depth data, and the initial three-dimensional data is corrected according to the part infrared data to obtain the posture three-dimensional data of the imaging portion.
  • Even dark positions can be identified, because the initial three-dimensional data can be corrected by the part infrared data to obtain complete posture three-dimensional data, thereby improving the accuracy of the recognition of the imaging posture evaluation information.
  • FIG. 8 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present disclosure.
  • the method includes the following steps.
  • S1500 Input preset sample data into a preset classifier for training, and obtain an imaging evaluation model.
  • The imaging evaluation model is configured to determine the corresponding imaging posture evaluation information according to the captured posture three-dimensional data of the imaging portion; the preset sample data includes sample three-dimensional data of the imaging portion and the corresponding sample imaging posture evaluation information.
  • The preset sample data may include a plurality of different sample items, each corresponding to acquired sample three-dimensional data and the corresponding sample imaging posture evaluation information. For example, if the imaging portion is the head, the preset sample data may be sample three-dimensional data of different ID photos and the corresponding sample imaging posture evaluation information, where the sample three-dimensional data includes samples with different imaging posture evaluation information.
  • Correspondingly, the preset sample data may include the sample three-dimensional data of the imaging portion, the corresponding imaging category, and the corresponding sample imaging posture evaluation information. Since the sample three-dimensional data of each imaging category is different, the corresponding imaging standards are also different, and so is the posture evaluation information. Therefore, the sample three-dimensional data, the corresponding imaging category, and the corresponding sample imaging posture evaluation information are input as preset sample data into the preset classifier for training, to obtain an imaging evaluation model that can identify the input posture three-dimensional data, determine the corresponding imaging category, and determine the imaging posture evaluation information of the imaging portion according to the imaging category.
  • The preset classifier may be a neural network. The preset sample data is input into the preset classifier for training; the preset classifier extracts feature data from the sample three-dimensional data, and since the sample three-dimensional data is labeled with the corresponding sample imaging posture evaluation information and/or imaging category, the corresponding imaging posture evaluation information and/or imaging category can be determined from the extracted feature data.
  • the obtained imaging evaluation model can identify the postures made by the imaging parts of different users, and determine the corresponding imaging posture evaluation information.
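  • A minimal training sketch under stated assumptions is shown below: it uses a small multilayer perceptron as the "preset classifier" and assumes fixed-length feature vectors have already been extracted from the sample three-dimensional data, with sample imaging posture evaluation labels (and/or imaging categories) as the targets.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier   # one possible preset classifier; an assumed choice

def train_imaging_evaluation_model(sample_features, sample_labels):
    """Train a stand-in imaging evaluation model from preset sample data."""
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    model.fit(np.asarray(sample_features, dtype=float), np.asarray(sample_labels))
    return model

# Usage sketch:
#   model = train_imaging_evaluation_model(features, labels)
#   evaluation = model.predict(new_posture_features)
```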
  • S1520 Identify the posture three-dimensional data by using a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion.
  • The trained imaging evaluation model can perform feature extraction and classification on the posture three-dimensional data and determine the corresponding imaging posture evaluation information, thereby improving the accuracy of the imaging posture evaluation information.
  • FIG. 9 is a structural block diagram of a camera interaction device according to an embodiment of the present disclosure.
  • the device may perform a photo interaction method.
  • The device includes: a three-dimensional data acquisition module 210 configured to acquire, in a case where a recognition camera captures an imaging portion of a user, the posture three-dimensional data of the imaging portion by the recognition camera; an evaluation determination module 211 configured to identify the posture three-dimensional data by a preset imaging evaluation model to determine the imaging posture evaluation information of the imaging portion; and a prompt determination module 212 configured to determine the corresponding shooting prompt content according to the imaging posture evaluation information.
  • The photographing interaction device acquires, in a case where the recognition camera captures an imaging portion of the user, the posture three-dimensional data of the imaging portion by the recognition camera;
  • the posture three-dimensional data is identified by a preset imaging evaluation model to determine the imaging posture evaluation information of the imaging portion, and the corresponding shooting prompt content is determined according to the imaging posture evaluation information.
  • The imaging portion includes the user's head; correspondingly, the three-dimensional data acquisition module is configured to acquire the posture three-dimensional data of the imaging portion by the recognition camera by: determining a posture feature portion of the head, and acquiring the posture three-dimensional data of the posture feature portion of the head by the recognition camera.
  • the device further includes: a light source determining module, configured to acquire illumination information of the head in the case that the recognition camera captures the imaged portion of the user, and determine a position of the light source according to the illumination information;
  • the prompt determination module is configured to: determine corresponding shooting prompt content according to the imaging posture evaluation information and the light source position.
  • the evaluation determining module is configured to: identify the three-dimensional data of the posture by a preset imaging evaluation model, determine an imaging category of the imaging portion, and determine the imaging portion according to the imaging category. Imaging posture evaluation information.
  • The imaging posture evaluation information includes an error value between the imaging portion and an imaging standard; correspondingly, the prompt determination module is configured to determine the shooting prompt content according to the error value, where the shooting prompt content is used to prompt the user to perform a corresponding movement to reduce the error value.
  • The recognition camera is a three-dimensional camera; correspondingly, the three-dimensional data acquisition module is configured to: acquire the part depth data and the part infrared data of the imaging portion by the recognition camera; determine the initial three-dimensional data of the imaging portion according to the part depth data; and correct the initial three-dimensional data according to the part infrared data to obtain the posture three-dimensional data of the imaging portion.
  • The apparatus further includes a training module configured to input the preset sample data into the preset classifier for training, before the posture three-dimensional data is identified by the preset imaging evaluation model, to obtain the imaging evaluation model, where the imaging evaluation model is configured to determine the corresponding imaging posture evaluation information according to the captured posture three-dimensional data of the imaging portion, and the preset sample data includes the sample three-dimensional data of the imaging portion and the corresponding sample imaging posture evaluation information.
  • the prompt determination module is configured to determine the shooting prompt content corresponding to the imaging posture evaluation information according to the preset mapping table.
  • the shooting hint content includes one of the following: text data, picture data, animation data, and sound data.
  • For a storage medium containing computer-executable instructions provided by an embodiment of the present application, the computer-executable instructions are not limited to the photographing interaction operations described above, and may also perform related operations in the photographing interaction method provided by any embodiment of the present application.
  • The embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a photographing interaction method, the method including: in a case where a recognition camera captures an imaging portion of a user, acquiring the posture three-dimensional data of the imaging portion by the recognition camera; identifying the posture three-dimensional data by a preset imaging evaluation model to determine the imaging posture evaluation information of the imaging portion; and determining the corresponding shooting prompt content according to the imaging posture evaluation information.
  • Storage medium: any of various types of memory devices or storage devices.
  • The term "storage medium" is intended to include: an installation medium, such as a Compact Disc Read-Only Memory (CD-ROM), a floppy disk, or a tape device; computer system memory or random access memory, such as Dynamic Random Access Memory (DRAM), Double Data Rate Random Access Memory (DDR RAM), Static Random Access Memory (SRAM), Extended Data Output Random Access Memory (EDO RAM), or Rambus Random Access Memory (Rambus RAM); non-volatile memory, such as flash memory or magnetic media (for example a hard disk or optical storage); and registers or other similar types of memory elements.
  • the storage medium may also include other types of memory or multiple types of memory combinations.
  • the storage medium may be located in a first computer system in which the program is executed, or may be located in a different second computer system, the second computer system being coupled to the first computer system via a network, such as the Internet.
  • the second computer system can provide program instructions to the first computer for execution.
  • the term "storage medium" can include two or more storage media that can reside in different locations (eg, in different computer systems connected through a network).
  • a storage medium may store program instructions (eg, program instructions implemented as a computer program) executable by one or more processors.
  • the embodiment of the present application provides a terminal device, where the camera interaction device provided by the embodiment of the present application can be integrated.
  • FIG. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
  • The embodiment of the present application provides a terminal device 30, including a memory 31, a processor 32, and a computer program stored in the memory 31 and operable on the processor 32.
  • When the processor executes the computer program, the photographing interaction method described in the foregoing embodiments is implemented.
  • the terminal device provided by the embodiment of the present application can optimize the photographing operation of the user.
  • FIG. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
  • The terminal device may include: a casing (not shown in FIG. 11), a touch screen (not shown in FIG. 11), a touch button (not shown in FIG. 11), a memory 301, a central processing unit (CPU) 302 (also referred to as a processor, hereinafter the CPU 302), a circuit board (not shown in FIG. 11), and a power supply circuit (not shown in FIG. 11).
  • The circuit board is disposed inside the space enclosed by the casing; the CPU 302 and the memory 301 are disposed on the circuit board; and the power supply circuit is configured to supply power to the circuits or devices of the terminal device.
  • The memory 301 is configured to store executable program code; the CPU 302 runs a computer program corresponding to the executable program code by reading the executable program code stored in the memory 301, to implement the following steps: in a case where a recognition camera captures an imaging portion of the user, acquiring the posture three-dimensional data of the imaging portion by the recognition camera; identifying the posture three-dimensional data by a preset imaging evaluation model to determine the imaging posture evaluation information of the imaging portion; and determining the corresponding shooting prompt content according to the imaging posture evaluation information.
  • The terminal device further includes: a peripheral interface 303, a radio frequency (RF) circuit 305, an audio circuit 306, a speaker 311, a power management chip 308, an input/output (I/O) subsystem 309, a touch screen 312, other input/control devices 310, and an external port 304, which communicate via one or more communication buses or signal lines 307.
  • The terminal device 300 shown in FIG. 11 is only one example of a terminal device; the terminal device 300 may have more or fewer components than those shown in FIG. 11, may combine two or more components, or may have a different component configuration.
  • the various components shown in Figure 11 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the following describes a terminal device for implementing a photo interaction provided by the embodiment, where the terminal device takes a mobile phone as an example.
  • The memory 301 can be accessed by the CPU 302, the peripheral interface 303, and the like. The memory 301 can include high-speed random access memory, and can also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • Peripheral interface 303 which can connect the input and output peripherals of the device to CPU 302 and memory 301.
  • I/O subsystem 309 which can connect input and output peripherals on the device, such as touch screen 312 and other input/control devices 310, to peripheral interface 303.
  • I/O subsystem 309 can include display controller 3091 and one or more input controllers 3092 that are configured to control other input/control devices 310.
  • one or more input controllers 3092 receive electrical signals from other input/control devices 310 or transmit electrical signals to other input/control devices 310, and other input/control devices 310 may include physical buttons (press buttons, Rocker button, etc.), dial, slide switch, joystick, click wheel.
  • the input controller 3092 can be connected to any of the following: a keyboard, an infrared port, a Universal Serial Bus (USB) interface, and a pointing device such as a mouse.
  • The touch screen 312 is the input interface and output interface between the terminal device and the user, and displays visual output to the user.
  • the visual output may include graphics, text, icons, videos, and the like.
  • Display controller 3091 in I/O subsystem 309 receives an electrical signal from touch screen 312 or an electrical signal to touch screen 312.
  • the touch screen 312 detects the contact on the touch screen, and the display controller 3091 converts the detected contact into an interaction with the user interface object displayed on the touch screen 312, that is, realizes human-computer interaction, and the user interface object displayed on the touch screen 312 can be operated.
  • the device may also include a light mouse, which is a touch sensitive surface that does not display a visual output, or an extension of a touch sensitive surface formed by the touch screen.
  • the RF circuit 305 is mainly configured to establish communication between the mobile phone and the wireless network (ie, the network side), and implement data reception and transmission between the mobile phone and the wireless network. For example, sending and receiving short messages, emails, and the like.
  • The RF circuit 305 receives and transmits RF signals, also referred to as electromagnetic signals; the RF circuit 305 converts electrical signals into electromagnetic signals or converts electromagnetic signals into electrical signals, and communicates with a communication network and other devices through the electromagnetic signals.
  • RF circuitry 305 may include known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, CODER-DECoder (CODEC) chipset, Subscriber Identity Module (SIM), etc.
  • the audio circuit 306 is primarily configured to receive audio data from the peripheral interface 303, convert the audio data into an electrical signal, and transmit the electrical signal to the speaker 311.
  • the speaker 311 is arranged to restore the voice signal received by the handset from the wireless network via the RF circuit 305 to sound and play the sound to the user.
  • the power management chip 308 is configured to provide power and power management for the hardware connected to the CPU 302, the I/O subsystem, and the peripheral interface.
  • the terminal device provided by the embodiment of the present application can optimize the photographing operation of the user.
  • The photographing interaction device, the storage medium, and the terminal device provided in the foregoing embodiments can perform the photographing interaction method provided by any embodiment of the present application, and have the corresponding functional modules and beneficial effects for performing the method.
  • For technical details not described in detail in the foregoing embodiments, refer to the photographing interaction method provided by any embodiment of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided in the embodiments of the present application are a photographing interaction method and apparatus, a storage medium and a terminal device. The method comprises: in a case where a recognition camera captures an imaging part of a user, acquiring pose three-dimensional data of the imaging part by means of the recognition camera; recognizing the pose three-dimensional data by means of a pre-set imaging evaluation model so as to determine imaging pose evaluation information of the imaging part; and determining the corresponding photographing prompt content according to the imaging pose evaluation information.

Description

Photographing interaction method and apparatus, storage medium and terminal device
The present disclosure claims priority to Chinese Patent Application No. 201810469542.1, filed with the China Patent Office on May 16, 2018, the entire contents of which are incorporated herein by reference.
Technical field
The embodiments of the present application relate to the technical field of terminal devices, for example, to a photographing interaction method and apparatus, a storage medium, and a terminal device.
Background
With the development of the photographing technology of terminal devices, most users own a terminal device that can take pictures. Unlike a traditional camera, taking a picture with a terminal device is a very easy operation: a traditional camera generally requires parameter settings or lens focal length adjustments, whereas with a terminal device the user only needs to aim at the subject and press the shutter button.
Therefore, as taking pictures with terminal devices becomes more and more convenient, people's attitude towards taking pictures becomes more casual. A user may take multiple photos and then select satisfactory ones from them; it is generally difficult to directly take a satisfactory photo, so the photographing technology of the terminal device needs to be optimized.
Summary
The photographing interaction method and apparatus, storage medium and terminal device provided by the embodiments of the present application can optimize the user's photographing operation.
An embodiment of the present application provides a photographing interaction method, including:
in a case where a recognition camera captures an imaging portion of a user, acquiring posture three-dimensional data of the imaging portion by the recognition camera;
identifying the posture three-dimensional data by a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion; and
determining corresponding shooting prompt content according to the imaging posture evaluation information.
An embodiment of the present application provides a photographing interaction apparatus, including:
a three-dimensional data acquisition module configured to acquire, by a recognition camera, posture three-dimensional data of an imaging portion of a user in a case where the recognition camera captures the imaging portion;
an evaluation determination module configured to identify the posture three-dimensional data by a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion; and
a prompt determination module configured to determine corresponding shooting prompt content according to the imaging posture evaluation information.
An embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the photographing interaction method described in the embodiments of the present application.
An embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and operable on the processor; when the processor executes the computer program, the photographing interaction method described in the embodiments of the present application is implemented.
In the photographing interaction scheme provided in the embodiments of the present application, in a case where a recognition camera captures an imaging portion of a user, the posture three-dimensional data of the imaging portion is acquired by the recognition camera; the posture three-dimensional data is identified by a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion; and corresponding shooting prompt content is determined according to the imaging posture evaluation information. With this technical solution, when an imaging portion of a user is captured, the imaging effect of the imaging portion can be evaluated according to its three-dimensional data, and shooting prompt content can be determined from the evaluation information to prompt the user to take a better photo.
Description of the drawings
FIG. 1 is a schematic flowchart of a photographing interaction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a scenario of a photographing interaction method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of another scenario of a photographing interaction method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of another photographing interaction method according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of another photographing interaction method according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of another photographing interaction method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of initial three-dimensional data according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of another photographing interaction method according to an embodiment of the present application;
FIG. 9 is a structural block diagram of a photographing interaction device according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of another terminal device according to an embodiment of the present application.
具体实施方式Detailed ways
下面结合附图并通过具体实施方式来说明本申请的技术方案。可以理解的 是,此处所描述的具体实施例仅仅用于解释本申请,而非对本申请的限定。另外还需要说明的是,为了便于描述,附图中仅示出了与本申请相关的部分而非全部结构。The technical solutions of the present application will be described below with reference to the accompanying drawings and specific embodiments. It is understood that the specific embodiments described herein are merely illustrative of the application and are not intended to be limiting. In addition, it should be noted that, for the convenience of description, only some but not all of the structures related to the present application are shown in the drawings.
在拍照的情况下,由于拍摄角度和拍摄距离的不同,即使拍摄同一个人,拍出来的效果也有很大区别。普通的用户通过终端设备进行拍照时,可能不会考虑拍照的构图和拍摄角度,所以拍摄的照片的效果往往都不太美观。本申请实施例可以对用户的拍照操作提供优化提示,以使用户可以拍摄到更美观的照片。In the case of taking pictures, the effect of shooting is very different even if the same person is photographed due to the difference in shooting angle and shooting distance. When an ordinary user takes a picture through the terminal device, the composition and shooting angle of the photograph may not be considered, so the effect of the photograph taken is often not very beautiful. The embodiment of the present application can provide an optimization prompt for the user's photographing operation, so that the user can take a more beautiful photo.
图1为本申请实施例提供的一种拍照交互方法的流程示意图,该方法可以由拍照交互装置执行,其中该装置可以由软件和/或硬件实现,一般可以集成在终端设备中,也可以集成在其他安装有操作系统的设备中。如图1所示,该方法包括如下步骤。FIG. 1 is a schematic flowchart of a method for photographing interaction according to an embodiment of the present disclosure. The method may be implemented by a photo interaction device, where the device may be implemented by software and/or hardware, and may be integrated into a terminal device or integrated. In other devices with an operating system installed. As shown in FIG. 1, the method includes the following steps.
S1100、在识别摄像头拍摄到用户的成像部位的情况下,通过所述识别摄像头获取所述成像部位的姿态三维数据。In S1100, when the recognition camera captures an imaging portion of the user, the three-dimensional data of the posture of the imaging portion is acquired by the recognition camera.
本实施例中,所述成像部位为通过识别摄像头进行拍摄,所拍摄的画面中包括的用户的身体部位。示例性地,如果用户拍摄的照片是半身像,则所述成像部位包括用户的腰部以上的身体部位。In this embodiment, the image forming portion is a body part of a user included in the captured image by taking a picture by the recognition camera. Illustratively, if the photo taken by the user is a bust, the imaged portion includes a body part above the waist of the user.
所述识别摄像头可以是终端设备上的摄像头,可以是终端设备的前置摄像头,和/或后置摄像头。示例性地,终端设备中一般都设置有至少一个摄像头,一般都包括前置摄像头和后置摄像头。如果被拍摄的用户是终端设备的当前使用者,所述识别摄像头可以是终端设备的前置摄像头,以便用户可以通过终端设备的屏幕了解到拍摄的画面。再如,被拍摄的用户不是终端设备的当前使用者,而是其他用户,则所述识别摄像头可以是终端设备的后置摄像头;当前终端设备的当前使用者通过终端设备来拍摄其他用户的成像部位。The identification camera may be a camera on the terminal device, may be a front camera of the terminal device, and/or a rear camera. Illustratively, at least one camera is generally provided in the terminal device, and generally includes a front camera and a rear camera. If the user being photographed is the current user of the terminal device, the recognition camera may be a front camera of the terminal device, so that the user can know the captured picture through the screen of the terminal device. For another example, if the user being photographed is not the current user of the terminal device, but other users, the identification camera may be a rear camera of the terminal device; the current user of the current terminal device uses the terminal device to capture images of other users. Part.
传统的相机进行拍摄所获取的图像一般是二维数据,即以行列矩阵规则进行排列的像素点的色彩值(红绿蓝(Red Green Blue,RGB)值)或灰度值的集合。相比二维数据,三维数据中还包括拍摄到的成像部位的深度信息,即拍摄的用户的身体部位上的不同空间点与摄像头的距离,所以三维数据可以表示所拍摄的物体的空间信息。所述识别摄像头可以是带有距离传感器的摄像头,距离传感器可以获取所拍摄的物体上的不同空间点与摄像头的距离,如此可以获取到拍摄的成像部位的三维数据。The image acquired by a conventional camera is generally two-dimensional data, that is, a color value (Red Green Blue (RGB) value) or a set of gray values of pixels arranged in a matrix of rows and columns. Compared with the two-dimensional data, the three-dimensional data also includes the depth information of the captured imaging portion, that is, the distance between the different spatial points on the body part of the captured user and the camera, so the three-dimensional data can represent the spatial information of the captured object. The recognition camera may be a camera with a distance sensor, and the distance sensor may acquire the distance between the different spatial points on the captured object and the camera, so that the three-dimensional data of the captured imaging portion can be acquired.
所述姿态三维数据包括:所述识别摄像头拍摄的所述成像部位的三维数据;还可以包括所述成像部位在拍摄画面中的位置数据。The posture three-dimensional data includes: three-dimensional data of the imaging portion captured by the recognition camera; and position data of the imaging portion in the captured image.
所述姿态三维数据可以是成像部位做静止的姿态时,所获取的一组三维数据。姿态三维数据还可以是成像部位做出动态的姿态时,所获取的多组三维数据。The posture three-dimensional data may be a set of three-dimensional data acquired when the imaging portion is in a stationary posture. The attitude three-dimensional data may also be a plurality of sets of three-dimensional data acquired when the imaging part makes a dynamic posture.
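By way of illustration only, the posture three-dimensional data described above could be organized as in the following Python sketch; the class names and fields are assumptions made for this example and are not defined in this application.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PostureFrame:
    """One set of 3-D data captured while the imaging portion holds a pose."""
    # (x, y, z) coordinates of sampled spatial points; z is the distance to the camera
    points: List[Tuple[float, float, float]]
    # position of the imaging portion inside the preview frame: (left, top, width, height)
    frame_position: Tuple[int, int, int, int]

@dataclass
class PostureData:
    """Static poses hold a single frame; dynamic poses hold several frames."""
    frames: List[PostureFrame] = field(default_factory=list)

    @property
    def is_dynamic(self) -> bool:
        return len(self.frames) > 1
```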
In an embodiment, the imaging portion includes the user's head. Correspondingly, acquiring the posture three-dimensional data of the imaging portion by the recognition camera includes: determining a posture feature part of the head, and acquiring the posture three-dimensional data of the posture feature part of the head by the recognition camera.
In an embodiment, if the imaging portion is the user's head, it indicates that the user is taking a selfie with the terminal device, or that an ID head-shot photo of the user is being taken. Illustratively, if the user is taking a selfie, the shooting angle is limited because the camera is close to the user, so a suitable angle must be chosen to obtain a good-looking selfie. As another example, if an ID head-shot photo of the user is being taken, the requirements on the shooting angle are stricter; if the photo does not meet the standard, the captured ID photo cannot be used.
由于用户的头部的姿态很难通过整体来判断,需要根据头部的姿态特征部位来进行判断。所述姿态特征部位可以是用于确定用户的头部姿态的特征部位。示例性地,所述姿态特征部位包括两只耳朵,可以根据两只耳朵的位置确定头部的姿态;所述姿态特征部位还可以包括脸部的五官位置,例如眼睛、鼻子和嘴巴的位置等,根据五官位置可以确定头部的姿态。Since the posture of the user's head is difficult to judge by the whole, it is necessary to judge based on the posture feature portion of the head. The posture feature portion may be a feature portion for determining a posture of a user's head. Illustratively, the posture feature portion includes two ears, and the posture of the head may be determined according to the positions of the two ears; the posture feature portion may further include a facial position of the face, such as the position of the eyes, the nose, and the mouth, etc. According to the position of the facial features, the posture of the head can be determined.
所以在成像部位为用户的头部的情况下,通过姿态特征部位的姿态三维数据可以准确确定头部的具体姿态,进一步可以提高成像姿态评价的准确性。Therefore, in the case where the imaging part is the user's head, the three-dimensional data of the posture characteristic part can accurately determine the specific posture of the head, and the accuracy of the imaging posture evaluation can be further improved.
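As a non-limiting illustration of how a head posture might be estimated from the three-dimensional data of such posture feature parts, the following Python sketch derives rough yaw and roll angles from the 3-D positions of the two ears; the geometry, point names and example coordinates are assumptions made for the example, not a method prescribed by this application.

```python
import math

def estimate_head_pose(left_ear, right_ear):
    """Rough head orientation from the 3-D positions of the two ears (x, y, z in metres).

    Yaw is inferred from the depth difference between the ears and roll from their
    vertical offset; a pitch estimate would need additional feature points such as
    the nose or chin.
    """
    span = math.dist(left_ear, right_ear)
    if span == 0:
        raise ValueError("ear positions coincide")
    # Yaw from the depth difference between the ears; the sign tells which side is turned away.
    yaw = math.degrees(math.asin((left_ear[2] - right_ear[2]) / span))
    # Roll from the ears' vertical offset in the image plane.
    roll = math.degrees(math.atan2(right_ear[1] - left_ear[1], right_ear[0] - left_ear[0]))
    return yaw, roll

# Example: the left ear is about 4 cm farther from the camera, so the head is turned away on that side.
print(estimate_head_pose((-0.07, 0.00, 0.52), (0.07, 0.00, 0.48)))
```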
S1110、通过预设的成像评价模型对所述姿态三维数据进行识别,以确定所述成像部位的成像姿态评价信息。S1110: Identify the posture three-dimensional data by using a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion.
The posture three-dimensional data is actually a collection of one or more sets of data; it is further necessary to analyze this data to recognize which specific posture of the imaging portion the posture three-dimensional data actually corresponds to.
The imaging posture evaluation information includes evaluation information of the picture, captured by the recognition camera, of the user's imaging portion. The evaluation information may be the difference between the captured picture and an imaging standard, and the imaging standard may be preset by the system. Illustratively, if an ID head-shot photo of the user is being taken, the imaging standard of the ID head-shot photo includes that the head is located at the centre of the picture and that both eyes look straight at the camera.
所述成像评价模型可以是已经训练好的用于根据成像部位的姿态三维数据确定成像姿态评价信息的识别系统,所述成像评价模型可以是预存在终端设备中,或预存在后台服务器中。在需要对姿态三维数据进行识别时,调用预存的成像评价模型来识别姿态三维数据,以确定拍摄的身体部位的成像姿态评价信息。The imaging evaluation model may be an identification system that has been trained to determine imaging posture evaluation information based on the posture three-dimensional data of the imaging site, and the imaging evaluation model may be pre-stored in the terminal device or pre-stored in the background server. When the three-dimensional data of the posture needs to be recognized, the pre-stored imaging evaluation model is called to identify the three-dimensional image of the posture to determine the imaging posture evaluation information of the photographed body part.
S1120、根据所述成像姿态评价信息确定对应的拍摄提示内容。S1120: Determine corresponding shooting prompt content according to the imaging posture evaluation information.
在一实施例中,可以是根据预设映射表确定姿态评价信息对应的拍摄提示内容。In an embodiment, the shooting prompt content corresponding to the posture evaluation information may be determined according to the preset mapping table.
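A minimal sketch of such a preset mapping table is shown below; the keys, prompt strings and sound file names are hypothetical placeholders, not values defined in this application.

```python
# Hypothetical preset mapping table from evaluation results to shooting prompt content.
PROMPT_TABLE = {
    "head_too_low":    {"text": "Lower the shooting angle or raise your head", "sound": "head_low.wav"},
    "head_off_centre": {"text": "Move the phone so the head sits in the centre", "sound": "centre.wav"},
    "pose_ok":         {"text": "Hold still, the pose matches the standard",    "sound": "ok.wav"},
}

def prompt_for(evaluation_key: str) -> dict:
    """Look up the shooting prompt content for an evaluation result."""
    return PROMPT_TABLE.get(evaluation_key, PROMPT_TABLE["pose_ok"])

print(prompt_for("head_too_low")["text"])
```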
The shooting prompt content may be prompt content used to present to the user the evaluation information of the currently captured picture; the shooting prompt content may also be prompt content used to prompt the user to adjust the shooting operation, where the shooting operation may be an operation on the recognition camera, or a posture adjustment of the user being photographed.
示例性地,所述拍摄提示内容可以是通过终端设备的屏幕输出的显示数据,可以是文字数据、图片数据或动画数据等;拍摄提示内容还可以是通过终端设备的扬声器输出的声音数据。拍摄提示内容可以根据系统预设或用户的选择进行设置,还可以是根据实际应用进行设置,在此不作限定。Exemplarily, the shooting prompt content may be display data output through a screen of the terminal device, and may be text data, picture data or animation data, etc.; the shooting prompt content may also be sound data output through a speaker of the terminal device. The content of the shooting prompt can be set according to the system preset or the user's selection, or can be set according to the actual application, which is not limited herein.
用户可以根据拍摄提示内容,进行相应的调整,以使识别摄像头能够拍摄到更好的照片。The user can make adjustments according to the content of the shooting prompt, so that the recognition camera can take a better photo.
In an embodiment, the imaging posture evaluation information includes an error value between the imaging portion and an imaging standard; the error value includes a posture error value and/or a composition error value.
Correspondingly, determining the corresponding shooting prompt content according to the imaging posture evaluation information includes: determining the shooting prompt content according to the error value, where the shooting prompt content is used to prompt the user to make a corresponding movement so as to reduce the error value.
In an embodiment, prompting the user to make a corresponding movement may include: the user moving the recognition camera and/or the user adjusting the imaging portion. After the user makes the corresponding movement, the error value between the imaging portion and the imaging standard decreases, and the user can obtain a better photo.
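The following Python sketch illustrates one way a composition error value could be turned into a movement prompt, assuming the error is measured as the offset of the detected head centre from the frame centre; the tolerance and the prompt wording are assumptions made for the example.

```python
def composition_prompt(head_center, frame_size, tolerance=0.05):
    """Map a composition error to shooting prompt text.

    head_center: (x, y) pixel position of the detected head centre.
    frame_size:  (width, height) of the preview frame.
    """
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    dx = (head_center[0] - cx) / frame_size[0]   # normalised horizontal error
    dy = (head_center[1] - cy) / frame_size[1]   # normalised vertical error

    prompts = []
    if dy > tolerance:
        prompts.append("Lower the camera angle or raise the head")
    elif dy < -tolerance:
        prompts.append("Raise the camera angle or lower the head")
    if dx > tolerance:
        prompts.append("Move the camera to the right or the subject to the left")
    elif dx < -tolerance:
        prompts.append("Move the camera to the left or the subject to the right")
    return prompts or ["Composition matches the standard"]

# Head sitting in the lower half of a 1080x1920 preview, similar to the FIG. 2 scenario below.
print(composition_prompt((540, 1400), (1080, 1920)))
```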
Illustratively, as shown in FIG. 2, an ID head-shot photo of the user is currently being taken, and the user is photographed by the recognition camera 113. The imaging posture evaluation information obtained from the acquired posture three-dimensional data of the imaging portion indicates that the composition error value between the imaging portion and the imaging standard is large: in the currently captured picture the lens angle is too high, so the head is located in the lower-middle part of the picture, whereas the imaging standard for an ID head-shot photo requires the head to be located at the centre of the picture. Therefore, in FIG. 2 the composition error value between the imaging portion and the imaging standard is large.
The correspondingly determined shooting prompt content may include: prompting the user to lower the shooting angle of the recognition camera, or prompting the user to raise the height of the head. As shown in FIG. 3, after the user adjusts the photographing operation according to the shooting prompt content, the head of the user captured by the recognition camera is located at the centre of the picture and is closer to the imaging standard. The recognition camera then photographs the adjusted imaging portion and can generate a better ID head-shot photo.
As another example, if the user was not looking straight at the recognition camera but was turned towards the right, the imaging posture evaluation information obtained from the acquired posture three-dimensional data may indicate that the posture error value between the imaging portion and the imaging standard is large. The shooting prompt content may then prompt the user to adjust the posture and/or to adjust the position of the recognition camera.
In the photographing interaction method provided in the embodiments of the present application, when the recognition camera captures the imaging portion of the user, the posture three-dimensional data of the imaging portion is acquired by the recognition camera; the posture three-dimensional data is recognized by a preset imaging evaluation model to determine the imaging posture evaluation information of the imaging portion; and the corresponding shooting prompt content is determined according to the imaging posture evaluation information. By adopting the above technical solution, when the imaging portion of the user is captured, the imaging effect of the imaging portion can be evaluated according to the three-dimensional data of the imaging portion, and the shooting prompt content can be determined according to the evaluation information to help the user take a better photo.
图4为本申请实施例提供的另一种拍照交互方法的流程示意图,在上述实施例所提供的技术方案的基础上,对根据所述成像姿态评价信息确定对应的拍摄提示内容进行了说明。在一实施例中,如图4所示,该方法包括如下步骤。FIG. 4 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present invention. Based on the technical solution provided by the foregoing embodiment, a description is given of determining a corresponding photographing prompt content according to the imaging posture evaluation information. In an embodiment, as shown in FIG. 4, the method includes the following steps.
S1200. When the recognition camera captures the user's head, determine a posture feature part of the head, and acquire, by the recognition camera, the posture three-dimensional data of the posture feature part of the head; and acquire illumination information of the head, and determine a light source position according to the illumination information.
其中,确定头部的姿态特征部位,并通过所述识别摄像头获取所述头部的姿态特征部位的姿态三维数据的具体实施方式可以参考上文的相关描述,在此不再赘述。For a specific implementation manner of determining the posture feature of the head and acquiring the three-dimensional data of the posture of the posture of the head by the identification camera, reference may be made to the related description above, and details are not described herein again.
In an embodiment, the illumination information of the head is the distribution information of the illumination values of the spatial points of the head included in the picture captured by the recognition camera; the illumination values may be determined according to the pixel values of the captured picture. Illustratively, if the average illumination value of the head in the captured picture is lower than the average illumination value of the whole picture, it may be determined that the current shooting angle is backlit, and the head of the user in the captured picture may be too dark. It is therefore necessary to determine the light source position, so that the user's face can be turned towards the light source to obtain a clear picture.
The light source position may be determined according to the distribution information of the illumination values of the spatial points of the head. Illustratively, if the average illumination value of the left side of the head is greater than that of the right side, it may be determined that the light source is located on the left side of the head. After the light source position is determined, the user can be further assisted in capturing a clearer picture.
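A minimal sketch of this left/right illumination comparison and backlight check is given below; the margin values, the grayscale input and the synthetic test frame are assumptions made for the example.

```python
import numpy as np

def light_source_side(gray_frame, head_box, backlight_margin=20):
    """Estimate the light-source side from the head region's brightness distribution.

    gray_frame: 2-D numpy array of pixel intensities (0-255).
    head_box:   (left, top, width, height) of the head inside the frame.
    Returns (side, backlit) where side is 'left', 'right' or 'front'.
    """
    x, y, w, h = head_box
    head = gray_frame[y:y + h, x:x + w]
    left_mean = head[:, : w // 2].mean()
    right_mean = head[:, w // 2:].mean()
    backlit = head.mean() + backlight_margin < gray_frame.mean()

    if left_mean > right_mean + 5:
        side = "left"
    elif right_mean > left_mean + 5:
        side = "right"
    else:
        side = "front"
    return side, backlit

frame = np.clip(np.random.normal(150, 10, (480, 640)), 0, 255)
frame[100:300, 200:400] *= 0.6          # darker head region, so the shot is likely backlit
print(light_source_side(frame, (200, 100, 200, 200)))
```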
S1210、通过预设的成像评价模型对所述姿态三维数据进行识别,以确定所述成像部位的成像姿态评价信息。S1210: Identify the posture three-dimensional data by using a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion.
具体实施方式可以参考上文的相关描述,在此不再赘述。For details, refer to the related description above, and details are not described herein again.
S1220、根据所述成像姿态评价信息以及光源位置确定对应的拍摄提示内容。S1220. Determine corresponding shooting prompt content according to the imaging posture evaluation information and the light source position.
The posture evaluation information includes an evaluation of the posture of the imaging portion in the captured picture, and the light source position determines how clearly the imaging portion appears in the captured picture. Therefore, a comprehensive judgment of the picture captured by the recognition camera can be made according to the imaging posture evaluation information and the light source position, which correspondingly improves the accuracy of the evaluation of the captured picture.
Illustratively, if an ID head-shot photo of the user is currently being taken, the obtained imaging posture evaluation information may indicate that in the currently captured picture the lens angle is too high, so that the face is located in the lower-middle part of the picture and deviates from the imaging standard. At the same time, the illumination of the user's head is uneven: the average illumination on the left side of the head is higher than that on the right side, which also does not meet the requirement of an ID head-shot photo for even illumination of the face. Determining the corresponding shooting prompt content according to both the imaging posture evaluation information and the light source position can further improve the accuracy with which the user adjusts the posture or the camera.
本申请实施例通过根据成像姿态评价信息以及光源位置确定对应的拍摄提示内容,可以提高对识别摄像头的拍摄画面评价的准确性,进一步提高拍摄提示内容的准确性。In the embodiment of the present application, by determining the corresponding shooting prompt content according to the imaging posture evaluation information and the light source position, the accuracy of the evaluation of the captured image of the recognition camera can be improved, and the accuracy of the shooting prompt content can be further improved.
图5为本申请实施例提供的另一种拍照交互方法的流程示意图,在上述任意实施例所提供的技术方案的基础上,对通过预设的成像评价模型对所述姿态三维数据进行识别,以确定所述成像部位的成像姿态评价信息的操作进行了说明。在一实施例中,如图5所示,该方法包括如下步骤。FIG. 5 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present disclosure. On the basis of the technical solution provided by any of the foregoing embodiments, the three-dimensional data of the gesture is identified by using a preset imaging evaluation model. The operation of determining the imaging posture evaluation information of the imaging site has been described. In an embodiment, as shown in FIG. 5, the method includes the following steps.
S1300、在识别摄像头拍摄到用户的成像部位的情况下,通过所述识别摄像头获取所述成像部位的姿态三维数据。S1300. Acquire the three-dimensional posture data of the imaging part by the recognition camera when the recognition camera captures the imaging part of the user.
具体实施方式可以参考上文的相关描述,在此不再赘述。For details, refer to the related description above, and details are not described herein again.
S1310、通过预设的成像评价模型对所述姿态三维数据进行识别,以确定所述成像部位的成像类别,以及根据所述成像类别确定所述成像部位的成像姿态评价信息。S1310: Identify the posture three-dimensional data by using a preset imaging evaluation model to determine an imaging category of the imaging portion, and determine imaging posture evaluation information of the imaging portion according to the imaging category.
In this embodiment, there are multiple types of photos; for example, portrait photos include head-shot photos, half-length photos, full-body photos, group photos and so on, and the imaging standards of different categories of photos are different. Illustratively, a head-shot photo has stricter requirements on the angle and posture of the head, while half-length and full-body photos have stricter requirements on picture composition. Therefore, when the user takes a photo through the recognition camera, the imaging category may first be determined according to the posture three-dimensional data of the imaging portion, and the corresponding imaging standard may then be determined according to the imaging category, which can improve the accuracy of the imaging posture evaluation information of the imaging portion.
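As shown in the sketch below, category-specific imaging standards could simply be looked up once the model has determined the imaging category; the categories and standard values listed are hypothetical examples, not values from this application.

```python
# Hypothetical imaging standards keyed by the category returned by the evaluation model.
IMAGING_STANDARDS = {
    "id_photo":  {"head_center": (0.5, 0.5), "max_yaw_deg": 3,  "max_roll_deg": 3},
    "selfie":    {"head_center": (0.5, 0.4), "max_yaw_deg": 15, "max_roll_deg": 10},
    "half_body": {"head_center": (0.5, 0.3), "max_yaw_deg": 30, "max_roll_deg": 15},
}

def standard_for(category: str) -> dict:
    """Return the imaging standard used to score a shot of the given category."""
    return IMAGING_STANDARDS.get(category, IMAGING_STANDARDS["half_body"])

print(standard_for("id_photo"))
```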
S1320、根据所述成像姿态评价信息确定对应的拍摄提示内容。S1320. Determine corresponding shooting prompt content according to the imaging posture evaluation information.
具体实施方式可以参考上文的相关描述,在此不再赘述。For details, refer to the related description above, and details are not described herein again.
In the embodiment of the present application, the posture three-dimensional data is recognized by the preset imaging evaluation model to determine the imaging category of the imaging portion, and the imaging posture evaluation information of the imaging portion is determined according to the imaging category. By first determining the imaging category and then selecting the imaging standard corresponding to that category, the accuracy of the imaging posture evaluation information is improved, which can further improve the efficiency of the user's photographing operation.
FIG. 6 is a schematic flowchart of another photographing interaction method according to an embodiment of the present application. On the basis of the technical solutions provided by any of the above embodiments, the operation of acquiring the posture three-dimensional data of the imaging portion by the recognition camera is described. In an embodiment, as shown in FIG. 6, the method includes the following steps.
S1400、在识别摄像头拍摄到用户的成像部位的情况下,通过所述识别摄像头获取所述成像部位的部位深度数据,以及部位红外数据。S1400. When the recognition camera captures an imaged portion of the user, the part depth data of the imaged portion and the part infrared data are acquired by the recognition camera.
The recognition camera is a three-dimensional (3D) camera. The three-dimensional camera includes multiple hardware structures, which may include an infrared sensor, a distance sensor, a lens and the like.
所述部位深度数据为成像部位所包括的空间点距离识别摄像头的距离值的集合;可以通过识别摄像头中的距离传感器获取成像部位的部位深度数据。The part depth data is a set of distance values of the spatial point included in the imaging part from the recognition camera; the part depth data of the imaging part can be acquired by identifying the distance sensor in the camera.
所述部位红外数据为成像部位所包括的空间点反射的红外数据的集合。其中,三维摄像头在拍摄的情况下,红外传感器发射红外信号至成像部位,成像部位会对红外信息进行反射,红外传感器根据接收到的反射的红外数据可以实现成像部位的成像。The portion infrared data is a collection of infrared data reflected by a spatial point included in the imaging site. Wherein, in the case of the three-dimensional camera, the infrared sensor emits an infrared signal to the imaging site, and the imaging portion reflects the infrared information, and the infrared sensor can image the imaging portion according to the received infrared data.
S1410、根据所述部位深度数据确定所述成像部位的初始三维数据。S1410. Determine initial three-dimensional data of the imaging part according to the part depth data.
The part depth data includes the distance values of the spatial points included in the imaging portion, so the initial three-dimensional data of the imaging portion can be determined according to the part depth data. Illustratively, as shown in FIG. 7, points a, b, c and d are four spatial points, and the X, Y and Z axes represent the space, where the Z axis represents the depth data of a spatial point and the X and Y axes represent the planar position coordinates of the spatial point. Point a has the largest depth data, that is, point a is farthest from the recognition camera. It can be seen from FIG. 7 that the planar coordinates and depth data of the four spatial points form a three-dimensional cone, so the initial three-dimensional data can be determined from the part depth data of multiple spatial points together with their planar coordinates.
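A minimal sketch of this conversion from planar coordinates and depth data to initial three-dimensional points is given below, assuming a pinhole camera model with placeholder intrinsic parameters (the focal lengths and principal point are not calibration values from this application).

```python
import numpy as np

def depth_to_points(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Back-project a depth map (metres per pixel) into an N x 3 point cloud."""
    h, w = depth.shape
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Four pixels at different depths, analogous to points a, b, c and d in FIG. 7.
demo_depth = np.array([[0.9, 0.6],
                       [0.5, 0.4]])
print(depth_to_points(demo_depth))
```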
但是如果成像部位的某些细节处被遮挡或者发生数据丢失的情况,则初始三维数据中对应的细节位置会出现数据缺失的问题,所以进一步需要根据部位红外数据对初始三维数据进行校正。However, if some details of the imaged portion are occluded or data loss occurs, the corresponding detail position in the initial three-dimensional data may cause data loss, so it is further necessary to correct the initial three-dimensional data according to the part infrared data.
S1420、根据所述部位红外数据对所述初始三维数据进行校正,以得到所述成像部位的姿态三维数据。S1420: Correct the initial three-dimensional data according to the part infrared data to obtain three-dimensional posture data of the imaging part.
In this embodiment, for the spatial points included in the imaging portion, the depth data and the infrared data of each spatial point correspond one to one. For a spatial point whose depth data is missing, the infrared data corresponding to that spatial point can be measured and compared against the overall initial three-dimensional data, so that the features of the missing spatial point can be completed. An infrared signal is an electromagnetic wave that the human eye cannot see, but infrared light can still propagate at night or when the environment is dark and there is no visible light; therefore, relatively clear imaging can still be generated from the infrared data in a dark environment, and the initial three-dimensional data can accordingly be corrected according to the part infrared data.
In an embodiment, a fitting relationship function may be established according to the depth data and infrared data of adjacent spatial points, and the depth data corresponding to a missing spatial point may be calculated according to the fitting relationship function and the part infrared data of the missing spatial point, thereby obtaining the corrected posture three-dimensional data. Here, a missing spatial point is a spatial point whose depth data is missing, and an adjacent spatial point is a spatial point adjacent to the missing spatial point.
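The following sketch illustrates the correction idea under the assumption that a simple straight-line fit is used as the fitting relationship function between the infrared values and depth values of neighbouring points; the actual fitting function is not specified by this application.

```python
import numpy as np

def fill_missing_depth(depth, infrared):
    """Fill NaN depth samples using a linear fit between neighbouring depth and IR values.

    depth, infrared: 1-D arrays of the same length; missing depth entries are NaN.
    """
    depth = depth.astype(float).copy()
    known = ~np.isnan(depth)
    if known.sum() < 2:
        raise ValueError("need at least two valid neighbours to fit")
    slope, intercept = np.polyfit(infrared[known], depth[known], deg=1)
    depth[~known] = slope * infrared[~known] + intercept
    return depth

ir = np.array([120., 130., 140., 150., 160.])
d = np.array([0.50, 0.48, np.nan, 0.44, 0.42])   # one occluded sample
print(fill_missing_depth(d, ir))                  # the gap is filled with about 0.46
```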
S1430、通过预设的成像评价模型对所述姿态三维数据进行识别,以确定所述成像部位的成像姿态评价信息。S1430: Identify the posture three-dimensional data by using a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion.
S1440、根据所述成像姿态评价信息确定对应的拍摄提示内容。S1440. Determine corresponding shooting prompt content according to the imaging posture evaluation information.
上述操作的具体实施方式可以参考上文的相关描述,在此不再赘述。For specific implementations of the foregoing operations, reference may be made to the related description above, and details are not described herein again.
Although an image of the imaging portion can be captured by an ordinary camera, that is, two-dimensional posture data of the imaging portion can be acquired, and the posture of the imaging portion can then be recognized by image processing and recognition techniques, two-dimensional data only contains planar image data and places higher requirements on lighting. If the user poses in a dark environment, accurate imaging posture evaluation information may not be recognizable from the acquired planar image data, so the accuracy of two-dimensional data is lower.
In the embodiment of the present application, the initial three-dimensional data of the imaging portion is determined according to the part depth data, and the initial three-dimensional data is corrected according to the part infrared data to obtain the posture three-dimensional data of the imaging portion. Even when recognition is performed in a dimly lit place, the initial three-dimensional data can still be corrected by the part infrared data to obtain complete posture three-dimensional data, which can improve the accuracy of recognizing the imaging posture evaluation information.
图8为本申请实施例提供的另一种拍照交互方法的流程示意图,在上述任意实施例所提供的技术方案的基础上,如图8所示,该方法包括如下步骤。FIG. 8 is a schematic flowchart of another method for photographing interaction according to an embodiment of the present disclosure. On the basis of the technical solution provided by any of the foregoing embodiments, as shown in FIG. 8 , the method includes the following steps.
S1500、将预设样本数据输入至预设分类器中进行训练,得到成像评价模型。S1500: Input preset sample data into a preset classifier for training, and obtain an imaging evaluation model.
In this embodiment, the imaging evaluation model is used to determine the corresponding imaging posture evaluation information according to the captured posture three-dimensional data of the imaging portion; the preset sample data includes sample three-dimensional data of imaging portions and the corresponding sample imaging posture evaluation information.
在一实施例中,所述预设样本数据可以包括多个不同的样本数据,不同的样本数据对应为获取的不同成像部位所对应的样本三维数据和对应的样本成像姿态评价信息;示例性地,如果成像部位是头部,预设样本数据可以是不同的证件头像照的样本三维数据以及对应的样本成像姿态评价信息,样本三维数据中包括成像姿态评价信息不同的样本三维数据。In an embodiment, the preset sample data may include a plurality of different sample data, and the different sample data corresponds to the acquired sample three-dimensional data and the corresponding sample imaging posture evaluation information; If the imaging part is the head, the preset sample data may be sample three-dimensional data of different document avatar photos and corresponding sample imaging posture evaluation information, and the sample three-dimensional data includes sample three-dimensional data with different imaging posture evaluation information.
If the posture three-dimensional data is recognized by the preset imaging evaluation model to determine the imaging category of the imaging portion, and the imaging posture evaluation information of the imaging portion is determined according to the imaging category, then the corresponding preset sample data includes the sample three-dimensional data of imaging portions, the corresponding imaging categories and the corresponding sample imaging posture evaluation information. Because the sample three-dimensional data of each imaging category differs and the corresponding imaging standards also differ, the posture evaluation information also differs. Therefore, the sample three-dimensional data, the corresponding imaging categories and the corresponding sample imaging posture evaluation information are input, as preset sample data, into the preset classifier for training to obtain the imaging evaluation model, which can then recognize input posture three-dimensional data, determine the corresponding imaging category, and determine the imaging posture evaluation information of the imaging portion according to the imaging category.
The preset classifier may be a neural network. The preset sample data is input into the preset classifier for training, and the preset classifier can extract feature data from the sample three-dimensional data. Because the sample three-dimensional data is labelled with the corresponding sample imaging posture evaluation information and/or imaging category, the corresponding imaging posture evaluation information and/or imaging category can be determined according to the extracted feature data.
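A minimal training sketch is shown below, using scikit-learn's MLPClassifier as an assumed stand-in for the preset classifier and synthetic data in place of the preset sample data; the feature layout, label meanings and network sizes are illustrative assumptions only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the preset sample data: each sample is a flattened set of
# 3-D feature-point coordinates, labelled with an evaluation class.
rng = np.random.default_rng(0)
num_samples, num_points = 200, 16
X = rng.normal(size=(num_samples, num_points * 3))    # sample three-dimensional data
y = rng.integers(0, 3, size=num_samples)              # e.g. 0 = ok, 1 = off-centre, 2 = tilted

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X, y)                                       # the trained model plays the role of the imaging evaluation model

new_posture = rng.normal(size=(1, num_points * 3))
print("predicted evaluation class:", model.predict(new_posture)[0])
```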
通过预设样本数据对预设分类器进行训练后,得到的成像评价模型可以对不同的用户的成像部位做出的姿态进行识别,并确定对应的成像姿态评价信息。After the preset classifier is trained by the preset sample data, the obtained imaging evaluation model can identify the postures made by the imaging parts of different users, and determine the corresponding imaging posture evaluation information.
S1510、在识别摄像头拍摄到用户的成像部位的情况下,通过所述识别摄像头获取所述成像部位的姿态三维数据。S1510. Acquire the three-dimensional posture data of the imaging part by the recognition camera when the recognition camera captures the imaging part of the user.
S1520、通过预设的成像评价模型对所述姿态三维数据进行识别,以确定所述成像部位的成像姿态评价信息。S1520: Identify the posture three-dimensional data by using a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion.
S1530、根据所述成像姿态评价信息确定对应的拍摄提示内容。S1530. Determine corresponding shooting prompt content according to the imaging posture evaluation information.
In the embodiment of the present application, by inputting the preset sample data into the preset classifier for training, the obtained imaging evaluation model can extract features from the posture three-dimensional data and classify it to determine the corresponding imaging posture evaluation information, improving the accuracy of the imaging posture evaluation information.
FIG. 9 is a structural block diagram of a photographing interaction apparatus according to an embodiment of the present application. The apparatus may perform the photographing interaction method. As shown in FIG. 9, the apparatus includes: a three-dimensional data acquisition module 210, configured to acquire, by the recognition camera, the posture three-dimensional data of the imaging portion when the recognition camera captures the imaging portion of the user; an evaluation determination module 211, configured to recognize the posture three-dimensional data by a preset imaging evaluation model to determine the imaging posture evaluation information of the imaging portion; and a prompt determination module 212, configured to determine the corresponding shooting prompt content according to the imaging posture evaluation information.
The photographing interaction apparatus provided in the embodiments of the present application acquires, by the recognition camera, the posture three-dimensional data of the imaging portion when the recognition camera captures the imaging portion of the user; recognizes the posture three-dimensional data by a preset imaging evaluation model to determine the imaging posture evaluation information of the imaging portion; and determines the corresponding shooting prompt content according to the imaging posture evaluation information. By adopting the above technical solution, when the imaging portion of the user is captured, the imaging effect of the imaging portion can be evaluated according to the three-dimensional data of the imaging portion, and the shooting prompt content can be determined according to the evaluation information to help the user take a better photo.
In an embodiment, the imaging portion includes the user's head; correspondingly, the three-dimensional data acquisition module is configured to acquire the posture three-dimensional data of the imaging portion by the recognition camera by: determining a posture feature part of the head, and acquiring the posture three-dimensional data of the posture feature part of the head by the recognition camera.
在一实施例中,上述装置还包括:光源确定模块,设置为在识别摄像头拍摄到用户的成像部位的情况下,获取头部的光照信息,并根据所述光照信息确定光源位置;相应地,提示确定模块是设置为:根据所述成像姿态评价信息以及光源位置确定对应的拍摄提示内容。In an embodiment, the device further includes: a light source determining module, configured to acquire illumination information of the head in the case that the recognition camera captures the imaged portion of the user, and determine a position of the light source according to the illumination information; The prompt determination module is configured to: determine corresponding shooting prompt content according to the imaging posture evaluation information and the light source position.
在一实施例中,评价确定模块是设置为:通过预设的成像评价模型对所述姿态三维数据进行识别,以确定所述成像部位的成像类别,以及根据所述成像类别确定所述成像部位的成像姿态评价信息。In an embodiment, the evaluation determining module is configured to: identify the three-dimensional data of the posture by a preset imaging evaluation model, determine an imaging category of the imaging portion, and determine the imaging portion according to the imaging category. Imaging posture evaluation information.
在一实施例中,所述成像姿态评价信息包括所述成像部位与成像标准的误差值;相应地,提示确定模块是设置为:根据所述误差值确定拍摄提示内容;其中,所述拍摄提示内容用于提示用户进行相应的移动,以降低所述误差值。In an embodiment, the imaging posture evaluation information includes an error value of the imaging portion and an imaging standard; correspondingly, the prompt determination module is configured to: determine a shooting prompt content according to the error value; wherein the shooting prompt The content is used to prompt the user to perform a corresponding movement to reduce the error value.
In an embodiment, the recognition camera is a three-dimensional camera; correspondingly, the three-dimensional data acquisition module is configured to: acquire part depth data and part infrared data of the imaging portion by the recognition camera; determine initial three-dimensional data of the imaging portion according to the part depth data; and correct the initial three-dimensional data according to the part infrared data to obtain the posture three-dimensional data of the imaging portion.
In an embodiment, the apparatus further includes: a training module, configured to input preset sample data into a preset classifier for training before the posture three-dimensional data is recognized by the preset imaging evaluation model, to obtain the imaging evaluation model; where the imaging evaluation model is used to determine the corresponding imaging posture evaluation information according to the captured posture three-dimensional data of the imaging portion, and the preset sample data includes sample three-dimensional data of imaging portions and the corresponding sample imaging posture evaluation information.
在一实施例中,提示确定模块是设置为根据预设映射表确定所述成像姿态评价信息对应的拍摄提示内容。In an embodiment, the prompt determination module is configured to determine the shooting prompt content corresponding to the imaging posture evaluation information according to the preset mapping table.
在一实施例中,拍摄提示内容包括以下之一:文字数据、图片数据、动画数据和声音数据。In an embodiment, the shooting hint content includes one of the following: text data, picture data, animation data, and sound data.
In the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the photographing interaction operations described above, and can also perform related operations in the photographing interaction method provided by any embodiment of the present application.
The embodiments of the present application further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a photographing interaction method. The method includes: when a recognition camera captures an imaging portion of a user, acquiring posture three-dimensional data of the imaging portion by the recognition camera; recognizing the posture three-dimensional data by a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion; and determining corresponding shooting prompt content according to the imaging posture evaluation information.
Storage medium: any type of memory device or storage device. The term "storage medium" is intended to include: an installation medium, such as a Compact Disc Read-Only Memory (CD-ROM), a floppy disk or a tape device; computer system memory or random access memory, such as Dynamic Random Access Memory (DRAM), Double Data Rate Random Access Memory (DDR RAM), Static Random Access Memory (SRAM), Extended Data Output Random Access Memory (EDO RAM), Rambus Random Access Memory (RAM) and the like; non-volatile memory, such as flash memory and magnetic media (for example a hard disk or optical storage); registers or other similar types of memory elements, and so on. The storage medium may further include other types of memory or combinations of multiple types of memory. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (for example, in different computer systems connected through a network). The storage medium may store program instructions executable by one or more processors (for example, program instructions implemented as a computer program).
本申请实施例提供了一种终端设备,该终端设备中可集成本申请实施例提供的拍照交互装置。The embodiment of the present application provides a terminal device, where the camera interaction device provided by the embodiment of the present application can be integrated.
FIG. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application. An embodiment of the present application provides a terminal device 30, including a memory 31, a processor 32 and a computer program stored in the memory 31 and executable on the processor; when the processor executes the computer program, the photographing interaction method described in the above embodiments is implemented. The terminal device provided by the embodiments of the present application can optimize the user's photographing operation.
FIG. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in FIG. 11, the terminal device may include: a housing (not shown in FIG. 11), a touch screen (not shown in FIG. 11), touch keys (not shown in FIG. 11), a memory 301, a central processing unit (CPU) 302 (also referred to as a processor, hereinafter referred to as the CPU), a circuit board (not shown in FIG. 11) and a power supply circuit (not shown in FIG. 11). The circuit board is arranged inside the space enclosed by the housing; the CPU 302 and the memory 301 are arranged on the circuit board; the power supply circuit is configured to supply power to the multiple circuits or devices of the terminal device; the memory 301 is configured to store executable program code; and the CPU 302 runs a computer program corresponding to the executable program code by reading the executable program code stored in the memory 301, to implement the following steps: when the recognition camera captures an imaging portion of the user, acquiring posture three-dimensional data of the imaging portion by the recognition camera; recognizing the posture three-dimensional data by a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion; and determining corresponding shooting prompt content according to the imaging posture evaluation information.
The terminal device further includes: a peripheral interface 303, a radio frequency (RF) circuit 305, an audio circuit 306, a speaker 311, a power management chip 308, an input/output (I/O) subsystem 309, a touch screen 312, other input/control devices 310 and an external port 304; these components communicate through one or more communication buses or signal lines 307.
It should be understood that the terminal device 300 shown in FIG. 11 is only one example of a terminal device, and the terminal device 300 may have more or fewer components than those shown in FIG. 11, may combine two or more components, or may have a different configuration of components. The various components shown in FIG. 11 may be implemented in hardware, software, or a combination of hardware and software including one or more signal-processing and/or application-specific integrated circuits.
下面就本实施例提供的用于实现拍照交互的终端设备进行描述,该终端设备以手机为例。The following describes a terminal device for implementing a photo interaction provided by the embodiment, where the terminal device takes a mobile phone as an example.
The memory 301 may be accessed by the CPU 302, the peripheral interface 303 and the like. The memory 301 may include a high-speed random access memory, and may further include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices or other non-volatile solid-state storage devices.
外设接口303,所述外设接口303可以将设备的输入和输出外设连接到CPU302和存储器301。 Peripheral interface 303, which can connect the input and output peripherals of the device to CPU 302 and memory 301.
The I/O subsystem 309 may connect the input and output peripherals on the device, such as the touch screen 312 and the other input/control devices 310, to the peripheral interface 303. The I/O subsystem 309 may include a display controller 3091 and one or more input controllers 3092 configured to control the other input/control devices 310. In an embodiment, the one or more input controllers 3092 receive electrical signals from, or send electrical signals to, the other input/control devices 310, and the other input/control devices 310 may include physical buttons (push buttons, rocker buttons and the like), a dial, a slide switch, a joystick or a click wheel. In an embodiment, the input controller 3092 may be connected to any of the following: a keyboard, an infrared port, a Universal Serial Bus (USB) interface, or a pointing device such as a mouse.
触摸屏312,所述触摸屏312是用户终端设备与用户之间的输入接口和输出接口,将可视输出显示给用户,可视输出可以包括图形、文本、图标、视频等。The touch screen 312 is an input interface and an output interface between the user terminal device and the user, and displays the visual output to the user. The visual output may include graphics, text, icons, videos, and the like.
The display controller 3091 in the I/O subsystem 309 receives electrical signals from, or sends electrical signals to, the touch screen 312. The touch screen 312 detects contact on the touch screen, and the display controller 3091 converts the detected contact into interaction with user interface objects displayed on the touch screen 312, that is, realizes human-computer interaction. The user interface objects displayed on the touch screen 312 may be icons for running games, icons for connecting to corresponding networks, and so on. In an embodiment, the device may further include an optical mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
RF电路305,主要设置为建立手机与无线网络(即网络侧)的通信,实现手机与无线网络的数据接收和发送。例如收发短信息、电子邮件等。在一实施例中,RF电路305接收并发送RF信号,RF信号也称为电磁信号,RF电路305将电信号转换为电磁信号或将电磁信号转换为电信号,并且通过该电磁信号与通信网络以及其他设备进行通信。RF电路305可以包括用于执行这些功能的已知电路,RF电路305包括但不限于天线系统、RF收发机、一个或多个放大器、调谐器、一个或多个振荡器、数字信号处理器、编译码器(COder-DECoder,CODEC)芯片组、用户标识模块(Subscriber Identity Module,SIM)等等。The RF circuit 305 is mainly configured to establish communication between the mobile phone and the wireless network (ie, the network side), and implement data reception and transmission between the mobile phone and the wireless network. For example, sending and receiving short messages, emails, and the like. In one embodiment, RF circuit 305 receives and transmits an RF signal, also referred to as an electromagnetic signal, and RF circuit 305 converts the electrical signal into an electromagnetic signal or converts the electromagnetic signal into an electrical signal, and through the electromagnetic signal and communication network And other devices to communicate. RF circuitry 305 may include known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, CODER-DECoder (CODEC) chipset, Subscriber Identity Module (SIM), etc.
音频电路306,主要设置为从外设接口303接收音频数据,将该音频数据转换为电信号,并且将该电信号发送给扬声器311。The audio circuit 306 is primarily configured to receive audio data from the peripheral interface 303, convert the audio data into an electrical signal, and transmit the electrical signal to the speaker 311.
扬声器311,设置为将手机通过RF电路305从无线网络接收的语音信号,还原为声音并向用户播放该声音。The speaker 311 is arranged to restore the voice signal received by the handset from the wireless network via the RF circuit 305 to sound and play the sound to the user.
电源管理芯片308,设置为为CPU302、I/O子系统及外设接口所连接的硬件进行供电及电源管理。The power management chip 308 is configured to provide power and power management for the hardware connected to the CPU 302, the I/O subsystem, and the peripheral interface.
本申请实施例提供的终端设备,可以优化用户的拍照操作。The terminal device provided by the embodiment of the present application can optimize the photographing operation of the user.
上述实施例中提供的拍照交互装置、存储介质及终端设备可执行本申请任意实施例所提供的拍照交互方法,具备执行该方法相应的功能模块和有益效果。未在上述实施例中描述的技术细节,可参见本申请任意实施例所提供的拍照交互方法。The photographing interaction device, the storage medium, and the terminal device provided in the foregoing embodiments may perform the photographing interaction method provided by any embodiment of the present application, and have the corresponding functional modules and beneficial effects of executing the method. For the technical details that are not described in the foregoing embodiments, refer to the photographing interaction method provided by any embodiment of the present application.

Claims (20)

  1. 一种拍照交互方法,包括:A photo interaction method includes:
    在识别摄像头拍摄到用户的成像部位的情况下,通过所述识别摄像头获取所述成像部位的姿态三维数据;Obtaining three-dimensional posture data of the imaging portion by the recognition camera in a case where the recognition camera captures an imaging portion of the user;
    通过预设的成像评价模型对所述姿态三维数据进行识别,以确定所述成像部位的成像姿态评价信息;Identifying the posture three-dimensional data by a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion;
    根据所述成像姿态评价信息确定对应的拍摄提示内容。Corresponding photographing prompt content is determined according to the imaging posture evaluation information.
  2. 如权利要求1所述的方法,其中,所述成像部位包括用户的头部;The method of claim 1 wherein said imaged portion comprises a user's head;
    通过所述识别摄像头获取所述成像部位的姿态三维数据包括:Obtaining the three-dimensional data of the posture of the imaging portion by the recognition camera includes:
    确定所述头部的姿态特征部位,并通过所述识别摄像头获取所述头部的姿态特征部位的姿态三维数据。Determining the posture feature portion of the head, and acquiring three-dimensional posture data of the posture feature portion of the head by the recognition camera.
  3. 如权利要求2所述的方法,其中,在识别摄像头拍摄到用户的成像部位的情况下,还包括:The method of claim 2, wherein, in the case that the recognition camera captures the imaged portion of the user, the method further comprises:
    获取所述头部的光照信息,并根据所述光照信息确定光源位置;Obtaining illumination information of the head, and determining a position of the light source according to the illumination information;
    根据所述成像姿态评价信息确定对应的拍摄提示内容包括:Determining the corresponding shooting prompt content according to the imaging posture evaluation information includes:
    根据所述成像姿态评价信息以及所述光源位置确定对应的拍摄提示内容。Corresponding photographing prompt content is determined according to the imaging posture evaluation information and the light source position.
  4. The method of claim 1, wherein recognizing the posture three-dimensional data by a preset imaging evaluation model to determine the imaging posture evaluation information of the imaging portion comprises:
    通过预设的成像评价模型对所述姿态三维数据进行识别,以确定所述成像部位的成像类别,以及根据所述成像类别确定所述成像部位的成像姿态评价信息。The posture three-dimensional data is identified by a preset imaging evaluation model to determine an imaging category of the imaging portion, and imaging posture evaluation information of the imaging portion is determined according to the imaging category.
  5. 如权利要求1所述的方法,其中,所述成像姿态评价信息包括所述成像部位与成像标准的误差值;The method of claim 1, wherein the imaging posture evaluation information comprises an error value of the imaging portion and an imaging standard;
    根据所述成像姿态评价信息确定对应的拍摄提示内容包括:Determining the corresponding shooting prompt content according to the imaging posture evaluation information includes:
    根据所述误差值确定拍摄提示内容;其中,所述拍摄提示内容用于提示用户进行相应的移动,以降低所述误差值。The shooting prompt content is determined according to the error value; wherein the shooting prompt content is used to prompt the user to perform corresponding movement to reduce the error value.
  6. The method of any one of claims 1 to 5, wherein the recognition camera is a three-dimensional camera; and
    acquiring the posture three-dimensional data of the imaging portion through the recognition camera comprises:
    acquiring part depth data and part infrared data of the imaging portion through the recognition camera;
    determining initial three-dimensional data of the imaging portion according to the part depth data; and
    correcting the initial three-dimensional data according to the part infrared data to obtain the posture three-dimensional data of the imaging portion.
  7. The method of any one of claims 1 to 5, before identifying the posture three-dimensional data through the preset imaging evaluation model, further comprising:
    inputting preset sample data into a preset classifier for training to obtain the imaging evaluation model;
    wherein the imaging evaluation model is used to determine corresponding imaging posture evaluation information according to captured posture three-dimensional data of an imaging portion; and
    the preset sample data comprises sample three-dimensional data of the imaging portion and corresponding sample imaging posture evaluation information.
  8. The method of any one of claims 1 to 7, wherein determining the corresponding shooting prompt content according to the imaging posture evaluation information comprises:
    determining the shooting prompt content corresponding to the imaging posture evaluation information according to a preset mapping table.
  9. The method of any one of claims 1 to 8, wherein the shooting prompt content comprises one of the following: text data, picture data, animation data and sound data.
  10. A photographing interaction apparatus, comprising:
    a three-dimensional data acquisition module, configured to acquire, in a case where a recognition camera captures an imaging portion of a user, posture three-dimensional data of the imaging portion through the recognition camera;
    an evaluation determination module, configured to identify the posture three-dimensional data through a preset imaging evaluation model to determine imaging posture evaluation information of the imaging portion; and
    a prompt determination module, configured to determine corresponding shooting prompt content according to the imaging posture evaluation information.
  11. The apparatus of claim 10, wherein the imaging portion comprises a head of the user; and
    the three-dimensional data acquisition module is configured to acquire the posture three-dimensional data of the imaging portion through the recognition camera by: determining a posture feature portion of the head, and acquiring posture three-dimensional data of the posture feature portion of the head through the recognition camera.
  12. The apparatus of claim 11, further comprising: a light source determination module, configured to acquire illumination information of the head in a case where the recognition camera captures the imaging portion of the user, and determine a light source position according to the illumination information;
    wherein the prompt determination module is configured to determine the corresponding shooting prompt content according to the imaging posture evaluation information and the light source position.
  13. The apparatus of claim 10, wherein the evaluation determination module is configured to identify the posture three-dimensional data through the preset imaging evaluation model to determine an imaging category of the imaging portion, and determine the imaging posture evaluation information of the imaging portion according to the imaging category.
  14. The apparatus of claim 10, wherein the imaging posture evaluation information comprises an error value between the imaging portion and an imaging standard; and
    the prompt determination module is configured to determine the shooting prompt content according to the error value, wherein the shooting prompt content is used to prompt the user to make a corresponding movement so as to reduce the error value.
  15. The apparatus of any one of claims 10 to 14, wherein the recognition camera is a three-dimensional camera; and
    the three-dimensional data acquisition module is configured to: acquire part depth data and part infrared data of the imaging portion through the recognition camera; determine initial three-dimensional data of the imaging portion according to the part depth data; and correct the initial three-dimensional data according to the part infrared data to obtain the posture three-dimensional data of the imaging portion.
  16. The apparatus of any one of claims 10 to 14, further comprising: a training module, configured to input preset sample data into a preset classifier for training to obtain the imaging evaluation model before the posture three-dimensional data is identified through the preset imaging evaluation model;
    wherein the imaging evaluation model is used to determine corresponding imaging posture evaluation information according to captured posture three-dimensional data of an imaging portion; and
    the preset sample data comprises sample three-dimensional data of the imaging portion and corresponding sample imaging posture evaluation information.
  17. The apparatus of any one of claims 10 to 16, wherein the prompt determination module is configured to determine the shooting prompt content corresponding to the imaging posture evaluation information according to a preset mapping table.
  18. The apparatus of any one of claims 10 to 17, wherein the shooting prompt content comprises one of the following: text data, picture data, animation data and sound data.
  19. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the photographing interaction method of any one of claims 1 to 9.
  20. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the photographing interaction method of any one of claims 1 to 9.
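
A minimal Python sketch of the depth-and-infrared acquisition recited in claims 6 and 15: part depth data is back-projected into initial three-dimensional data, which is then corrected with part infrared data. The pin-hole intrinsics, the reliability threshold and all function names are illustrative assumptions, not the implementation disclosed in the application.

    import numpy as np

    def depth_to_initial_3d(depth_map, fx, fy, cx, cy):
        # Back-project a metric depth map into an initial point cloud
        # using assumed pin-hole intrinsics (fx, fy, cx, cy).
        h, w = depth_map.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_map
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

    def correct_with_infrared(points, ir_frame, min_ir=30):
        # One possible correction: mark points with a weak infrared
        # response (typically noisy depth at silhouette edges) as invalid.
        corrected = points.copy()
        corrected[ir_frame < min_ir] = np.nan
        return corrected

    # Hypothetical 240x320 depth (metres) and infrared frames.
    depth = np.random.uniform(0.3, 1.2, (240, 320))
    ir = np.random.randint(0, 256, (240, 320))
    initial_3d = depth_to_initial_3d(depth, fx=365.0, fy=365.0, cx=160.0, cy=120.0)
    posture_3d = correct_with_infrared(initial_3d, ir)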
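
A minimal sketch of the training step recited in claims 7 and 16. The claims leave the preset classifier open; a support-vector classifier from scikit-learn is assumed here, and the sample three-dimensional data is assumed to be flattened landmark coordinates paired with integer evaluation labels.

    import numpy as np
    from sklearn.svm import SVC

    # Preset sample data: each row is sample three-dimensional data for an
    # imaging portion (e.g. 68 landmarks x 3 coordinates), paired with
    # sample imaging posture evaluation information (0 = standard, 1 = tilted).
    sample_3d = np.random.rand(200, 68 * 3)
    sample_eval = np.random.randint(0, 2, 200)

    # Train the preset classifier to obtain the imaging evaluation model.
    imaging_evaluation_model = SVC(kernel="rbf", probability=True)
    imaging_evaluation_model.fit(sample_3d, sample_eval)

    # At shooting time, captured posture three-dimensional data is mapped
    # to imaging posture evaluation information.
    captured = np.random.rand(1, 68 * 3)
    evaluation_info = imaging_evaluation_model.predict(captured)[0]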
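
A minimal sketch of how the error value of claims 5 and 14 and the preset mapping table of claims 8 and 17 could drive the shooting prompt content. The imaging standard, the tolerance and the prompt texts are invented for illustration; picture, animation or sound data could be mapped in the same way.

    # Hypothetical imaging standard: target head angles in degrees.
    IMAGING_STANDARD = {"yaw": 0.0, "pitch": 0.0}

    # Preset mapping table from an evaluation result to shooting prompt content.
    PROMPT_TABLE = {
        "turn_right": "Turn your head slightly to the right",
        "turn_left": "Turn your head slightly to the left",
        "chin_down": "Lower your chin a little",
        "chin_up": "Raise your chin a little",
        "ok": "Hold still, the pose looks good",
    }

    def shooting_prompt(measured_pose, tolerance=5.0):
        # Choose a prompt that asks the user to move so that the error
        # value relative to the imaging standard decreases.
        yaw_err = measured_pose["yaw"] - IMAGING_STANDARD["yaw"]
        pitch_err = measured_pose["pitch"] - IMAGING_STANDARD["pitch"]
        if abs(yaw_err) > tolerance:
            # Assumed sign convention: negative yaw = head turned to the user's left.
            return PROMPT_TABLE["turn_right" if yaw_err < 0 else "turn_left"]
        if abs(pitch_err) > tolerance:
            # Assumed sign convention: positive pitch = chin raised.
            return PROMPT_TABLE["chin_down" if pitch_err > 0 else "chin_up"]
        return PROMPT_TABLE["ok"]

    print(shooting_prompt({"yaw": -12.0, "pitch": 2.0}))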
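
A rough sketch of the illumination handling of claims 3 and 12: a light-source direction is estimated from the brightness gradient over the head region and folded into the prompt. The gradient heuristic, the thresholds and the prompt wording are assumptions, and the example ignores mirroring of front-camera previews.

    import numpy as np

    def estimate_light_direction(gray_head, head_mask):
        # Crude estimate: the mean brightness gradient inside the head mask
        # points roughly towards the brighter (lit) side of the image.
        gy, gx = np.gradient(gray_head.astype(float))
        gx = np.where(head_mask, gx, 0.0)
        gy = np.where(head_mask, gy, 0.0)
        direction = np.array([gx.mean(), gy.mean()])
        norm = np.linalg.norm(direction)
        return direction / norm if norm > 1e-6 else np.zeros(2)

    def prompt_with_light(posture_prompt, light_dir):
        # Combine the posture prompt with a light-source hint.
        if light_dir[0] > 0.5:
            return posture_prompt + "; the light is to your right, turn slightly towards it"
        if light_dir[0] < -0.5:
            return posture_prompt + "; the light is to your left, turn slightly towards it"
        return posture_prompt

    # Hypothetical grayscale head crop (brighter towards the right) and mask.
    gray = np.tile(np.linspace(50, 200, 160), (160, 1))
    mask = np.ones((160, 160), dtype=bool)
    print(prompt_with_light("Raise your chin a little", estimate_light_direction(gray, mask)))
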
PCT/CN2019/085459 2018-05-16 2019-05-05 Photographing interaction method and apparatus, storage medium and terminal device WO2019218879A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810469542.1A CN108921815A (en) 2018-05-16 2018-05-16 Photographing interaction method and apparatus, storage medium and terminal device
CN201810469542.1 2018-05-16

Publications (1)

Publication Number Publication Date
WO2019218879A1 true WO2019218879A1 (en) 2019-11-21

Family

ID=64403768

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/085459 WO2019218879A1 (en) 2018-05-16 2019-05-05 Photographing interaction method and apparatus, storage medium and terminal device

Country Status (2)

Country Link
CN (1) CN108921815A (en)
WO (1) WO2019218879A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921815A (en) * 2018-05-16 2018-11-30 Oppo广东移动通信有限公司 Photographing interaction method and apparatus, storage medium and terminal device
CN109600550B (en) * 2018-12-18 2022-05-31 维沃移动通信有限公司 Shooting prompting method and terminal equipment
CN114727002A (en) * 2021-01-05 2022-07-08 北京小米移动软件有限公司 Shooting method and device, terminal equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125396A (en) * 2014-06-24 2014-10-29 小米科技有限责任公司 Image shooting method and device
CN105205462A (en) * 2015-09-18 2015-12-30 北京百度网讯科技有限公司 Shooting prompting method and device
CN105307017A (en) * 2015-11-03 2016-02-03 Tcl集团股份有限公司 Method and device for correcting posture of smart television user
CN106484086A (en) * 2015-09-01 2017-03-08 北京三星通信技术研究有限公司 Method for assisting shooting and shooting device thereof
CN107479801A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 Displaying method of terminal, device and terminal based on user's expression
CN108921815A (en) * 2018-05-16 2018-11-30 Oppo广东移动通信有限公司 Photographing interaction method and apparatus, storage medium and terminal device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104914995B (en) * 2015-05-19 2017-10-17 广东欧珀移动通信有限公司 A kind of photographic method and terminal
CN106203254B (en) * 2016-06-23 2020-02-07 青岛海信移动通信技术股份有限公司 Method and device for adjusting photographing direction
CN106851094A (en) * 2016-12-30 2017-06-13 纳恩博(北京)科技有限公司 A kind of information processing method and device
CN107566529B (en) * 2017-10-18 2020-08-14 维沃移动通信有限公司 Photographing method, mobile terminal and cloud server
CN107580209B (en) * 2017-10-24 2020-04-21 维沃移动通信有限公司 Photographing imaging method and device of mobile terminal

Also Published As

Publication number Publication date
CN108921815A (en) 2018-11-30

Similar Documents

Publication Publication Date Title
WO2019218880A1 (en) Interaction recognition method and apparatus, storage medium, and terminal device
US11138434B2 (en) Electronic device for providing shooting mode based on virtual character and operation method thereof
US20220076000A1 (en) Image Processing Method And Apparatus
KR101979669B1 (en) Method for correcting user’s gaze direction in image, machine-readable storage medium and communication terminal
CN109348135A (en) Photographic method, device, storage medium and terminal device
US20220309836A1 (en) Ai-based face recognition method and apparatus, device, and medium
WO2019218879A1 (en) Photographing interaction method and apparatus, storage medium and terminal device
US11308692B2 (en) Method and device for processing image, and storage medium
CN111541907A (en) Article display method, apparatus, device and storage medium
CN108646920A (en) Identify exchange method, device, storage medium and terminal device
US11284020B2 (en) Apparatus and method for displaying graphic elements according to object
CN112614057A (en) Image blurring processing method and electronic equipment
US20200322530A1 (en) Electronic device and method for controlling camera using external electronic device
KR20190036168A (en) Method for correcting image based on category and recognition rate of objects included image and electronic device for the same
EP3641294A1 (en) Electronic device and method for obtaining images
US11509815B2 (en) Electronic device and method for processing image having human object and providing indicator indicating a ratio for the human object
CN108491780B (en) Image beautification processing method and device, storage medium and terminal equipment
CN107958223A (en) Face identification method and device, mobile equipment, computer-readable recording medium
CN107977636B (en) Face detection method and device, terminal and storage medium
CN113741681A (en) Image correction method and electronic equipment
US11144197B2 (en) Electronic device performing function according to gesture input and operation method thereof
CN108055461B (en) Self-photographing angle recommendation method and device, terminal equipment and storage medium
WO2019218878A1 (en) Photography restoration method and apparatus, storage medium and terminal device
CN112153300A (en) Multi-view camera exposure method, device, equipment and medium
CN116129526A (en) Method and device for controlling photographing, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19802845

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 19802845

Country of ref document: EP

Kind code of ref document: A1