CN111801932A - Image shooting method and device - Google Patents


Info

Publication number
CN111801932A
CN111801932A
Authority
CN
China
Prior art keywords
image
target person
information
target
characteristic parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880090707.2A
Other languages
Chinese (zh)
Inventor
孙新江
仇芳
张�雄
苗森
皮志明
李宗原
那柏林
曹飞祥
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN111801932A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present application disclose an image shooting method and apparatus, relate to the field of communications technologies, and solve the prior-art problem that, when the photography skills of passers-by or friends do not meet the user's expectations, a photo meeting the user's personalized requirements cannot be taken, so that the shooting efficiency of the terminal is low. The specific scheme is as follows: acquiring first body feature information of a target person in a first image based on a convolutional neural network operation; generating a first characteristic parameter corresponding to the first image according to the first body feature information; if the first characteristic parameter is outside the range of a preset target characteristic parameter, acquiring first deviation information between the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter; and generating first prompt information according to the first deviation information, where the first prompt information is used for guiding the photographer to move the electronic device to photograph the target person.

Description

Image shooting method and device

Technical Field
The present application relates to the field of communications technologies, and in particular, to an image capturing method and apparatus.
Background
A shooting component (e.g., a camera) is generally integrated in an electronic device (e.g., a mobile phone or a tablet computer) so that pictures can be taken quickly with the electronic device. After the user opens the camera, the electronic device displays the picture captured by the camera in the viewfinder window in real time, and the user can select a suitable shooting position and shooting angle before taking the picture shown in the viewfinder window.
In some photo scenes (e.g., a single person or multiple persons standing at full length), a user may wish that others (e.g., passers-by or friends) could take a picture of him or her with personalized features (e.g., long legs). However, the photography skills of passers-by or friends may not meet the user's expectations, so photos meeting the user's personalized requirements cannot be taken, and the photographing efficiency of the terminal is correspondingly reduced.
Disclosure of Invention
The embodiments of the present application provide an image shooting method and apparatus, which can endow an electronic device with artificial intelligence, so that the electronic device can provide a shooting scheme for taking personalized photos based on deep learning technology and guide the photographer to take photos meeting the user's personalized requirements.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In a first aspect of the embodiments of the present application, an image shooting method is provided, applied to an electronic device having an image shooting function, and including: acquiring first body feature information of a target person in a first image based on a convolutional neural network operation; generating a first characteristic parameter corresponding to the first image according to the first body feature information, where the first characteristic parameter is used for identifying position information of the image corresponding to the target person in the first image and pitch angle information of the electronic device when shooting the target person; if the first characteristic parameter is outside the range of a preset target characteristic parameter, acquiring first deviation information between the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter; and generating first prompt information according to the first deviation information, where the first prompt information is used for guiding the photographer to move the electronic device to photograph the target person. Based on this scheme, when the first characteristic parameter is not within the preset target characteristic parameter range, the deviation information between the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter can be obtained, and prompt information is generated to guide the photographer to take photos meeting the user's personalized requirements.
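The range check and prompt step of this scheme can be sketched in Python. Everything below is an illustrative assumption rather than the patent's implementation: the function names, the use of pixel units, and the prompt strings are all made up for the example.

```python
# Minimal sketch of the claimed check-and-prompt step: compare a
# characteristic parameter against a preset target range, measure the
# deviation from the nearer bound, and turn it into a prompt.

def deviation_from_range(value, lo, hi):
    """Signed deviation of `value` from the target range [lo, hi].

    Returns 0.0 when the value is already in range, a negative number
    when it is below the lower limit, and a positive number when it is
    above the upper limit (mirroring the patent's "deviation from the
    upper limit or the lower limit of the target characteristic
    parameter")."""
    if value < lo:
        return value - lo
    if value > hi:
        return value - hi
    return 0.0

def make_prompt(dy_pixels):
    """Turn a vertical framing deviation into a photographer prompt."""
    if dy_pixels == 0.0:
        return "hold still and shoot"
    direction = "down" if dy_pixels > 0 else "up"
    return f"move the phone {direction} ({abs(dy_pixels):.0f} px off)"
```

For example, `deviation_from_range(130, 140, 160)` yields -10, and `make_prompt(-10)` produces an "up" instruction; the direction mapping is arbitrary here and would depend on the coordinate convention.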
With reference to the first aspect, in a possible implementation manner, before the step of obtaining the first body feature information of the target person in the first image based on a convolutional neural network operation, the method further includes: performing face recognition on the target person based on convolutional neural network operation; if the face of the target person is recognized, acquiring the first body feature information; and if the face of the target person is not recognized, terminating. Based on the scheme, the convolutional neural network is adopted to operate and recognize the face, the accuracy rate can be greatly improved, the first body characteristic information is obtained under the condition that the face of the target person is recognized, and the shooting efficiency can be improved.
With reference to the first aspect and the possible implementation manners, in another possible implementation manner, the first body feature information is a plurality of key points of the target person; before the step of generating a first feature parameter corresponding to the first image according to the first body feature information, the method further includes: determining the reasonability of a human body frame formed by connecting the plurality of key points based on the operation of a convolutional neural network; if the human body frame of the target person is reasonable, generating a first characteristic parameter corresponding to the first image according to the first body characteristic information; and if the human body frame of the target person is not reasonable, terminating. Based on the scheme, the first characteristic parameter can be generated according to the first body characteristic information under the condition that the human body frame of the target person is reasonable, and the shooting efficiency is improved.
With reference to the first aspect and the foregoing possible implementation manner, in another possible implementation manner, the first feature parameter is generated when the pose of the target person in the first image satisfies a preset condition. Based on the scheme, the first characteristic parameter corresponding to the first image can be generated under the condition that the pose of the target person in the first image meets the preset condition.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the preset condition includes: the target person is a whole body and is in a standing posture. According to the scheme, when the target person in the first image is in the full-body standing posture, the first characteristic parameter corresponding to the first image can be generated.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the method further includes: acquiring second body characteristic information of the target person in a second image based on convolutional neural network operation; the second image is the image of the target person shot after the electronic equipment is adjusted according to the first prompt information; generating a second characteristic parameter corresponding to the second image according to the second body characteristic information; if the second characteristic parameter is out of the range of the preset target characteristic parameter, second deviation information of the upper limit or the lower limit of the second characteristic parameter and the target characteristic parameter is obtained; generating second prompt information according to the second deviation information; the second prompt message is used for guiding the photographer to move the electronic equipment to photograph the target person. Based on the scheme, after the photographer moves the electronic equipment according to the first prompt information, the pose of the electronic equipment can be further adjusted according to the second characteristic parameter, so that the image meeting the personalized requirements of the user can be obtained.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, if there are multiple target persons in the first image, the generating of a first feature parameter corresponding to the first image according to the first body feature information includes: if the ratio of persons whose poses meet the preset condition among the target persons is greater than or equal to a preset ratio, generating the first feature parameter corresponding to the first image according to the first body feature information. Based on this scheme, when the proportion of target persons whose poses meet the preset condition reaches the preset ratio, the first characteristic parameter corresponding to the first image can be generated.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the first body feature information includes a left ankle key point and a right ankle key point of the target person in the first image, and the first feature parameter includes an ankle position y_ankle of the target person in a preset coordinate system in the first image and a shooting pitch angle β of the electronic device; the target characteristic parameter includes a reference ankle position y_0 of the target person in the preset coordinate system and a reference shooting pitch angle β_0 of the electronic device; the first deviation information includes a y-axis direction deviation Δy and a shooting pitch angle deviation Δβ. Accordingly, the acquiring of the first deviation information between the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter includes: calculating the difference between the reference ankle position y_0 and the ankle position y_ankle of the target person to obtain the y-axis direction deviation Δy; and calculating the difference between the upper limit or the lower limit of the reference shooting pitch angle β_0 and the shooting pitch angle β to obtain the shooting pitch angle deviation Δβ. Based on this scheme, the y-axis direction deviation and the shooting pitch angle deviation can be acquired.
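The Δy/Δβ computation described above can be sketched as follows. The symbol names mirror the patent's (y_ankle, β, y_0, β_0), but the function name and the convention of treating the reference pitch angle as an interval with explicit bounds are assumptions made for the example.

```python
# Hedged sketch of the first-deviation computation: Δy compares the
# detected ankle position against its reference, Δβ compares the
# measured pitch angle against the nearer bound of a reference interval.

def first_deviation(y_ankle, beta, y0, beta0_lo, beta0_hi):
    """Return (Δy, Δβ).

    Δy: reference ankle position y_0 minus detected ankle position
        y_ankle (vertical framing error in the preset coordinate system).
    Δβ: nearer bound of the reference pitch interval [beta0_lo, beta0_hi]
        minus the measured pitch angle β; 0.0 when β is already inside."""
    dy = y0 - y_ankle
    if beta < beta0_lo:
        dbeta = beta0_lo - beta
    elif beta > beta0_hi:
        dbeta = beta0_hi - beta
    else:
        dbeta = 0.0
    return dy, dbeta
```

For instance, with a measured pitch of 5° against a reference interval of [-2°, 2°], Δβ comes out as -3°, signalling that the device is tilted too far up.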
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the first body feature information further includes a head-top key point of the target person in the first image, and the first feature parameter further includes a horizontal position x_head of the target person in the first image in the preset coordinate system and a person image height h_s; the target characteristic parameter further includes a reference horizontal position x_0 of the target person in the preset coordinate system and a reference person image height h_s0; the first deviation information further includes an x-axis direction deviation Δx and a distance deviation Δz. Accordingly, the acquiring of the first deviation information between the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter includes: calculating the difference between the reference horizontal position x_0 and the horizontal position x_head of the target person to obtain the x-axis direction deviation Δx; and calculating the difference between the reference person image height h_s0 and the person image height h_s of the target person to obtain the distance deviation Δz. Based on this scheme, the x-axis direction deviation and the distance deviation can be acquired.
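The companion Δx/Δz step can be sketched the same way. Treating the person-image height difference directly as the distance deviation is one reading of the claim; a real implementation might map it through camera geometry, which is not specified here.

```python
# Sketch of the Δx / Δz computation: Δx is a horizontal framing error,
# while the pixel-height difference Δz stands in for a distance error
# (a shorter-than-reference person image suggests moving closer).

def second_deviation(x_head, h_s, x0, h_s0):
    """Return (Δx, Δz).

    Δx: reference horizontal position x_0 minus detected head
        abscissa x_head.
    Δz: reference person-image height h_s0 minus measured height h_s."""
    dx = x0 - x_head
    dz = h_s0 - h_s
    return dx, dz
```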
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, if the number of the target persons is one, y_ankle is the ordinate of the lower of the left ankle key point and the right ankle key point of the target person, and β is the shooting pitch angle at which the electronic device shoots the first image; if the number of the target persons is plural, y_ankle is the ordinate of the lowest point among the left ankle key points and the right ankle key points of the plurality of target persons, and β is the shooting pitch angle at which the electronic device shoots the first image. Based on this scheme, the ankle position of the target person and the shooting pitch angle of the electronic device can be acquired whether the number of target persons is one or plural.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, if the number of the target persons is one, x_head is the abscissa of the head-top key point of the target person, and h_s is the difference between y_ankle and the ordinate of the head-top key point of the target person; if the number of the target persons is plural, x_head is the average of the abscissas of the head-top key points of at least two of the plurality of target persons, and h_s is the difference between y_ankle and the ordinate of the highest point among the head-top key points of the plurality of target persons. Based on this scheme, the horizontal position and the person image height of the target person can be acquired whether the number of target persons is one or plural.
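The two aggregation rules above can be illustrated in one function. The sketch assumes image coordinates that grow downward (so the "lowest" ankle has the largest ordinate), and averages all head abscissas as one concrete reading of "at least two of the plurality of target persons"; the dictionary layout is invented for the example.

```python
# Illustrative collapse of per-person keypoints into y_ankle, x_head
# and h_s, covering both the single- and multi-person cases.

def aggregate_keypoints(people):
    """people: list of dicts mapping 'l_ankle', 'r_ankle', 'head'
    to (x, y) points in image coordinates (y grows downward)."""
    ankle_ys = [y for p in people
                  for (_, y) in (p["l_ankle"], p["r_ankle"])]
    heads = [p["head"] for p in people]
    y_ankle = max(ankle_ys)                         # lowest ankle point
    x_head = sum(x for x, _ in heads) / len(heads)  # mean head abscissa
    h_s = y_ankle - min(y for _, y in heads)        # tallest image height
    return y_ankle, x_head, h_s
```

With a single person the mean reduces to that person's own head abscissa, so the same function serves both branches of the claim.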
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the first prompt information includes movement instruction information and/or rotation instruction information; the movement instruction information is used for instructing the photographer to move the electronic device, and the rotation instruction information is used for instructing the photographer to rotate the electronic device. Based on this scheme, the photographer can be instructed in multiple dimensions, including moving the electronic device left, right, up, down, up-left, down-left, up-right, down-right, away from or closer to the subject, and rotating the electronic device, for example to adjust its pitch angle.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, if the number of the target persons is one, the method further includes: acquiring a face yaw angle of the target person; if the face yaw angle is within a preset angle interval, the reference horizontal position x_0 is a first preset threshold; if the face yaw angle is smaller than the minimum value of the preset angle interval, x_0 is a second preset threshold greater than the first preset threshold; if the face yaw angle is larger than the maximum value of the preset angle interval, x_0 is a third preset threshold smaller than the first preset threshold; or, if the face yaw angle is smaller than the minimum value of the preset angle interval, x_0 is the third preset threshold, and if the face yaw angle is larger than the maximum value of the preset angle interval, x_0 is the second preset threshold. Based on this scheme, when the number of target persons is one, the reference horizontal position can be determined according to the face yaw angle of the target person, so that enough space is left in the direction the face points in the image and the composition does not feel cramped.
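One possible reading of this yaw-dependent rule is sketched below. The concrete numbers are entirely made up: the preset angle interval is taken as [-15°, +15°] and the three thresholds are expressed as fractions of the image width; the patent does not specify any of these values.

```python
# Sketch of choosing the reference horizontal position x_0 from the
# face yaw angle: face the camera -> centre the subject; face to one
# side -> shift the subject the other way so the gaze direction has
# breathing room in the frame.

def reference_x0(yaw_deg, interval=(-15.0, 15.0),
                 centred=0.5, shifted_right=0.6, shifted_left=0.4):
    """Return x_0 as a fraction of image width (hypothetical units)."""
    lo, hi = interval
    if lo <= yaw_deg <= hi:
        return centred        # first preset threshold
    if yaw_deg < lo:
        return shifted_right  # second preset threshold (> first)
    return shifted_left       # third preset threshold (< first)
```

The claim's alternative branch (swapping the second and third thresholds) would simply exchange `shifted_right` and `shifted_left`, depending on the sign convention chosen for the yaw angle.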
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the generating of the first prompt information according to the first deviation information includes: acquiring state information of the electronic device through a sensor; and generating the first prompt information based on the state information and the first deviation information. The sensor includes at least one of a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, a gravitational acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, or a bone conduction sensor. Based on this scheme, the state information acquired by the sensor can be used to correct the first deviation information, making the first prompt information more accurate.
In a second aspect of the embodiments of the present application, an image capturing apparatus is provided, where the image capturing apparatus may be an electronic device with a camera function, and may also be a chip set in the electronic device with the camera function, the apparatus includes an operation array and a central processing unit CPU, where the operation array is configured to obtain first body feature information of a target person in a first image based on a convolutional neural network operation; the CPU is used for generating a first characteristic parameter corresponding to the first image according to the first body characteristic information, wherein the first characteristic parameter is used for identifying the position information of the image corresponding to the target person in the first image and the pitch angle information when the electronic equipment shoots the target person; if the first characteristic parameter is out of the range of the preset target characteristic parameter, acquiring first deviation information of the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter; generating first prompt information according to the first deviation information; the first prompt message is used for guiding the photographer to move the electronic equipment to photograph the target person.
With reference to the second aspect, in a possible implementation manner, the operation array is further configured to perform face recognition on the target person based on a convolutional neural network operation; if the face of the target person is recognized, acquiring the first body feature information; and if the face of the target person is not recognized, terminating.
With reference to the second aspect and the possible implementation manners, in another possible implementation manner, the first body feature information is a plurality of key points of the target person; the operation array is also used for determining the reasonability of a human body frame formed by connecting the plurality of key points based on convolution neural network operation; if the human body frame of the target person is reasonable, generating a first characteristic parameter corresponding to the first image according to the first body characteristic information; and if the human body frame of the target person is not reasonable, terminating.
With reference to the second aspect and the foregoing possible implementation manner, in another possible implementation manner, the first feature parameter is generated when the pose of the target person in the first image satisfies a preset condition.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the preset condition includes: the target person is a whole body and is in a standing posture.
With reference to the second aspect and the foregoing possible implementation manner, in another possible implementation manner, the operation array is further configured to obtain second body feature information of the target person in a second image based on a convolutional neural network operation; the second image is the image of the target person shot after the electronic equipment is adjusted according to the first prompt information; the CPU is further used for generating a second characteristic parameter corresponding to the second image according to the second body characteristic information; if the second characteristic parameter is out of the range of the preset target characteristic parameter, second deviation information of the upper limit or the lower limit of the second characteristic parameter and the target characteristic parameter is obtained; generating second prompt information according to the second deviation information; the second prompt message is used for guiding the photographer to move the electronic equipment to photograph the target person.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the CPU is specifically configured to generate the first feature parameter corresponding to the first image according to the first body feature information if the ratio of persons whose poses meet the preset condition among the target persons is greater than or equal to a preset ratio.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the first body feature information includes a left ankle key point and a right ankle key point of the target person in the first image, and the first feature parameter includes an ankle position y_ankle of the target person in a preset coordinate system in the first image and a shooting pitch angle β of the electronic device; the target characteristic parameter includes a reference ankle position y_0 of the target person in the preset coordinate system and a reference shooting pitch angle β_0 of the electronic device; the first deviation information includes a y-axis direction deviation Δy and a shooting pitch angle deviation Δβ. The CPU is specifically configured to calculate the difference between the reference ankle position y_0 and the ankle position y_ankle of the target person to obtain the y-axis direction deviation Δy, and to calculate the difference between the upper limit or the lower limit of the reference shooting pitch angle β_0 and the shooting pitch angle β to obtain the shooting pitch angle deviation Δβ.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the first body feature information further includes a head-top key point of the target person in the first image, and the first feature parameter further includes a horizontal position x_head of the target person in the first image in the preset coordinate system and a person image height h_s; the target characteristic parameter further includes a reference horizontal position x_0 of the target person in the preset coordinate system and a reference person image height h_s0; the first deviation information further includes an x-axis direction deviation Δx and a distance deviation Δz. The CPU is specifically configured to calculate the difference between the reference horizontal position x_0 and the horizontal position x_head of the target person to obtain the x-axis direction deviation Δx, and to calculate the difference between the reference person image height h_s0 and the person image height h_s of the target person to obtain the distance deviation Δz.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, if the number of the target persons is one, y_ankle is the ordinate of the lower of the left ankle key point and the right ankle key point of the target person, and β is the shooting pitch angle at which the electronic device shoots the first image; if the number of the target persons is plural, y_ankle is the ordinate of the lowest point among the left ankle key points and the right ankle key points of the plurality of target persons, and β is the shooting pitch angle at which the electronic device shoots the first image.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, if the number of the target persons is one, x_head is the abscissa of the head-top key point of the target person, and h_s is the difference between y_ankle and the ordinate of the head-top key point of the target person; if the number of the target persons is plural, x_head is the average of the abscissas of the head-top key points of at least two of the plurality of target persons, and h_s is the difference between y_ankle and the ordinate of the highest point among the head-top key points of the plurality of target persons.
With reference to the second aspect and the foregoing possible implementation manner, in another possible implementation manner, the first prompt information includes: movement instruction information and/or rotation instruction information, the movement instruction information being used for instructing a photographer to move the electronic device; the rotation instruction information is used for instructing a photographer to rotate the electronic device.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the operation array is further configured to acquire a face yaw angle of the target person if the number of the target persons is one; if the face yaw angle is within a preset angle interval, the reference horizontal position x_0 is a first preset threshold; if the face yaw angle is smaller than the minimum value of the preset angle interval, x_0 is a second preset threshold greater than the first preset threshold; if the face yaw angle is larger than the maximum value of the preset angle interval, x_0 is a third preset threshold smaller than the first preset threshold; or, if the face yaw angle is smaller than the minimum value of the preset angle interval, x_0 is the third preset threshold, and if the face yaw angle is larger than the maximum value of the preset angle interval, x_0 is the second preset threshold.
With reference to the second aspect and the foregoing possible implementation manner, in another possible implementation manner, the CPU is specifically configured to acquire state information of the electronic device through a sensor; the first presentation information is generated based on the state information and the first deviation information. The sensor includes: at least one of a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, a gravitational acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, or a bone conduction sensor.
In a third aspect of the embodiments of the present application, there is provided an image capturing apparatus applied to an electronic device having a camera, the apparatus including: the acquiring unit is used for acquiring first body characteristic information of a target person in the first image based on convolutional neural network operation; a processing unit, configured to generate a first feature parameter corresponding to the first image according to the first body feature information, where the first feature parameter is used to identify position information of an image corresponding to the target person in the first image and pitch angle information of the target person when the electronic device captures the image; if the first characteristic parameter is out of the range of the preset target characteristic parameter, acquiring first deviation information of the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter; generating first prompt information according to the first deviation information; the first prompt message is used for guiding the photographer to move the electronic equipment to photograph the target person.
With reference to the third aspect, in a possible implementation manner, the processing unit is further configured to perform face recognition on the target person based on a convolutional neural network operation; if the face of the target person is recognized, acquiring the first body feature information; and if the face of the target person is not recognized, terminating.
With reference to the third aspect and the possible implementation manners, in another possible implementation manner, the first body feature information is a plurality of key points of the target person; the processing unit is further configured to determine the reasonability of a human body frame formed by connecting the plurality of key points based on convolutional neural network operation; if the human body frame of the target person is reasonable, generating a first characteristic parameter corresponding to the first image according to the first body characteristic information; and if the human body frame of the target person is not reasonable, terminating.
With reference to the third aspect and the foregoing possible implementation manner, in another possible implementation manner, the first feature parameter is generated when the pose of the target person in the first image satisfies a preset condition.
With reference to the third aspect and the foregoing possible implementation manners, in another possible implementation manner, the preset condition includes: the target person is a whole body and is in a standing posture.
With reference to the third aspect and the foregoing possible implementation manner, in another possible implementation manner, the obtaining unit is further configured to obtain second body feature information of the target person in a second image based on a convolutional neural network operation; the second image is the image of the target person shot after the electronic equipment is adjusted according to the first prompt information; the processing unit is further configured to generate a second feature parameter corresponding to the second image according to the second body feature information; if the second characteristic parameter is out of the range of the preset target characteristic parameter, second deviation information of the upper limit or the lower limit of the second characteristic parameter and the target characteristic parameter is obtained; generating second prompt information according to the second deviation information; the second prompt message is used for guiding the photographer to move the electronic equipment to photograph the target person.
With reference to the third aspect and the foregoing possible implementation manners, in another possible implementation manner, the processing unit is specifically configured to generate the first feature parameter corresponding to the first image according to the first body feature information if the proportion of persons whose postures meet the preset condition among the target persons is greater than or equal to a preset proportion.
With reference to the third aspect and the foregoing possible implementation manners, in another possible implementation manner, the first body feature information includes a left ankle key point and a right ankle key point of the target person in the first image, and the first feature parameter includes an ankle position y_ankle of the target person in a preset coordinate system in the first image and a shooting pitch angle β of the electronic device. The target feature parameter includes a reference ankle position y_0 of the target person in the preset coordinate system and a reference shooting pitch angle β_0 of the electronic device. The first deviation information includes a y-axis direction deviation Δy and a shooting pitch angle deviation Δβ. The processing unit is specifically configured to calculate the difference between the reference ankle position y_0 and the ankle position y_ankle of the target person to obtain the y-axis direction deviation Δy, and to calculate the difference between the upper limit or the lower limit of the reference shooting pitch angle β_0 and the shooting pitch angle β to obtain the shooting pitch angle deviation Δβ.
With reference to the third aspect and the foregoing possible implementation manners, in another possible implementation manner, the first body feature information further includes a vertex (top-of-head) key point of the target person in the first image, and the first feature parameter further includes a horizontal position x_head of the target person in the preset coordinate system in the first image and a person image height h_s. The target feature parameter further includes a reference horizontal position x_0 of the target person in the preset coordinate system and a reference person image height h_s0. The first deviation information further includes an x-axis direction deviation Δx and a distance deviation Δz. The processing unit is specifically configured to calculate the difference between the reference horizontal position x_0 and the horizontal position x_head of the target person to obtain the x-axis direction deviation Δx, and to calculate the difference between the reference person image height h_s0 and the person image height h_s of the target person to obtain the distance deviation Δz.
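As an illustrative, non-limiting sketch of the deviation computations in the two implementations above, the following Python fragment obtains Δy, Δβ, Δx and Δz as simple differences between reference values and measured values. All function and parameter names, and the convention of treating the reference pitch angle as an interval [β0_min, β0_max], are assumptions of this sketch, not taken from the claims.

```python
# Illustrative sketch of the first-deviation-information computation.
# y values follow the preset coordinate system of the first image;
# angles are in degrees. Names and the interval convention are assumed.

def compute_deviations(y_ankle, beta, x_head, h_s,
                       y0, beta0_min, beta0_max, x0, h_s0):
    """Return (dy, dbeta, dx, dz) deviation information."""
    dy = y0 - y_ankle                      # y-axis direction deviation Δy
    # Shooting pitch angle deviation Δβ: difference to the nearest
    # bound of the target interval; zero when β is already inside it.
    if beta < beta0_min:
        dbeta = beta0_min - beta
    elif beta > beta0_max:
        dbeta = beta0_max - beta
    else:
        dbeta = 0.0
    dx = x0 - x_head                       # x-axis direction deviation Δx
    dz = h_s0 - h_s                        # distance deviation Δz
    return dy, dbeta, dx, dz
```

A positive Δβ would then be rendered as a "tilt the phone up" prompt, a negative Δy as "move the camera down", and so on, in the first prompt information.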
With reference to the third aspect and the foregoing possible implementation manners, in another possible implementation manner, if the number of target persons is one, y_ankle is the ordinate of the lower of the left ankle key point and the right ankle key point of the target person, and β is the shooting pitch angle at which the electronic device shoots the first image; if the number of target persons is plural, y_ankle is the ordinate of the lowest point among the left ankle key points and the right ankle key points of the plurality of target persons, and β is the shooting pitch angle at which the electronic device shoots the first image.
With reference to the third aspect and the foregoing possible implementation manners, in another possible implementation manner, if the number of target persons is one, x_head is the abscissa of the vertex key point of the target person, and h_s is the difference between y_ankle and the ordinate of the vertex key point of the target person; if the number of target persons is plural, x_head is the average of the abscissas of the vertex key points of at least two of the plurality of target persons, and h_s is the difference between y_ankle and the ordinate of the highest point among the vertex key points of the plurality of target persons.
With reference to the third aspect and the foregoing possible implementation manners, in another possible implementation manner, the first prompt information includes: movement instruction information and/or rotation instruction information, the movement instruction information being used for instructing a photographer to move the electronic device; the rotation instruction information is used for instructing a photographer to rotate the electronic device.
With reference to the third aspect and the foregoing possible implementation manners, in another possible implementation manner, the obtaining unit is further configured to obtain a face yaw angle of the target person if the number of target persons is one. If the face yaw angle is within a preset angle interval, the reference horizontal position x_0 is a first preset threshold; if the face yaw angle is smaller than the minimum value of the preset angle interval, x_0 is a second preset threshold, where the second preset threshold is greater than the first preset threshold; if the face yaw angle is greater than the maximum value of the preset angle interval, x_0 is a third preset threshold, where the third preset threshold is smaller than the first preset threshold. Alternatively, if the face yaw angle is smaller than the minimum value of the preset angle interval, x_0 is the third preset threshold, and if the face yaw angle is greater than the maximum value of the preset angle interval, x_0 is the second preset threshold.
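The rule above (choosing the reference horizontal position x_0 from the face yaw angle, so that a sideways-facing subject is given space to "look into") can be sketched as follows. The concrete threshold values and the ±15° interval are illustrative assumptions; the patent text deliberately leaves them open.

```python
# Hedged sketch: selecting the reference horizontal position x_0 from
# the face yaw angle. Threshold values and interval are assumed, not
# specified by the patent.

X0_CENTER = 0.5   # first preset threshold (face roughly frontal)
X0_RIGHT  = 0.67  # second preset threshold (> first)
X0_LEFT   = 0.33  # third preset threshold (< first)

def reference_horizontal_position(yaw_deg, interval=(-15.0, 15.0)):
    lo, hi = interval
    if lo <= yaw_deg <= hi:
        return X0_CENTER        # facing forward: center the composition
    if yaw_deg < lo:
        return X0_RIGHT         # face turned one way: shift subject over
    return X0_LEFT              # face turned the other way
```

The alternative mapping in the claim simply swaps X0_RIGHT and X0_LEFT for the two out-of-interval cases.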
With reference to the third aspect and the foregoing possible implementation manners, in another possible implementation manner, the processing unit is specifically configured to acquire, by a sensor, state information of the electronic device; the first presentation information is generated based on the state information and the first deviation information. The sensor includes: at least one of a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, a gravitational acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, or a bone conduction sensor.
The descriptions of the effects of the second aspect and the various implementations of the second aspect, and the descriptions of the effects of the third aspect and the various implementations of the third aspect may refer to the descriptions of the corresponding effects of the first aspect, and are not repeated herein.
In a fourth aspect of the embodiments of the present application, an image capturing apparatus is provided, where the image capturing apparatus includes a processor, and optionally, the image capturing apparatus may further include a memory, and the processor may be coupled to the memory, may read instructions in the memory, and execute the image capturing method according to the instructions, where the method is described in any one of the above-described possible implementation manners of the first aspect or the first aspect.
A fifth aspect of the embodiments of the present application provides a computer storage medium, where a computer program code is stored, and when the computer program code runs on a processor, the processor is caused to execute the image capturing method according to the first aspect or any one of the possible implementation manners of the first aspect.
In a sixth aspect of the embodiments of the present application, a computer program product is provided, where the computer program product stores computer software instructions executed by the processor, and the computer software instructions include a program for executing the solution of the above aspect.
In a seventh aspect of the embodiments of the present application, there is provided an apparatus in the form of at least one chip. The apparatus includes a processor and a memory, where the memory is configured to be coupled to the processor and stores program instructions and data necessary for the apparatus, and the processor is configured to execute the program instructions stored in the memory, so that the apparatus performs the functions of the image capturing apparatus in the foregoing method.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a flowchart of an image capturing method according to an embodiment of the present disclosure;
fig. 3 is a first schematic view of an application scenario of an image capturing method according to an embodiment of the present application;
fig. 4 is a second schematic view of an application scenario of an image capturing method according to an embodiment of the present application;
fig. 5 is a third schematic view of an application scenario of an image capturing method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a preset coordinate system in an image capturing method according to an embodiment of the present disclosure;
fig. 7 is a fourth schematic view of an application scenario of an image capturing method according to an embodiment of the present application;
fig. 8 is a fifth schematic view of an application scenario of an image capturing method according to an embodiment of the present application;
FIG. 9 is a flowchart of another image capturing method provided in the embodiments of the present application;
FIG. 10 is a flowchart of another image capturing method provided in the embodiments of the present application;
fig. 11 is a sixth schematic view of an application scenario of an image capturing method according to an embodiment of the present application;
fig. 12 is a schematic composition diagram of an image capturing apparatus according to an embodiment of the present disclosure;
fig. 13 is a schematic composition diagram of another image capturing apparatus according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides an image shooting method, which is applied to an electronic device with a shooting function. The electronic device may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a notebook computer, an Ultra-mobile Personal Computer (UMPC), a handheld computer, a netbook, a Personal Digital Assistant (PDA), or another device with a shooting function. The specific form of the electronic device is not particularly limited in this embodiment of the application.
Fig. 1 is a schematic diagram of a hardware architecture of an electronic device 100 according to an embodiment of the present disclosure. As shown in fig. 1, the electronic device 100 includes: a shooting unit 101, an image signal processor (ISP) 102, an arithmetic array 103, a central processing unit (CPU) 104, a sensor unit 105, a display unit 106, a radio frequency unit 107, a memory 108, a data processor 109, an audio processor 110, and a key input unit 111.
The camera unit 101, which may also be referred to as a camera, is used to capture images or video. In the shooting process, the reflected light of the scenery can generate an optical image after passing through the lens, the optical image is projected onto the photosensitive element, the photosensitive element converts the received optical Signal into an electric Signal, and then the shooting unit 101 sends the obtained electric Signal to a Digital Signal Processor (DSP) for Digital Signal Processing, and finally a Digital image is obtained. The digital image may be output on the electronic device 100 through the display unit 106, or may be stored in the memory 108.
The ISP 102 is an arithmetic processing unit in the shooting process, and is configured to perform arithmetic processing on the data input by the shooting unit 101 to obtain a result after linear correction, noise elimination, dead pixel repair, color interpolation, correction, exposure correction, and the like. The ISP chip can largely determine the final imaging quality of the electronic device. The arithmetic processing may include functions such as Automatic Exposure Control (AEC), Automatic Gain Control (AGC), Automatic White Balance (AWB), color correction, gamma correction, dead pixel removal, auto black level, and auto white level. Illustratively, when a photo is taken, light is transmitted through the lens to the camera photosensitive element, which converts the optical signal into an electrical signal and transmits it to the ISP for processing; the ISP converts the electrical signal into an image visible to the naked eye, and can also perform algorithm optimization on the noise, brightness and skin color of the image.
The operation array 103 may be a Neural-Network Processing Unit (NPU) or another device with a deep learning capability; for ease of description, the operation array 103 is described below as an NPU. The operation array 103 is used to perform neural network analysis processing on images; it rapidly processes input information by drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, and can also continuously self-learn. Applications such as intelligent recognition of the electronic device 100 can be realized through the operation array 103, for example image recognition, face recognition, speech recognition, and text understanding. In this embodiment of the application, the operation array 103 may obtain information such as the body feature information of a target person in a captured image and the face yaw angle of the target person based on a human body pose estimation algorithm using deep learning, where the face yaw angle refers to the angle by which a person's face deviates from the straight-ahead direction, for example the yaw angle at which the face turns to the left or right side relative to the straight-ahead direction.
The CPU104 may also be referred to as an Application Processor (AP), and may be an independent device, or may be integrated with a modem Processor, a Graphics Processing Unit (GPU), a controller, a memory, a video codec, a digital signal Processor, a baseband Processor, and other units in a System On Chip (SoC).
The sensor unit 105 may include a plurality of sensors such as a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, a gravitational acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and a bone conduction sensor. Among them, the acceleration sensor may detect a pitch angle photographed by the electronic apparatus 100.
And a display unit 106 for implementing a display function. For example, the electronic device 100 may implement a display function through a GPU, a display unit, an application processor, and the like.
The radio frequency unit 107 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The rf unit 107 may include at least one filter, a switch, a power Amplifier, a Low Noise Amplifier (LNA), and the like. The rf unit 107 may receive electromagnetic waves from the antenna, filter, amplify, etc. the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The rf unit 107 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna to radiate the electromagnetic wave.
Memory 108 may be used to store computer-executable program code, including instructions. The processor 104 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the memory 108. The memory 108 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as image data) created during use of the electronic device 100, and the like. In addition, the memory 108 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk Storage device, a Flash memory device, a Universal Flash Storage (UFS), and the like.
The data processor 109, which may also be referred to as a communication processor, is responsible for basic wireless communication of the electronic device 100, and performs Modem, channel coding and decoding, and wireless Modem control on the voice signal and the digital voice signal. Illustratively, the data processor 109 may be a baseband processor.
An audio processor 110 for converting digital audio information into an analog audio signal output and also for converting an analog audio input into a digital audio signal. The audio processor 110 may also be used to encode and decode audio signals. In some embodiments, the audio processor 110 may be disposed in the processor 104, or some functional modules of the audio processor 110 may be disposed in the processor 104.
The key input unit 111 includes a power-on key, a volume key, a home key, and the like. The keys can be mechanical keys or touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
It is understood that fig. 1 is merely exemplary, and in practice, the electronic device 100 may include more or fewer components than those shown in fig. 1, or some components may be combined, some components may be separated, or different arrangements of components may be used. The structure shown in fig. 1 should not limit the hardware architecture of the electronic device provided in the embodiment of the present application.
To solve the problem described in the background art that, when the photography skills of passers-by or friends do not meet the user's expectations, a photo meeting the user's personalized requirements cannot be taken and the photographing efficiency of the electronic device is therefore low, an embodiment of the present application provides an image shooting method.
With reference to fig. 1 and as shown in fig. 2, an embodiment of the present application provides an image shooting method, which may include the following steps:
s201, acquiring first body feature information of the target person in the first image based on convolutional neural network operation.
It can be understood that the NPU, as a dedicated chip, is well suited to deep learning based on convolutional neural networks: its processing efficiency is high, and its energy efficiency ratio is higher than that of general-purpose processors such as a CPU or GPU. General-purpose processors such as CPUs and GPUs can also implement deep learning based on convolutional neural networks, but with lower processing efficiency. Therefore, in this embodiment, step S201 may be executed by an NPU (the operation array 103 shown in fig. 1), or alternatively by the CPU 104 or a GPU shown in fig. 1. The specific form of the operation array 103 is not limited in this embodiment of the application; the operation array 103 is described as an NPU merely as an example.
For example, the first image may be an image obtained by a camera of the electronic device when the electronic device is in a first pose, where the first pose refers to a position and a placing manner of the electronic device when the electronic device takes the first image, and if any one of the position and the placing manner of the electronic device changes, the pose of the electronic device changes. The position of the electronic device when the first image is captured includes a position of the electronic device in space, and the placing mode of the electronic device may indicate an orientation, an inclination angle, and the like of the electronic device.
For example, the target person in the first image refers to a person whose face size in the first image is large and whose ratio of face size to that of the other persons exceeds a certain threshold. When the first image includes a plurality of person images, if at least one of them is an image of a background person (e.g., other tourists at a tourist attraction), the person with the larger face size whose ratio to the other face sizes exceeds a certain threshold may be determined as the target person according to the face size and/or face distance of each of the plurality of person images; a person image whose face-size ratio to the target person is smaller than a certain threshold, or whose face-size ratio to the target person is smaller than a certain threshold and whose face distance is larger than a certain threshold, may be determined as a background person image. For example, as shown in fig. 3, six people A, B, C, D, E and F are included in the first image; person A, whose face size is large and differs greatly from the other face sizes, is determined to be the target person in the first image, and B, C, D, E and F are background persons.
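As a minimal sketch of the target/background split described above, the following fragment keeps every face whose size ratio to the largest detected face meets a threshold and treats the rest as background. The 0.5 ratio threshold and the use of face area as the "face size" are assumptions for illustration.

```python
# Illustrative sketch: separating target persons from background
# persons by face-size ratio. Threshold and size metric are assumed.

def split_target_and_background(face_sizes, ratio_threshold=0.5):
    """face_sizes: dict person id -> face area in the first image.
    Returns (targets, background) as lists of person ids."""
    largest = max(face_sizes.values())
    targets = [pid for pid, size in face_sizes.items()
               if size / largest >= ratio_threshold]
    background = [pid for pid in face_sizes if pid not in targets]
    return targets, background
```

In the fig. 3 example, person A's face dwarfs the others, so only A survives the ratio test and B through F become background persons; a face-distance criterion could be layered on top in the same way.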
For example, as shown in fig. 4 (a), the number of target persons in the first image may be one, that is, the first image may be a single person image, and as shown in fig. 4 (b), the number of target persons in the first image may also be multiple, that is, the first image may be a multiple person image.
Illustratively, the physical characteristic information is used for identifying the physical characteristics of the target person. For example, the first body feature information may include a plurality of key points (or joint points) of the target person, for example, the first body feature information may include key point information of a plurality of body parts, such as a vertex key point, a neck key point, a left shoulder key point, a right shoulder key point, a left elbow key point, a right elbow key point, a left wrist key point, a right wrist key point, a left hip key point, a right hip key point, a left knee key point, a right knee key point, a left ankle key point, a right ankle key point, and the like, of the target person. The specific content included in the body characteristic information in the embodiments of the present application is not limited, and is only an exemplary description here. As shown in fig. 5, the first body feature information of the target person in the first image is obtained by the convolutional neural network operation.
For example, after acquiring the first body feature information of the target person in the first image, the embodiment of the present application may further include: and determining the reasonability of a human body frame formed by connecting a plurality of key points based on the operation of the convolutional neural network.
Illustratively, the determining the reasonableness of the human body frame may include: after acquiring a plurality of key points of a target person in a first image based on convolutional neural network operation, acquiring a human body frame of the target person by connecting the plurality of key points of the target person, and judging whether the human body frame is incomplete or not, wherein if the human body frame is incomplete, the human body frame is unreasonable. As shown in fig. 5 (a), if the vertex key point, the neck key point, the left shoulder key point, the right shoulder key point, the left elbow key point, the right elbow key point, the left wrist key point, the right wrist key point, the left hip key point, the right hip key point, the left knee key point, the right knee key point, the left ankle key point, the right ankle key point, etc. of the target person are successfully acquired by the NPU, the complete human body frame of the target person can be outlined after connecting these key points according to the human body physiological structure. Conversely, if a human body frame formed by connecting a plurality of key points lacks key points of the head, the shoulders, the legs, and the like, the human body frame may be considered to be unreasonable.
For example, as shown in fig. 5 (b), if the number of the target person is plural, the determining the reasonableness of the body frame of the target person may include: and if the proportion of the persons with reasonable body frames in the target persons meets the preset proportion, determining that the body frames of the target persons are reasonable. It is to be noted that when photographing a plurality of persons, body parts of each person may overlap at the time of imaging. Such as: one of them places the hand on the left shoulder of the other person, which can result in the NPU not being able to capture the other person's left shoulder keypoints. For another example: a person with a high height standing in front of another person with a high height may cause the NPU to fail to acquire key points of the left hip joint, the right hip joint, the left knee, the right knee, the left ankle, the right ankle, and the like of the person with a high height. In this case, the NPU may be trained by using a large amount of similar data based on a deep learning algorithm, so that the NPU has the capability of determining whether the human body frames formed by the key points are reasonable in such a scenario by combining the acquired partial key points according to the relative position relationship between the persons when the body parts of the persons overlap.
Illustratively, if the body frame of the target person is reasonable, the step S202 is continuously executed, and if the body frame of the target person is not reasonable, the process is terminated.
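The completeness check on the human body frame described above can be sketched as a set test on the acquired key points, with a proportion test for the multi-person case. The required-part list and the 0.8 preset proportion are assumptions; the patent additionally allows a trained NPU to accept partially occluded frames, which this sketch does not model.

```python
# Hedged sketch: human-body-frame reasonableness as key-point
# completeness. Part names and the preset proportion are assumed.

REQUIRED_PARTS = {
    "head_top", "neck", "left_shoulder", "right_shoulder",
    "left_hip", "right_hip", "left_knee", "right_knee",
    "left_ankle", "right_ankle",
}

def frame_is_reasonable(keypoints):
    """keypoints: dict part name -> (x, y); missing parts are absent."""
    return REQUIRED_PARTS.issubset(keypoints)

def group_frames_reasonable(person_keypoints, preset_proportion=0.8):
    """Multiple target persons: reasonable when enough frames pass."""
    ok = sum(frame_is_reasonable(kp) for kp in person_keypoints)
    return ok / len(person_keypoints) >= preset_proportion
```

If the check fails (e.g., head, shoulder, or leg key points are missing), processing terminates as in the text; otherwise step S202 proceeds.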
For example, the acquiring of the first body feature information of the target person in the first image may further include performing a normalization process on the first image, and acquiring coordinate values of the body feature of the target person in the normalized first image in a preset coordinate system. As shown in fig. 6, the x-axis direction of the preset coordinate system may be a horizontal direction of the first image, the y-axis direction of the preset coordinate system may be a vertical direction of the first image, and the origin of coordinates of the preset coordinate system may be a point of an edge position at the upper left corner of the first image. The selection of the preset coordinate system in the embodiment of the present application is not limited, for example, the preset coordinate system may also be the x axis and the y axis in the horizontal and vertical directions of the shooting environment, and only the coordinate system shown in fig. 6 is taken as an example for description here.
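The normalization into the preset coordinate system of fig. 6 (origin at the upper-left corner, x horizontal, y vertical) can be sketched as follows. Scaling both axes into [0, 1] is an assumption of this sketch; the patent only fixes the origin and axis directions.

```python
# Hedged sketch: normalizing pixel-space key points into the preset
# coordinate system of Fig. 6. The [0, 1] scaling is assumed.

def normalize_keypoints(keypoints_px, image_w, image_h):
    """keypoints_px: dict part -> (x_px, y_px), top-left origin.
    Returns the same parts with coordinates scaled into [0, 1]."""
    return {part: (x / image_w, y / image_h)
            for part, (x, y) in keypoints_px.items()}
```

With this convention, y grows downward, so a lower ankle point in the frame has a larger y_ankle value, matching the "lowest point" wording used for the feature parameters.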
S202, generating a first characteristic parameter corresponding to the first image according to the first body characteristic information.
It is understood that step S202 may be performed by the CPU104 shown in fig. 1.
For example, the first characteristic parameter is used to identify position information of an image corresponding to the target person in the first image, and pitch angle information when the electronic device captures the target person.
For example, in a first implementation, the first feature parameter may include an ankle position y_ankle of the target person in the first image in the preset coordinate system and a shooting pitch angle β of the electronic device, where the shooting pitch angle is the inclination angle of the electronic device. In this embodiment of the application, when the electronic device shoots tilted downward, the shooting pitch angle may be a negative angle, and when it shoots tilted upward, the shooting pitch angle may be a positive angle; the opposite sign convention is equally possible. The embodiment of this application does not limit the sign of the shooting pitch angle for downward or upward shooting.
In a second implementation, the first feature parameter may further include a horizontal position x_head of the target person in the first image in the preset coordinate system and a person image height h_s.
It is to be understood that, when the first characteristic parameter corresponding to the first image is generated according to the first body characteristic information, the manner of generating the first characteristic parameter may be different when the number of target persons in the first image is different, and the manner of generating the first characteristic parameter when the number of target persons in the first image is one or more will be described in detail below.
Corresponding to the first implementation, if the number of target persons in the first image is one, the ankle position y_ankle in the first feature parameter may be the ordinate of the lower of the target person's left ankle key point and right ankle key point, and the shooting pitch angle β of the electronic device may be the shooting pitch angle at which the electronic device shoots the first image;
corresponding to the second implementation, if the number of target persons in the first image is one, the horizontal position x_head in the first feature parameter may be the abscissa of the target person's vertex (top-of-head) key point, and the person image height h_s may be the difference between y_ankle and the ordinate of the target person's vertex key point.
Corresponding to the first implementation, if the number of target persons in the first image is more than one, the ankle position y_ankle in the first characteristic parameter may be the ordinate of the lowest of the left ankle key points and right ankle key points of the plurality of target persons, or may be the average of the ordinates of the left ankle key points and right ankle key points of the plurality of target persons, which is not limited in the embodiment of the present application.
Corresponding to the first implementation, if the number of target persons in the first image is more than one, the shooting pitch angle β of the electronic device in the first characteristic parameter is the shooting pitch angle at which the electronic device shot the first image. For example, the shooting pitch angle β may be acquired by an acceleration sensor of the electronic device.
Corresponding to the second implementation, if the number of target persons in the first image is more than one, the horizontal position x_head in the first characteristic parameter may be an average of the abscissas of the head vertex key points of at least two of the plurality of target persons. Illustratively, x_head may be the average of the abscissas x_head,i of the head vertex key points of all the target persons, or the average of the minimum and maximum abscissas of the head vertex key points among the plurality of target persons, or the average of the second-smallest and second-largest abscissas of the head vertex key points among the plurality of target persons. When the number of target persons is more than one, the embodiment of the present application does not limit the specific manner of acquiring the horizontal position x_head; the above are only examples.
Corresponding to the second implementation, if the number of target persons in the first image is more than one, the person image height h_s in the first characteristic parameter may be the difference between the ordinate of the lowest of the left and right ankle key points of the plurality of target persons and the ordinate of the highest of the head vertex key points of the plurality of target persons, or the difference between the average of the ordinates of the left and right ankle key points of the plurality of target persons and the average of the ordinates of the head vertex key points of the plurality of target persons, which is not limited in the embodiment of the present application.
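The multi-person derivation of y_ankle, x_head, and h_s described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the key-point dictionary layout and names ('head', 'lankle', 'rankle'), the choice of the lowest-point variant for y_ankle, and the assumption that coordinates are normalized with the origin at the top-left (so the ordinate grows downward) are all assumptions introduced here.

```python
def first_characteristic_params(persons, pitch_angle_deg):
    """persons: list of dicts mapping key-point names ('head', 'lankle',
    'rankle') to normalized (x, y) pairs; pitch_angle_deg: shooting pitch
    angle, e.g. read from the device's acceleration sensor."""
    # y_ankle: ordinate of the lowest ankle key point across all persons
    # (the text also allows an average; the lowest point is used here).
    y_ankle = max(max(p['lankle'][1], p['rankle'][1]) for p in persons)
    # x_head: average abscissa of the head vertex key points.
    x_head = sum(p['head'][0] for p in persons) / len(persons)
    # h_s: lowest ankle ordinate minus the highest head vertex ordinate.
    h_s = y_ankle - min(p['head'][1] for p in persons)
    return {'y_ankle': y_ankle, 'x_head': x_head,
            'h_s': h_s, 'beta': pitch_angle_deg}
```

For instance, with two persons whose head tops sit at ordinates 0.2 and 0.25 and whose lowest ankle sits at 0.92, h_s comes out as 0.72.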
In one implementation, the generating of the first feature parameter corresponding to the first image according to the first body feature information may include: and if the pose of the target person in the first image meets the preset condition, generating a first characteristic parameter corresponding to the first image according to the first body characteristic information.
It should be noted that, in the embodiment of the present application, after the first body feature information of the target person in the first image is acquired, it may first be determined whether the pose of the target person in the first image meets the preset condition, and the first characteristic parameter may be generated from the first body feature information only when the preset condition is met; alternatively, after the first body feature information is acquired, the first characteristic parameter corresponding to the first image may be generated directly from the first body feature information without judging the pose of the target person. The embodiment of the application does not limit whether the pose of the target person in the first image is judged against the preset condition; this can be determined according to the practical application.
For example, the preset condition may include: the target person appears full-body and in a standing posture. The embodiment does not limit the specific content of the preset condition; the full-body, standing-posture condition is only taken as an example for the long-leg shooting scenario.
For example, the first characteristic parameter described above in the embodiment of the present application is generated when the pose of the target person in the first image satisfies a preset condition.
For example, if the number of the target persons in the first image is one, the generating of the first characteristic parameter corresponding to the first image according to the first body characteristic information may include: and if the pose of the target person meets the preset condition, generating a first characteristic parameter corresponding to the first image according to the first body characteristic information.
For example, if the number of target persons in the first image is more than one, generating the first characteristic parameter corresponding to the first image according to the first body characteristic information may include: if the proportion of target persons whose poses meet the preset condition is greater than or equal to a preset proportion, generating the first characteristic parameter corresponding to the first image according to the first body characteristic information. The value of the preset proportion is not limited in the embodiment of the application and can be determined according to the practical application. For example, when the preset proportion is 60%, as shown in (b) of fig. 5, the number of target persons in the first image is 5; when the poses of at least three of the 5 target persons satisfy the full-body, standing-posture condition, it is determined that the poses of the target persons in the first image satisfy the preset condition, and the first characteristic parameter is generated from the first body feature information.
For example, determining that the pose of the target person in the first image satisfies the preset condition may include the following condition A and condition B.
Condition A: the pose of the target person in the first image is a full-body shot.
For example, the coordinates of the head vertex key point, the neck key point, the left ankle key point, and the right ankle key point of the target person in the first image acquired in S201 may be coordinates in the preset coordinate system shown in fig. 6, obtained after normalization of the first image. For example, the head vertex key point coordinates are denoted (x_head, y_head), the neck key point coordinates (x_neck, y_neck), the left ankle key point coordinates (x_lankle, y_lankle), and the right ankle key point coordinates (x_rankle, y_rankle).
For example, the pose of the target person in the first image being a full-body shot may include: the horizontal and vertical coordinates of the head vertex key point (x_head, y_head), of the neck key point (x_neck, y_neck), and of the left ankle key point (x_lankle, y_lankle) are all greater than 0; or the horizontal and vertical coordinates of the head vertex key point (x_head, y_head), of the neck key point (x_neck, y_neck), and of the right ankle key point (x_rankle, y_rankle) are all greater than 0.
It should be noted that the coordinates of the head vertex key point, the neck key point, the left ankle key point, and the right ankle key point are all obtained through a human body pose estimation algorithm. If the coordinate values of a key point are greater than 0, the key point could be located in the first image by the pose estimation algorithm; if the coordinate values of a key point are equal to 0, the key point is not in the first image, or the key point could not be obtained by the pose estimation algorithm.
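Under this zero-means-missing convention, condition A reduces to a few comparisons. A minimal sketch, assuming key points are stored in a dict of (x, y) pairs with illustrative names:

```python
def is_full_body(kp):
    """Condition A: head vertex, neck, and at least one ankle key point
    all have coordinates greater than 0 (a 0 coordinate means the pose
    estimation algorithm did not find the point in the image)."""
    def present(name):
        x, y = kp.get(name, (0.0, 0.0))
        return x > 0 and y > 0
    # Head, neck, and either the left or the right ankle must be present.
    return (present('head') and present('neck')
            and (present('lankle') or present('rankle')))
```

A frame that crops the person at the knees yields zero ankle coordinates, so the check fails and no long-leg guidance is attempted.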
Condition B: the pose of the target person in the first image is a standing posture.
Illustratively, based on the neck key point coordinates (x_neck, y_neck), left hip key point coordinates (x_lhip, y_lhip), right hip key point coordinates (x_rhip, y_rhip), left knee key point coordinates (x_lknee, y_lknee), right knee key point coordinates (x_rknee, y_rknee), left ankle key point coordinates (x_lankle, y_lankle), and right ankle key point coordinates (x_rankle, y_rankle) of each target person in the first image, the upper-body vertical height upbody_y, thigh vertical height thigh_y, shank vertical height shank_y, horizontal left-thigh length leftthigh_x, horizontal right-thigh length rightthigh_x, horizontal left-shank length leftshank_x, and horizontal right-shank length rightshank_x of each target person in the first image may be calculated.
For example, the upper-body vertical height may be: upbody_y = fabs(0.5 × (y_rhip + y_lhip) − y_neck); the thigh vertical height may be: thigh_y = 0.5 × fabs((y_rknee + y_lknee) − (y_rhip + y_lhip)); the shank vertical height may be: shank_y = 0.5 × fabs((y_rankle + y_lankle) − (y_rknee + y_lknee)); the horizontal left-thigh length may be: leftthigh_x = fabs(x_lknee − x_lhip); the horizontal right-thigh length may be: rightthigh_x = fabs(x_rknee − x_rhip); the horizontal left-shank length may be: leftshank_x = fabs(x_lknee − x_lankle); the horizontal right-shank length may be: rightshank_x = fabs(x_rknee − x_rankle). The embodiment of the present application does not limit the specific manner of obtaining upbody_y, thigh_y, shank_y, leftthigh_x, rightthigh_x, leftshank_x, and rightshank_x; the above is only an example.
For example, the pose of the target person in the first image being a standing posture may include: the ratio of the upper-body vertical height to the thigh vertical height is within a first preset threshold range; the ratio of the upper-body vertical height to the lower-body vertical height is within a second preset threshold range, where the lower-body vertical height is the sum of the thigh vertical height and the shank vertical height; the ratio of the shank vertical height to the thigh vertical height is within a third preset threshold range; the ratio of the horizontal left-thigh length to the upper-body vertical height is within a fourth preset threshold range; the ratio of the horizontal right-thigh length to the upper-body vertical height is within the fourth preset threshold range; the ratio of the horizontal left-shank length to the upper-body vertical height is within a fifth preset threshold range; and the ratio of the horizontal right-shank length to the upper-body vertical height is within the fifth preset threshold range. The specific values of the first to fifth preset threshold ranges are not limited in the embodiment of the present application and may be determined according to the actual situation.
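Condition B can be sketched by computing the seven geometric quantities above and testing the ratios. Since the patent leaves the threshold ranges open, the numeric ranges below are placeholders chosen for illustration only, as is the key-point dict layout:

```python
import math

# Placeholder threshold ranges (the patent does not fix these values).
RANGES = {
    'upbody_thigh':   (0.8, 1.8),   # first preset threshold range
    'upbody_lower':   (0.4, 1.0),   # second preset threshold range
    'shank_thigh':    (0.6, 1.5),   # third preset threshold range
    'thigh_x_upbody': (0.0, 0.35),  # fourth preset threshold range
    'shank_x_upbody': (0.0, 0.35),  # fifth preset threshold range
}

def is_standing(kp, ranges=RANGES):
    """Condition B: vertical body-segment heights dominate the horizontal
    thigh/shank extents, per the ratio tests in the text."""
    fabs = math.fabs
    upbody_y = fabs(0.5 * (kp['rhip'][1] + kp['lhip'][1]) - kp['neck'][1])
    thigh_y = 0.5 * fabs((kp['rknee'][1] + kp['lknee'][1])
                         - (kp['rhip'][1] + kp['lhip'][1]))
    shank_y = 0.5 * fabs((kp['rankle'][1] + kp['lankle'][1])
                         - (kp['rknee'][1] + kp['lknee'][1]))
    leftthigh_x = fabs(kp['lknee'][0] - kp['lhip'][0])
    rightthigh_x = fabs(kp['rknee'][0] - kp['rhip'][0])
    leftshank_x = fabs(kp['lknee'][0] - kp['lankle'][0])
    rightshank_x = fabs(kp['rknee'][0] - kp['rankle'][0])

    def within(v, key):
        lo, hi = ranges[key]
        return lo <= v <= hi

    return (within(upbody_y / thigh_y, 'upbody_thigh')
            and within(upbody_y / (thigh_y + shank_y), 'upbody_lower')
            and within(shank_y / thigh_y, 'shank_thigh')
            and within(leftthigh_x / upbody_y, 'thigh_x_upbody')
            and within(rightthigh_x / upbody_y, 'thigh_x_upbody')
            and within(leftshank_x / upbody_y, 'shank_x_upbody')
            and within(rightshank_x / upbody_y, 'shank_x_upbody'))
```

Intuitively, a seated person's knees are displaced far horizontally from the hips, so the fourth-range test (or the first, once thigh_y collapses) fails.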
It should be noted that, when the number of target persons in the first image is one, determining that the pose of the target person in the first image meets the preset condition may mean that the target person satisfies both condition A and condition B; when the number of target persons in the first image is more than one, it may mean that a proportion of the target persons greater than or equal to the preset proportion have poses satisfying both condition A and condition B.
S203, if the first characteristic parameter is out of the range of the preset target characteristic parameter, acquiring first deviation information of the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter.
It is understood that step S203 may be performed by the CPU104 shown in fig. 1.
Exemplarily, in one implementation: the target characteristic parameter may include a reference ankle position y_0 of the target person in the preset coordinate system and a reference shooting pitch angle β_0 of the electronic device; in another implementation: the target characteristic parameter may further include a reference horizontal position x_0 of the target person in the preset coordinate system and a reference person image height h_s0. The specific content of the target characteristic parameter is not limited in the embodiment of the present application; the above is only an exemplary description.
For example, the range of the preset target characteristic parameter may include: the reference ankle position y_0 may lie within a first reference threshold range, the reference shooting pitch angle β_0 within a second reference threshold range, the reference horizontal position x_0 may be a third reference threshold, and the reference person image height h_s0 may lie within a fourth reference threshold range. The embodiment of the present application does not limit the specific values of these thresholds; the description here only takes as an example a first reference threshold range of 0.90 to 0.95, a second reference threshold range of 5° to 10°, a third reference threshold of 0.5, 0.66, or 0.33, and a fourth reference threshold range of 0.5 to 0.7.
For example, the reference horizontal position is determined differently depending on whether the number of target persons in the first image is one or more than one. The manner of determining the reference horizontal position in each of these cases is described in detail below.
Illustratively, when the number of target persons in the first image is more than one, the above reference horizontal position x_0 may be 0.5, i.e. the reference horizontal position is the middle of the first image.
For example, when the number of target persons in the first image is one, the reference horizontal position x_0 may be determined based on the angle by which the face of the target person deviates from facing straight ahead. This makes it possible, for example, to reserve sufficient space in the image in the direction the face of the target person is turned, thereby avoiding a sense of crowding.
In one implementation: the face yaw angle of the target person may be acquired, and the reference horizontal position x_0 determined according to the face yaw angle. For example, in the embodiment of the present application, the face yaw angle may be 0 degrees when the face looks straight ahead; the yaw angle when the face turns to the left (looking left) may be a negative angle and the yaw angle when the face turns to the right (looking right) a positive angle, or the yaw angle when looking left may be a positive angle and the yaw angle when looking right a negative angle, which is not limited in the embodiment of the application.
Illustratively, if the face yaw angle is within a preset angle interval, the reference horizontal position x_0 may be 0.5. When a leftward face yaw is a positive angle and a rightward face yaw a negative angle: if the face yaw angle is smaller than the minimum of the preset angle interval, the reference horizontal position x_0 may be 0.33, and if it is larger than the maximum of the interval, x_0 is 0.66. Alternatively, when a leftward face yaw is a negative angle and a rightward face yaw a positive angle: if the face yaw angle is smaller than the minimum of the interval, x_0 is 0.66, and if it is larger than the maximum, x_0 is 0.33. Here "left" and "right" are defined from the photographer's perspective. The embodiment of the application does not limit the preset angle interval; the interval (−30°, 30°) is only taken as an example: if a leftward yaw is positive and a rightward yaw negative, then when the face yaw angle is 60° the reference horizontal position x_0 may be 0.66, and when the face yaw angle is −60° the reference horizontal position x_0 may be 0.33.
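The yaw-angle rule above can be sketched as a small selection function, here under the convention that a leftward yaw is positive and a rightward yaw negative, with the (−30°, 30°) interval from the example; the function name and defaults are illustrative:

```python
def reference_horizontal_position(yaw_deg, interval=(-30.0, 30.0)):
    """Choose the reference horizontal position x_0 from the face yaw
    angle (leftward yaw positive, rightward yaw negative, as in the
    worked example in the text)."""
    lo, hi = interval
    if lo <= yaw_deg <= hi:
        return 0.5    # roughly frontal face: centre the person
    if yaw_deg > hi:
        return 0.66   # looking left: leave space on the left of the frame
    return 0.33       # looking right: leave space on the right of the frame
```

This reproduces the example values: a yaw of 60° gives 0.66, a yaw of −60° gives 0.33.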
In one implementation: the first characteristic parameter out of the preset target characteristic parameter range may include: when the first characteristic parameter is included by yankleAnd β, the target characteristic parameter includes y0And beta0When y is presentankleAt y0Outside the corresponding first reference threshold range, and/or β is in β0And determining that the first characteristic parameter is out of the range of the target characteristic parameter when the corresponding second reference threshold is out of the range.
In another implementation: the first characteristic parameter out of the preset target characteristic parameter range may include: when the first characteristic parameter includes yankle、xhead、hsAnd β, the target characteristic parameter includes y0、x0、hs0And beta0When y is presentankleAt y0Outside the corresponding first reference threshold range, and/or β is in β0Outside of the corresponding second reference threshold range, and/or, xheadAt x0Outside of the corresponding third reference threshold, and/or, hsAt hs0And determining that the first characteristic parameter is out of the range of the preset target characteristic parameter outside the range of the corresponding fourth reference threshold.
Exemplary, in one implementation: the first deviation information may include: deviation delta y in the y-axis direction and deviation delta beta of a shooting pitch angle; for example, according to this implementation, the obtaining of the first deviation information of the upper limit or the lower limit of the first characteristic parameter and the target characteristic parameter may include: according to the reference ankle position y of the target person0Upper limit or lower limit of (2) and ankle position y of the target personankleCalculating a difference value to obtain y-axis direction deviation delta y; shoot pitch angle β from reference0And the difference value between the upper limit or the lower limit of the pitch angle and the shooting pitch angle beta is obtained to obtain the shooting pitch angle deviation delta beta.
In another implementation: the first deviation information may further include: the deviation Delta x in the x-axis direction and the distance deviation Delta z. For example, according to this implementation, the obtaining the first deviation information of the first characteristic parameter and the target characteristic parameter may further include: according to the reference horizontal position x of the target person0Horizontal position x with target personheadCalculating a difference value to obtain the deviation delta x in the direction of the x axis; according to the reference person image height h of the target persons0Upper limit or lower limit of (d) and the person image height h of the target personsAnd (5) solving a difference value to obtain a distance deviation delta z.
For example, acquiring the first deviation information between the first characteristic parameter and the upper or lower limit of the target characteristic parameter may include: if the first characteristic parameter is smaller than the lower limit of the target characteristic parameter, acquiring the first deviation information between the first characteristic parameter and the lower limit of the target characteristic parameter; and if the first characteristic parameter is larger than the upper limit of the target characteristic parameter, acquiring the first deviation information between the first characteristic parameter and the upper limit of the target characteristic parameter. For example, if the reference shooting pitch angle β_0 ranges from 5° to 10° and the shooting pitch angle β is 3°, the shooting pitch angle deviation Δβ = 5° − 3° = 2°; if the shooting pitch angle β is 15°, the shooting pitch angle deviation Δβ = 10° − 15° = −5°.
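The rule just described, measuring the deviation against whichever limit of the target range is violated, can be sketched as follows (the function name and range representation are illustrative):

```python
def deviation_from_range(value, lower, upper):
    """Deviation between a characteristic parameter and its target range:
    lower - value when the value is below the range, upper - value when
    it is above, and 0 when the value is already inside the range."""
    if value < lower:
        return lower - value
    if value > upper:
        return upper - value
    return 0.0
```

With the β_0 range of 5° to 10° from the example, deviation_from_range(3, 5, 10) yields 2 and deviation_from_range(15, 5, 10) yields −5, matching the Δβ values above; the same helper would apply to y_ankle against the first reference threshold range and h_s against the fourth.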
And S204, generating first prompt information according to the first deviation information.
It is understood that step S204 may be performed by the CPU104 shown in fig. 1.
The first prompt message is used for guiding the photographer to move the electronic equipment to photograph the target person, so that the required photographing effect is achieved. For example, the presentation form of the prompt message may include a plurality of presentation forms such as a text prompt, a graphic prompt, a combination of the text prompt and the graphic prompt, and the embodiment of the present application does not limit this.
For example, the first prompt message may include movement indication information indicating a direction and/or distance in which the photographer moves the electronic device and/or rotation indication information indicating a direction and/or angle in which the photographer rotates the electronic device.
For example, the moving direction may include left, right, up, down, upper left, lower left, upper right, lower right, away from the person to be photographed, toward the person to be photographed, and so on; this is not limited in the embodiment. The directions left, right, up, down, etc. are all defined from the perspective of the photographer moving the camera, the indicated direction is set relative to the target position corresponding to the target characteristic parameter, and the directions are only approximate, not absolute.
For example, generating the first prompt information according to the first deviation information may include generating movement indication information according to the first deviation information. If the x-axis deviation Δx is less than 0, the movement indication information instructs the photographer to move the camera of the electronic device to the left; if Δx is greater than 0, it instructs the photographer to move the camera to the right; if the y-axis deviation Δy is less than 0, it instructs the photographer to move the camera upward; if Δy is greater than 0, it instructs the photographer to move the camera downward; if the distance deviation Δz is less than 0, it instructs the photographer to move the electronic device away from the person to be photographed; if Δz is greater than 0, it instructs the photographer to move the electronic device closer to the person to be photographed.
For example, the movement indication information may be indication information in a single dimensional direction, or a combination of indications in multiple dimensional directions. For example, as shown in fig. 7, if the horizontal position and ankle position of the target person in the first image are not within the target characteristic parameter range, the movement indication information may be the single-direction indications shown in (a) in fig. 7, comprising separate indications to move rightward and downward, or may be the combined indication to move toward the lower right shown in (b) in fig. 7.
For example, the rotation instruction information may be used to instruct the photographer to rotate the electronic device to change the shooting pitch angle, and the rotation direction is only a general direction and is not absolute.
For example, generating the first prompt information according to the first deviation information may further include generating rotation indication information according to the first deviation information. If the shooting pitch angle deviation Δβ is less than 0, the rotation indication information instructs the photographer to decrease the shooting pitch angle of the electronic device; if Δβ is greater than 0, the rotation indication information instructs the photographer to increase the shooting pitch angle of the electronic device.
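Putting the movement and rotation rules together, the sign of each deviation maps directly to one guidance message. A sketch; the message strings are illustrative, not the device's actual UI text:

```python
def build_prompts(dx=0.0, dy=0.0, dz=0.0, dbeta=0.0):
    """Map the signs of the deviations Δx, Δy, Δz, Δβ to movement and
    rotation guidance; a zero deviation produces no prompt for that
    dimension."""
    prompts = []
    if dx:
        prompts.append('move the camera left' if dx < 0
                       else 'move the camera right')
    if dy:
        prompts.append('move the camera up' if dy < 0
                       else 'move the camera down')
    if dz:
        prompts.append('move away from the subject' if dz < 0
                       else 'move closer to the subject')
    if dbeta:
        prompts.append('decrease the shooting pitch angle' if dbeta < 0
                       else 'increase the shooting pitch angle')
    return prompts
```

Messages could then be shown as text, rendered as arrows in the graphic prompt of fig. 8, or filtered by the priority order described below.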
Exemplarily, taking the shooting pitch angle as a negative angle when shooting tilted downward and a positive angle when shooting tilted upward as an example, decreasing the shooting pitch angle in the rotation indication information may refer to the upward tilt angle of the electronic device becoming smaller, and increasing the shooting pitch angle in the prompt information may include: if the pitch angle when the electronic device shot the first image is a negative angle (tilted downward), the electronic device may be rotated toward an upward tilt; or, if the pitch angle when the electronic device shot the first image is a positive angle or 0 degrees, the upward tilt angle may be increased.
For example, the movement indication information and/or the rotation indication information included in the first prompt information may be presented to the photographer in a form combining a text prompt and a graphic prompt. For example, as shown in fig. 8, the dotted circle is the graph corresponding to the target characteristic parameter and the solid circle is the graph corresponding to the first characteristic parameter. The solid circle not coinciding with the dotted circle indicates that the current first characteristic parameter is not within the preset range of the target characteristic parameter; the difference between the two circles in the x-axis direction indicates the x-axis deviation, the difference in the y-axis direction indicates the y-axis deviation, and the difference in size indicates the distance deviation; the arrow on the solid circle indicates a smaller pitch angle when rotated clockwise and a larger pitch angle when rotated counterclockwise. The photographer can move the electronic device according to the graphic prompt. It can be understood that, as the photographer moves the electronic device according to the prompt information on the display, when the solid circle coincides with the dotted circle, or their difference is within a certain threshold range, this may indicate that the characteristic parameter corresponding to the current image is within the range of the target characteristic parameter.
For example, as shown in fig. 8, the text prompts may be displayed according to the priority order of the different indication information. For example, suppose the priority decreases in the order distance (person image height), shooting pitch angle, y-axis direction (ankle position), and x-axis direction (horizontal position), and the first characteristic parameter is not within the range of the target characteristic parameter. In one implementation, as shown in fig. 8 (a), all the indication information can be displayed on the display screen of the electronic device in that order; in another implementation, as shown in fig. 8 (b), only the indication information with the highest priority may be displayed on the display screen. The priority order of the different indications and their display manner on the electronic device are not limited in the embodiments of the present application; the above is only an exemplary description.
It can be understood that the photographer can move the electronic device from the first pose to the second pose according to the first prompt information.
Further, the generating the first prompt information according to the first deviation information in step S204 may include: acquiring state information of the electronic equipment through a sensor; and generating first prompt information according to the state information and the first deviation information. Illustratively, this status information may be obtained by the sensor unit 105 in fig. 1.
It can be understood that, due to factors such as the revolution and rotation of the earth, the electronic device is never absolutely stationary; it has motion states such as gravitational acceleration and angular velocity. After adjusting the electronic device according to the prompt information, the photographer takes the second image at a time different from the time at which the first image was taken. Therefore, generating the prompt information only from the deviation information obtained from the image itself, without considering the movement of the electronic device within that time difference, may not be accurate enough. In this embodiment, in addition to the deviation information obtained from the image itself, the first prompt information may be generated in combination with the state information of the electronic device obtained by the sensor, to instruct the photographer in adjusting the electronic device.
According to the image shooting method, the first body feature information of the target person in the first image is acquired based on convolutional neural network operation; the first characteristic parameter corresponding to the first image is generated according to the first body feature information; if the first characteristic parameter is outside the range of the preset target characteristic parameter, the first deviation information between the first characteristic parameter and the upper or lower limit of the target characteristic parameter is acquired; and the first prompt information is generated according to the first deviation information, the first prompt information being used to guide the photographer to move the electronic device to photograph the target person. According to the embodiment of the application, when the first characteristic parameter is outside the range of the preset target characteristic parameter, the deviation information between the first characteristic parameter and the target characteristic parameter can be acquired, and prompt information in multiple dimensions can be generated to guide the photographer to shoot a photo with personalized characteristics.
The present application further provides an embodiment, as shown in fig. 9, before step S201, the method further includes: and step S200.
S200, performing face recognition on the target person in the first image based on convolutional neural network operation; if the face of the target person is recognized, continuing to execute step S201; and if the face of the target person is not recognized, terminating the flow.
It is understood that step S200 may be performed by an NPU (the operation array 103 shown in fig. 1), or step S200 may also be performed by the CPU 104 or the GPU shown in fig. 1, or the like.
It should be noted that a Convolutional Neural Network (CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within a limited receptive field, and it usually performs well in large-scale image processing. After the convolutional neural network has been trained (through deep learning) on a large amount of data, recognizing faces by means of convolutional neural network operation greatly improves accuracy.
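The gating relationship between step S200 and step S201 can be sketched as follows. The detector and extractor here are stand-ins for CNN-based models (the function names are hypothetical, not from this application):

```python
def guided_capture_step(image, detect_faces, extract_body_features):
    """Step S200 gating step S201: body feature information is extracted only
    if the face detector finds at least one face in the image; otherwise the
    flow terminates (modeled here as returning None)."""
    if not detect_faces(image):
        return None                      # no face recognized: terminate
    return extract_body_features(image)  # face recognized: continue to S201
```

In practice both callables would be convolutional neural network models run on the NPU, CPU, or GPU as described above.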
According to the image shooting method, face recognition is performed on the target person in the first image based on convolutional neural network operation; if the face of the target person is recognized, first body feature information of the target person in the first image is acquired based on convolutional neural network operation; a first characteristic parameter corresponding to the first image is generated according to the first body feature information; if the first characteristic parameter is outside the range of the preset target characteristic parameter, first deviation information between the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter is acquired; and first prompt information is generated according to the first deviation information, where the first prompt information is used to guide the photographer to move the electronic device to photograph the target person. According to the embodiment of the application, the first body feature information is acquired only when the face of the target person is recognized; when the first characteristic parameter is outside the range of the preset target characteristic parameter, the deviation information between the first characteristic parameter and the target characteristic parameter is acquired, and prompt information in different dimensions is generated to guide the photographer to shoot a photo with personalized characteristics.
The present application further provides an embodiment, as shown in fig. 10, the method may further include steps S1001-S1004.
S1001, acquiring second body characteristic information of the target person in the second image based on the convolutional neural network operation.
It is understood that step S1001 may be performed by an NPU (operation array 103 shown in fig. 1), or step S1001 may also be performed by the CPU104 or GPU or the like shown in fig. 1.
For example, the second image may be an image of the target person captured after the electronic device has been adjusted according to the first prompt information; that is, the second image may be an image captured by the electronic device in a second pose, where the second pose is the pose reached after the electronic device is adjusted according to the first prompt information.
It will be appreciated that the target person in the second image may be the same as the target person in the first image, but the pose of the target person in the first image may differ from the pose of the target person in the second image.
For example, the second body feature information of each target person in the second image may be acquired by a human pose estimation algorithm based on deep learning. For example, the coordinate values, in a preset coordinate system, of the vertex key point, neck key point, left shoulder key point, right shoulder key point, left elbow key point, right elbow key point, left wrist key point, right wrist key point, left hip key point, right hip key point, left knee key point, right knee key point, left ankle key point, and right ankle key point of each target person in the second image may be acquired. It should be noted that, since the second image is an image of the target person captured after the electronic device is adjusted according to the first prompt information, the coordinate values in the second body feature information of the target person in the second image may differ from those in the first body feature information of the target person in the first image.
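The fourteen key points named above can be represented as labeled coordinates. The sketch below assumes a pose estimator that returns the fourteen (x, y) pairs in this fixed order; the names and ordering are illustrative:

```python
# The 14 key points listed in the description, in an assumed fixed order.
KEYPOINT_NAMES = [
    "vertex", "neck",
    "left_shoulder", "right_shoulder",
    "left_elbow", "right_elbow",
    "left_wrist", "right_wrist",
    "left_hip", "right_hip",
    "left_knee", "right_knee",
    "left_ankle", "right_ankle",
]

def body_feature_info(pose_output):
    """Label a pose estimator's 14 (x, y) coordinate pairs with key-point
    names, producing the body feature information used by later steps."""
    if len(pose_output) != len(KEYPOINT_NAMES):
        raise ValueError("expected 14 key points")
    return dict(zip(KEYPOINT_NAMES, pose_output))
```

A real implementation would obtain `pose_output` from the CNN-based pose estimation model; here it is simply a list of coordinate pairs.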
S1002, generating a second characteristic parameter corresponding to the second image according to the second body feature information.
It is understood that step S1002 may be performed by the CPU104 shown in fig. 1.
The second characteristic parameter is used for identifying position information of an image corresponding to the target person in the second image, and pitch angle information when the electronic device captures the target person in the second image.
The implementation of step S1002 differs from that of step S202 in one respect: step S1002 does not determine whether the pose of the target person in the second image satisfies the preset condition; instead, once the second body feature information of the target person in the second image has been acquired, the second characteristic parameter is generated directly from it. In all other respects, the implementation of step S1002 is the same as that of step S202; reference may be made to the description of step S202, which is not repeated here.
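How characteristic parameters could be derived from the labeled key points might be sketched as follows. This assumes a coordinate system in which y increases upward (so the lower ankle has the smaller ordinate); the dictionary keys and the convention of taking the vertex abscissa as the head position are assumptions of the sketch:

```python
def characteristic_params(kp, shooting_pitch_deg):
    """Derive characteristic parameters from labeled key points:
    ankle position y_ankle (the lower of the two ankle ordinates),
    head horizontal position x_head, and person image height h_s
    (difference between the vertex ordinate and y_ankle)."""
    y_ankle = min(kp["left_ankle"][1], kp["right_ankle"][1])
    x_head = kp["vertex"][0]
    h_s = abs(kp["vertex"][1] - y_ankle)
    return {"y_ankle": y_ankle, "x_head": x_head,
            "h_s": h_s, "beta": shooting_pitch_deg}
```

The shooting pitch angle beta would come from the device's sensors rather than the image, so it is passed in separately here.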
S1003, if the second characteristic parameter is outside the range of the preset target characteristic parameter, acquiring second deviation information between the second characteristic parameter and the upper limit or the lower limit of the target characteristic parameter.
It is understood that step S1003 may be executed by the CPU104 shown in fig. 1.
It should be noted that the implementation manner of acquiring the second deviation information in step S1003 is the same as the implementation manner of acquiring the first deviation information in step S203, and specific reference may be made to the description of step S203, which is not described herein again.
S1004, generating second prompt information according to the second deviation information.
It is understood that step S1004 may be performed by the CPU104 shown in fig. 1.
The second prompt message is used for guiding the photographer to move the electronic equipment to photograph the target person.
It should be noted that, an implementation manner of generating the second prompt information according to the second deviation information in the step S1004 is the same as an implementation manner of generating the first prompt information according to the first deviation information in the step S204, and reference may be specifically made to the description of the step S204, which is not repeated herein.
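A minimal sketch of turning per-axis deviation information into prompt information for the photographer is given below. The sign conventions (positive Δx meaning "move right", positive Δz meaning "move closer") and the dead-band threshold are illustrative assumptions:

```python
def deviation_prompt(dx=0.0, dy=0.0, dz=0.0, eps=0.5):
    """Map x-axis, y-axis, and distance deviations to movement instructions.
    Deviations smaller than eps are treated as already satisfied."""
    hints = []
    if abs(dx) > eps:
        hints.append("move right" if dx > 0 else "move left")
    if abs(dy) > eps:
        hints.append("move up" if dy > 0 else "move down")
    if abs(dz) > eps:
        hints.append("move closer" if dz > 0 else "move back")
    return hints or ["hold position"]
```

Each returned hint corresponds to one dimension of the prompt information; the rotation indication (pitch) could be appended in the same manner.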
It can be understood that the photographer can move the electronic device from the second pose to the third pose according to the second prompt information.
According to the image shooting method, the second body feature information of the target person in the second image is acquired based on convolutional neural network operation; a second characteristic parameter corresponding to the second image is generated according to the second body feature information; if the second characteristic parameter is outside the range of the preset target characteristic parameter, second deviation information between the second characteristic parameter and the upper limit or the lower limit of the target characteristic parameter is acquired; and second prompt information is generated according to the second deviation information, where the second prompt information is used to guide the photographer to move the electronic device to photograph the target person. According to the embodiment of the application, when the second characteristic parameter corresponding to the second image is outside the range of the target characteristic parameter, the deviation information between the second characteristic parameter and the upper limit or the lower limit of the target characteristic parameter can be acquired, and prompt information in different dimensions can be generated to guide a photographer to shoot a photo with personalized characteristics.
The present application further provides another embodiment. After the method steps shown in fig. 2 and/or fig. 9 and fig. 10 have been executed, the image capturing method may further include: acquiring body feature information of the target person in an image captured after the photographer adjusts the electronic device according to the second prompt information, and acquiring a characteristic parameter corresponding to that image according to the body feature information of the target person in the image. If the characteristic parameter corresponding to the image is not outside the range of the preset target characteristic parameter, this indicates that the image cannot be optimized further; in that case no further prompt information is given to the photographer, or the photographer is prompted that the current pose of the electronic device meets the user requirement. If the characteristic parameter corresponding to the image is outside the range of the preset target characteristic parameter, this indicates that the image does not yet meet the user requirement, and new prompt information may be given to the photographer according to the image capturing method shown in fig. 10, until the characteristic parameter corresponding to the captured image is within the range of the preset target characteristic parameter. As shown in fig. 11, the star mark in fig. 11 indicates that the image satisfies the range of the target characteristic parameter; when the photographer sees this prompt, the photographer can press the shutter to take a photo that satisfies the personalized requirement of the user.
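The iterative capture-evaluate-prompt loop described above can be sketched as follows. The function names, the round limit, and the scalar parameter are simplifying assumptions (a real implementation would evaluate several parameters and generate a concrete prompt on each round):

```python
def guide_until_in_range(capture, to_param, lower, upper, max_rounds=5):
    """Repeat capture -> evaluate until the characteristic parameter falls
    inside the target range [lower, upper], as in the fig. 10 / fig. 11
    flow; each failed round would trigger new prompt information."""
    for _ in range(max_rounds):
        param = to_param(capture())
        if lower <= param <= upper:
            return "in range"   # e.g. display the star mark; press the shutter
        # otherwise: generate new prompt information from the deviation
    return "not converged"
```

Here `capture` stands in for taking a new preview frame after the photographer adjusts the device, and `to_param` for the feature-extraction and parameter-generation steps.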
According to the image shooting method, the body feature information of the target person in the image is acquired based on convolutional neural network operation; a characteristic parameter corresponding to the image is generated according to the body feature information; if the characteristic parameter is outside the range of the preset target characteristic parameter, deviation information between the characteristic parameter and the upper limit or the lower limit of the target characteristic parameter is acquired; and prompt information is generated according to the deviation information, where the prompt information is used to guide the photographer to move the electronic device to photograph the target person. According to the embodiment of the application, when the characteristic parameter corresponding to the image is outside the range of the target characteristic parameter, the deviation information between the characteristic parameter and the upper limit or the lower limit of the target characteristic parameter can be acquired, and prompt information in different dimensions can be generated to guide a photographer to shoot a photo with personalized characteristics.
The above description has mainly introduced the scheme provided in the embodiments of the present application from the perspective of method steps. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would readily appreciate that the present application is capable of being implemented as a combination of hardware and computer software for carrying out the various example elements and algorithm steps described in connection with the embodiments disclosed herein. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the image capturing apparatus may be divided into functional modules according to the above method examples; for example, each functional module may be divided so as to correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware, or in the form of a software functional module. It should be noted that the division of the modules in the embodiments of the present application is schematic and is merely a logical function division; there may be other division manners in actual implementation.
In the case of dividing each functional module by corresponding functions, fig. 12 shows a schematic diagram of a possible structure of the image capturing apparatus 1200 according to the above-described embodiment, and the image capturing apparatus 1200 includes: an acquisition unit 1201 and a processing unit 1202. The acquisition unit 1201 is configured to support the image capturing apparatus 1200 to execute S201 in fig. 2 or S1001 in fig. 10; the processing unit 1202 is configured to support the image capturing apparatus 1200 to execute S202 to S204 in fig. 2, or S200 in fig. 9, or S1002 to S1004 in fig. 10. All relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
In the case of an integrated unit, fig. 13 shows a schematic diagram of a possible configuration of the image capturing apparatus 1300 according to the above-described embodiments. The image capturing apparatus 1300 includes: a memory 1301 and a processor 1302. The processor 1302 is configured to control and manage the actions of the image capturing apparatus 1300; for example, the processor 1302 is configured to support the image capturing apparatus 1300 in performing S201-S204 in fig. 2, or S200-S204 in fig. 9, or S1001-S1004 in fig. 10, and/or other processes for the techniques described herein. The memory 1301 is configured to store program code and data. In another implementation, the apparatus according to the above embodiments may further include a processor and an interface, where the processor communicates with the interface and is configured to execute the embodiments of the present application. The processor may be at least one of a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Microcontroller (MCU), or a microprocessor.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware, or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in Random Access Memory (RAM), flash memory, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), registers, a hard disk, a removable disk, a Compact Disc Read-Only Memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a core network interface device. Of course, the processor and the storage medium may also reside as discrete components in a core network interface device.
Those skilled in the art will recognize that in one or more of the examples described above, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The above-mentioned embodiments, objects, technical solutions and advantages of the present application are further described in detail, it should be understood that the above-mentioned embodiments are only examples of the present application, and are not intended to limit the scope of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present application should be included in the scope of the present application.

Claims (46)

  1. An image capturing method applied to an electronic apparatus having an image capturing function, the method comprising:
    acquiring first body feature information of a target person in the first image based on convolutional neural network operation;
    generating a first characteristic parameter corresponding to the first image according to the first body characteristic information, wherein the first characteristic parameter is used for identifying position information of the image corresponding to the target person in the first image and pitch angle information of the electronic equipment when the electronic equipment shoots the target person;
    if the first characteristic parameter is out of the range of a preset target characteristic parameter, acquiring first deviation information of the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter;
    generating first prompt information according to the first deviation information; the first prompt message is used for guiding a photographer to move the electronic equipment to photograph the target person.
  2. The image capturing method according to claim 1, before the step of acquiring the first body feature information of the target person in the first image based on a convolutional neural network operation, further comprising:
    performing face recognition on the target person based on convolutional neural network operation;
    if the face of the target person is identified, acquiring the first body feature information;
    and if the face of the target person is not recognized, terminating.
  3. The image capturing method according to claim 1 or 2, wherein the first body feature information is a plurality of key points of the target person; before the step of generating a first feature parameter corresponding to the first image according to the first body feature information, the method further includes:
    determining the reasonability of a human body frame formed by connecting the plurality of key points based on the operation of a convolutional neural network;
    if the human body frame of the target person is reasonable, generating a first characteristic parameter corresponding to the first image according to the first body characteristic information;
    and if the human body frame of the target person is not reasonable, terminating.
  4. The image capturing method according to any one of claims 1 to 3, characterized in that the first feature parameter is generated in a case where a pose of the target person in the first image satisfies a preset condition.
  5. The image capturing method according to claim 4, wherein the preset condition includes: the target person is the whole body and is in a standing posture.
  6. The image capturing method according to any one of claims 1 to 5, characterized in that the method further comprises:
    acquiring second body feature information of the target person in a second image based on convolutional neural network operation; the second image is the image of the target person shot after the electronic equipment is adjusted according to the first prompt information;
    generating a second characteristic parameter corresponding to the second image according to the second body characteristic information;
    if the second characteristic parameter is out of the range of the preset target characteristic parameter, second deviation information of the second characteristic parameter and the upper limit or the lower limit of the target characteristic parameter is obtained;
    generating second prompt information according to the second deviation information; the second prompt message is used for guiding a photographer to move the electronic equipment to photograph the target person.
  7. The image capturing method according to any one of claims 4 to 6,
    if the number of the target persons in the first image is multiple, generating a first characteristic parameter corresponding to the first image according to the first body characteristic information comprises:
    if the proportion of persons among the target persons whose poses satisfy the preset condition is greater than or equal to a preset proportion, generating a first characteristic parameter corresponding to the first image according to the first body feature information.
  8. The image capturing method according to any one of claims 1 to 7, wherein the first body feature information includes a left ankle key point and a right ankle key point of the target person in the first image, the first characteristic parameter includes an ankle position y_ankle of the target person in the first image in a preset coordinate system and a shooting pitch angle beta of the electronic device, the target characteristic parameter includes a reference ankle position y_0 of the target person in the preset coordinate system and a reference shooting pitch angle beta_0 of the electronic device, and the first deviation information includes: a y-axis direction deviation Δy and a shooting pitch angle deviation Δβ; and correspondingly, the acquiring first deviation information between the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter includes:
    calculating a difference between the reference ankle position y_0 of the target person and the ankle position y_ankle of the target person to obtain the y-axis direction deviation Δy; and calculating a difference between the upper limit or the lower limit of the reference shooting pitch angle beta_0 and the shooting pitch angle beta to obtain the shooting pitch angle deviation Δβ.
  9. The image capturing method according to claim 8, wherein the first body feature information further includes a vertex key point of the target person in the first image, the first characteristic parameter further includes a horizontal position x_head of the target person in the first image in the preset coordinate system and a person image height h_s, the target characteristic parameter further includes a reference horizontal position x_0 of the target person in the preset coordinate system and a reference person image height h_s0, and the first deviation information further includes: an x-axis direction deviation Δx and a distance deviation Δz; and correspondingly, the acquiring first deviation information between the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter includes:
    calculating a difference between the reference horizontal position x_0 of the target person and the horizontal position x_head of the target person to obtain the x-axis direction deviation Δx; and calculating a difference between the reference person image height h_s0 of the target person and the person image height h_s of the target person to obtain the distance deviation Δz.
  10. The image capturing method according to claim 8,
    if the number of the target persons is one, y_ankle is the vertical coordinate of the lower of the left ankle key point and the right ankle key point of the target person, and beta is the shooting pitch angle at which the electronic device captures the first image;
    if the number of the target persons is multiple, y_ankle is the vertical coordinate of the lowest point among the left ankle key points and the right ankle key points of the plurality of target persons, and beta is the shooting pitch angle at which the electronic device captures the first image.
  11. The image capturing method according to claim 9,
    if the number of the target persons is one, x_head is the abscissa of the vertex key point of the target person, and h_s is the difference between y_ankle and the vertical coordinate of the vertex key point of the target person;
    if the number of the target persons is multiple, x_head is the average of the abscissas of the vertex key points of at least two of the plurality of target persons, and h_s is the difference between y_ankle and the vertical coordinate of the highest point among the vertex key points of the plurality of target persons.
  12. The image capturing method according to any one of claims 1 to 11, wherein the first prompt information includes: movement indication information and/or rotation indication information, wherein the movement indication information is used for indicating a photographer to move the electronic equipment; the rotation indication information is used for indicating a photographer to rotate the electronic equipment.
  13. The image capturing method according to any one of claims 1 to 12, wherein if the number of the target person is one, the method further includes:
    acquiring a human face yaw angle of the target person;
    if the human face yaw angle is within the range of the preset angle interval, the reference horizontal position x_0 is a first preset threshold;
    if the human face yaw angle is smaller than the minimum value of the preset angle interval, the reference horizontal position x_0 is a second preset threshold, the second preset threshold being greater than the first preset threshold; and if the human face yaw angle is greater than the maximum value of the preset angle interval, the reference horizontal position x_0 is a third preset threshold, the third preset threshold being smaller than the first preset threshold; alternatively,
    if the human face yaw angle is smaller than the minimum value of the preset angle interval, the reference horizontal position x_0 is the third preset threshold; and if the human face yaw angle is greater than the maximum value of the preset angle interval, the reference horizontal position x_0 is the second preset threshold.
  14. The image capturing method according to any one of claims 1 to 13, wherein the generating first prompt information according to the first deviation information includes:
    acquiring state information of the electronic equipment through a sensor;
    and generating the first prompt message according to the state information and the first deviation information.
  15. The image capturing method according to claim 14, wherein the sensor includes: at least one of a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, a gravitational acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, or a bone conduction sensor.
  16. An image photographing apparatus, comprising an arithmetic array and a central processing unit CPU,
    the operation array is used for acquiring first body characteristic information of a target person in the first image based on convolutional neural network operation;
    the CPU is used for generating a first characteristic parameter corresponding to the first image according to the first body characteristic information, and the first characteristic parameter is used for identifying the position information of the image corresponding to the target person in the first image and the shooting pitch angle information of the electronic equipment; if the first characteristic parameter is out of the range of a preset target characteristic parameter, acquiring first deviation information of the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter; generating first prompt information according to the first deviation information; the first prompt message is used for guiding a photographer to move the electronic equipment to photograph the target person.
  17. The image capturing apparatus according to claim 16, wherein the operation array is further configured to perform face recognition on the target person based on a convolutional neural network operation; if the face of the target person is identified, acquiring the first body feature information; and if the face of the target person is not recognized, terminating.
  18. The image capturing apparatus according to claim 16 or 17, wherein the first body feature information is a plurality of key points of the target person;
    the operation array is also used for determining the reasonability of a human body frame formed by connecting the plurality of key points based on the operation of a convolutional neural network; if the human body frame of the target person is reasonable, generating a first characteristic parameter corresponding to the first image according to the first body characteristic information; and if the human body frame of the target person is not reasonable, terminating.
  19. The image capturing apparatus according to any one of claims 16 to 18, wherein the first feature parameter is generated in a case where a pose of the target person in the first image satisfies a preset condition.
  20. The image capturing apparatus according to claim 19, wherein the preset condition includes: the target person is the whole body and is in a standing posture.
  21. The image capturing apparatus according to any one of claims 16 to 20,
    the operation array is further used for acquiring second body characteristic information of the target person in a second image based on convolutional neural network operation; the second image is the image of the target person shot after the electronic equipment is adjusted according to the first prompt information;
    the CPU is further used for generating a second characteristic parameter corresponding to the second image according to the second body characteristic information; if the second characteristic parameter is out of the range of the preset target characteristic parameter, second deviation information of the second characteristic parameter and the upper limit or the lower limit of the target characteristic parameter is obtained; generating second prompt information according to the second deviation information; the second prompt message is used for guiding a photographer to move the electronic equipment to photograph the target person.
  22. The image capturing apparatus according to any one of claims 19 to 21,
    the CPU is specifically configured to generate a first characteristic parameter corresponding to the first image according to the first body characteristic information if the proportion occupied by the person whose posture in the target person meets the preset condition is greater than or equal to a preset proportion.
  23. The image capturing apparatus as claimed in any one of claims 16 to 22, wherein the first body feature information comprises a left ankle key point and a right ankle key point of the target person in the first image; the first characteristic parameter comprises an ankle position y_ankle of the target person in the first image in a preset coordinate system and a shooting pitch angle β of the electronic device; the target characteristic parameter comprises a reference ankle position y_0 of the target person in the preset coordinate system and a reference shooting pitch angle β_0 of the electronic device; and the first deviation information comprises a deviation Δy in the y-axis direction and a shooting pitch angle deviation Δβ;
    the CPU is specifically configured to calculate a difference between the reference ankle position y_0 of the target person and the ankle position y_ankle of the target person to obtain the deviation Δy in the y-axis direction; and to calculate a difference between the upper limit or the lower limit of the reference shooting pitch angle β_0 and the shooting pitch angle β to obtain the shooting pitch angle deviation Δβ.
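The two difference computations described in the claim above are direct subtractions; a minimal sketch (the function names are illustrative, not from the patent):

```python
def y_axis_deviation(y_ref, y_ankle):
    # Δy: difference between the reference ankle position y_0 and the
    # detected ankle position y_ankle in the preset coordinate system.
    return y_ref - y_ankle

def pitch_deviation(beta_limit, beta):
    # Δβ: difference between the upper or lower limit of the reference
    # shooting pitch angle β_0 and the measured shooting pitch angle β.
    return beta_limit - beta
```

A positive Δy would then indicate the subject sits below the reference line in the frame, and a positive Δβ that the device should tilt further toward the reference pitch limit.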
  24. The image capturing apparatus as claimed in claim 23, wherein the first body feature information further comprises a head top key point of the target person in the first image; the first characteristic parameter further comprises a horizontal position x_head of the target person in the first image in the preset coordinate system and a person image height h_s; the target characteristic parameter further comprises a reference horizontal position x_0 of the target person in the preset coordinate system and a reference person image height h_s0; and the first deviation information further comprises a deviation Δx in the x-axis direction and a distance deviation Δz;
    the CPU is specifically configured to calculate a difference between the reference horizontal position x_0 of the target person and the horizontal position x_head of the target person to obtain the deviation Δx in the x-axis direction; and to calculate a difference between the reference person image height h_s0 of the target person and the person image height h_s of the target person to obtain the distance deviation Δz.
  25. The image capturing apparatus according to claim 23,
    if there is one target person, y_ankle is the vertical coordinate of the lower of the left ankle key point and the right ankle key point of the target person, and β is the shooting pitch angle at which the electronic device shoots the first image;
    if there are multiple target persons, y_ankle is the vertical coordinate of the lowest point among the left ankle key points and the right ankle key points of the multiple target persons, and β is the shooting pitch angle at which the electronic device shoots the first image.
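The selection of y_ankle described above can be sketched as one reduction over the detected ankle key points. Image coordinates are assumed here (y grows downward from the top of the frame), so the lowest point in the frame has the largest ordinate; the function name and input shape are illustrative, not from the patent.

```python
def ankle_position(persons):
    """Pick y_ankle as claimed: for a single target person, the lower of the
    left/right ankle ordinates; for several persons, the lowest ankle
    ordinate among all of them.

    persons: list of (y_left_ankle, y_right_ankle) tuples, one per person.
    """
    return max(max(left, right) for left, right in persons)
```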
  26. The image capturing apparatus according to claim 24,
    if there is one target person, x_head is the abscissa of the head top key point of the target person, and h_s is the difference between y_ankle and the vertical coordinate of the head top key point of the target person;
    if there are multiple target persons, x_head is the average of the abscissas of the head top key points of at least two of the multiple target persons, and h_s is the difference between y_ankle and the vertical coordinate of the highest point among the head top key points of the multiple target persons.
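A sketch of the x_head and h_s derivation described above, under the same assumed image-coordinate convention (y grows downward, so the highest head point has the smallest ordinate). The claim permits averaging over "at least two" persons; this illustration averages over all of them. Names are illustrative, not from the patent.

```python
def head_position_and_height(y_ankle, heads):
    """Derive x_head and the person image height h_s as claimed: for one
    person, x_head is its head-top abscissa; for several, the mean of their
    head-top abscissas. h_s is y_ankle minus the head-top ordinate, using
    the highest head point for a group.

    heads: list of (x_top, y_top) head-top key points, one per person.
    """
    x_head = sum(x for x, _ in heads) / len(heads)
    h_s = y_ankle - min(y for _, y in heads)
    return x_head, h_s
```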
  27. The image capturing apparatus according to any one of claims 16 to 26, wherein the first prompt information includes: movement indication information and/or rotation indication information, wherein the movement indication information is used for indicating a photographer to move the electronic equipment; the rotation indication information is used for indicating a photographer to rotate the electronic equipment.
  28. The image capturing apparatus as claimed in any one of claims 16 to 27, wherein the operation array is further configured to obtain a face yaw angle of the target person if there is one target person;
    if the face yaw angle is within a preset angle interval, the reference horizontal position x_0 is a first preset threshold;
    if the face yaw angle is smaller than the minimum value of the preset angle interval, the reference horizontal position x_0 is a second preset threshold, the second preset threshold being greater than the first preset threshold; and if the face yaw angle is greater than the maximum value of the preset angle interval, the reference horizontal position x_0 is a third preset threshold, the third preset threshold being smaller than the first preset threshold; or,
    if the face yaw angle is smaller than the minimum value of the preset angle interval, the reference horizontal position x_0 is the third preset threshold; and if the face yaw angle is greater than the maximum value of the preset angle interval, the reference horizontal position x_0 is the second preset threshold.
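The first branch of the yaw-based selection above — a centered reference position for a roughly frontal face, otherwise a shifted one — can be sketched as follows. The threshold values, parameter names, and the interpretation of which way the subject faces are illustrative assumptions, not from the patent.

```python
def reference_horizontal_position(yaw, interval, x_center, x_shifted_right, x_shifted_left):
    """Choose the reference horizontal position x_0 from the face yaw angle,
    following the claim's first branch: the first preset threshold when the
    yaw lies in the preset interval, the (larger) second threshold below the
    interval, and the (smaller) third threshold above it."""
    lo, hi = interval
    if lo <= yaw <= hi:
        return x_center          # first preset threshold
    if yaw < lo:
        return x_shifted_right   # second preset threshold (> first)
    return x_shifted_left        # third preset threshold (< first)
```

A plausible use is composition guidance: when the face turns to one side, the reference position shifts so the person looks into, rather than out of, the frame.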
  29. The image capturing device according to any one of claims 16 to 28, wherein the CPU is configured to obtain status information of the electronic apparatus via a sensor; and generating the first prompt message according to the state information and the first deviation information.
  30. The image capturing apparatus according to claim 29, wherein the sensor includes: at least one of a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, a gravitational acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, or a bone conduction sensor.
  31. An image capturing apparatus, characterized in that the apparatus comprises:
    the acquiring unit is used for acquiring first body characteristic information of a target person in the first image based on convolutional neural network operation;
    the processing unit is used for generating a first characteristic parameter corresponding to the first image according to the first body characteristic information, wherein the first characteristic parameter is used for identifying the position information of the image corresponding to the target person in the first image and the shooting pitch angle information of the electronic equipment; if the first characteristic parameter is out of the range of a preset target characteristic parameter, acquiring first deviation information of the first characteristic parameter and the upper limit or the lower limit of the target characteristic parameter; generating first prompt information according to the first deviation information; the first prompt message is used for guiding a photographer to move the electronic equipment to photograph the target person.
  32. The image capturing apparatus of claim 31, wherein the processing unit is further configured to perform face recognition on the target person based on a convolutional neural network operation; if the face of the target person is identified, acquiring the first body feature information; and if the face of the target person is not recognized, terminating.
  33. The image capturing apparatus according to claim 31 or 32, wherein the first body feature information is a plurality of key points of the target person;
    the processing unit is further used for determining the reasonability of a human body frame formed by connecting the plurality of key points based on convolutional neural network operation; if the human body frame of the target person is reasonable, generating a first characteristic parameter corresponding to the first image according to the first body characteristic information; and if the human body frame of the target person is not reasonable, terminating.
  34. The image capturing apparatus according to any one of claims 31 to 33, wherein the first feature parameter is generated in a case where a pose of the target person in the first image satisfies a preset condition.
  35. The image capturing apparatus according to claim 34, wherein the preset condition includes: the target person is the whole body and is in a standing posture.
  36. The image capturing apparatus according to any one of claims 31 to 35,
    the acquisition unit is further used for acquiring second body characteristic information of the target person in a second image based on convolutional neural network operation; the second image is the image of the target person shot after the electronic equipment is adjusted according to the first prompt information;
    the processing unit is further used for generating a second characteristic parameter corresponding to the second image according to the second body characteristic information; if the second characteristic parameter is out of the range of the preset target characteristic parameter, second deviation information of the second characteristic parameter and the upper limit or the lower limit of the target characteristic parameter is obtained; generating second prompt information according to the second deviation information; the second prompt message is used for guiding a photographer to move the electronic equipment to photograph the target person.
  37. The image capturing apparatus according to any one of claims 34 to 36,
    the processing unit is specifically configured to generate the first characteristic parameter corresponding to the first image according to the first body characteristic information if the proportion of persons among the target persons whose posture meets the preset condition is greater than or equal to a preset proportion.
  38. The image capturing apparatus as claimed in any one of claims 31 to 37, wherein the first body feature information comprises a left ankle key point and a right ankle key point of the target person in the first image; the first characteristic parameter comprises an ankle position y_ankle of the target person in the first image in a preset coordinate system and a shooting pitch angle β of the electronic device; the target characteristic parameter comprises a reference ankle position y_0 of the target person in the preset coordinate system and a reference shooting pitch angle β_0 of the electronic device; and the first deviation information comprises a deviation Δy in the y-axis direction and a shooting pitch angle deviation Δβ;
    the processing unit is specifically configured to calculate a difference between the reference ankle position y_0 of the target person and the ankle position y_ankle of the target person to obtain the deviation Δy in the y-axis direction; and to calculate a difference between the upper limit or the lower limit of the reference shooting pitch angle β_0 and the shooting pitch angle β to obtain the shooting pitch angle deviation Δβ.
  39. The image capturing apparatus as claimed in claim 38, wherein the first body feature information further comprises a head top key point of the target person in the first image; the first characteristic parameter further comprises a horizontal position x_head of the target person in the first image in the preset coordinate system and a person image height h_s; the target characteristic parameter further comprises a reference horizontal position x_0 of the target person in the preset coordinate system and a reference person image height h_s0; and the first deviation information further comprises a deviation Δx in the x-axis direction and a distance deviation Δz;
    the processing unit is specifically configured to calculate a difference between the reference horizontal position x_0 of the target person and the horizontal position x_head of the target person to obtain the deviation Δx in the x-axis direction; and to calculate a difference between the reference person image height h_s0 of the target person and the person image height h_s of the target person to obtain the distance deviation Δz.
  40. The image capturing apparatus according to claim 38,
    if there is one target person, y_ankle is the vertical coordinate of the lower of the left ankle key point and the right ankle key point of the target person, and β is the shooting pitch angle at which the electronic device shoots the first image;
    if there are multiple target persons, y_ankle is the vertical coordinate of the lowest point among the left ankle key points and the right ankle key points of the multiple target persons, and β is the shooting pitch angle at which the electronic device shoots the first image.
  41. The image capturing apparatus according to claim 39,
    if there is one target person, x_head is the abscissa of the head top key point of the target person, and h_s is the difference between y_ankle and the vertical coordinate of the head top key point of the target person;
    if there are multiple target persons, x_head is the average of the abscissas of the head top key points of at least two of the multiple target persons, and h_s is the difference between y_ankle and the vertical coordinate of the highest point among the head top key points of the multiple target persons.
  42. The image capturing apparatus according to any one of claims 31 to 41, wherein the first prompt information includes: movement indication information and/or rotation indication information, wherein the movement indication information is used for indicating a photographer to move the electronic equipment; the rotation indication information is used for indicating a photographer to rotate the electronic equipment.
  43. The image capturing apparatus as claimed in any one of claims 31 to 42, wherein the obtaining unit is further configured to obtain a face yaw angle of the target person if there is one target person;
    if the face yaw angle is within a preset angle interval, the reference horizontal position x_0 is a first preset threshold;
    if the face yaw angle is smaller than the minimum value of the preset angle interval, the reference horizontal position x_0 is a second preset threshold, the second preset threshold being greater than the first preset threshold; and if the face yaw angle is greater than the maximum value of the preset angle interval, the reference horizontal position x_0 is a third preset threshold, the third preset threshold being smaller than the first preset threshold; or,
    if the face yaw angle is smaller than the minimum value of the preset angle interval, the reference horizontal position x_0 is the third preset threshold; and if the face yaw angle is greater than the maximum value of the preset angle interval, the reference horizontal position x_0 is the second preset threshold.
  44. The image capturing device of any one of claims 31 to 43, wherein the processing unit is specifically configured to obtain status information of the electronic device via a sensor; and generating the first prompt message according to the state information and the first deviation information.
  45. The image capture device of claim 44, wherein the sensor comprises: at least one of a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, a gravitational acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, or a bone conduction sensor.
  46. A computer storage medium having computer program code stored therein, which when run on a processor causes the processor to perform the image capturing method according to any one of claims 1-15.
CN201880090707.2A 2018-08-31 2018-08-31 Image shooting method and device Pending CN111801932A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/103687 WO2020042188A1 (en) 2018-08-31 2018-08-31 Image capturing method and device

Publications (1)

Publication Number Publication Date
CN111801932A true CN111801932A (en) 2020-10-20

Family

ID=69643808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880090707.2A Pending CN111801932A (en) 2018-08-31 2018-08-31 Image shooting method and device

Country Status (2)

Country Link
CN (1) CN111801932A (en)
WO (1) WO2020042188A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114281066A (en) * 2020-09-17 2022-04-05 顺丰科技有限公司 Method for controlling operation of robot and related equipment
CN112527024B (en) * 2020-11-20 2023-09-19 湖北航天技术研究院总体设计所 Platform straightening system and straightening method thereof
CN114245210B (en) * 2021-09-22 2024-01-09 北京字节跳动网络技术有限公司 Video playing method, device, equipment and storage medium
CN114298254B (en) * 2021-12-27 2024-03-15 亮风台(上海)信息科技有限公司 Method and device for obtaining display parameter test information of optical device

Citations (7)

Publication number Priority date Publication date Assignee Title
CN103561211A (en) * 2013-10-25 2014-02-05 广东欧珀移动通信有限公司 Shooting angle reminding method and system for shooting terminal
US20140300758A1 (en) * 2013-04-04 2014-10-09 Bao Tran Video processing systems and methods
CN105847691A (en) * 2016-04-15 2016-08-10 乐视控股(北京)有限公司 Camera shooting device control method and device
CN107509032A (en) * 2017-09-08 2017-12-22 维沃移动通信有限公司 One kind is taken pictures reminding method and mobile terminal
CN107635095A (en) * 2017-09-20 2018-01-26 广东欧珀移动通信有限公司 Shoot method, apparatus, storage medium and the capture apparatus of photo
CN107749951A (en) * 2017-11-09 2018-03-02 睿魔智能科技(东莞)有限公司 A kind of visually-perceptible method and system for being used for unmanned photography
CN108174096A (en) * 2017-12-29 2018-06-15 广东欧珀移动通信有限公司 Method, apparatus, terminal and the storage medium of acquisition parameters setting

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP4544028B2 (en) * 2005-05-13 2010-09-15 日産自動車株式会社 In-vehicle image processing apparatus and image processing method
CN101877061A (en) * 2009-04-30 2010-11-03 宫雅卓 Binoculus iris image acquiring method and device based on single camera
JP5500034B2 (en) * 2010-10-06 2014-05-21 リコーイメージング株式会社 An imaging device equipped with a camera shake correction mechanism

Non-Patent Citations (2)

Title
张子夫: "Research and Implementation of a Target Tracking Algorithm Based on Convolutional Neural Networks", Master's Thesis *
肖林霞: "Research on Moving Target Tracking Based on Convolutional Neural Networks", Master's Thesis *

Also Published As

Publication number Publication date
WO2020042188A1 (en) 2020-03-05

Similar Documents

Publication Publication Date Title
CN111801932A (en) Image shooting method and device
US9036038B2 (en) Information processing apparatus and method for extracting and categorizing postures of human figures
JP6106921B2 (en) Imaging apparatus, imaging method, and imaging program
JP5450739B2 (en) Image processing apparatus and image display apparatus
JP6330036B2 (en) Image processing apparatus and image display apparatus
US20130070142A1 (en) Imaging Device and Imaging Method for Imaging Device
US20170161553A1 (en) Method and electronic device for capturing photo
JP6003135B2 (en) Image processing apparatus, image processing method, and imaging apparatus
KR20140010541A (en) Method for correcting user's gaze direction in image, machine-readable storage medium and communication terminal
US8400532B2 (en) Digital image capturing device providing photographing composition and method thereof
KR20170112763A (en) Electronic apparatus and operating method thereof
CN110677592B (en) Subject focusing method and device, computer equipment and storage medium
US20150296132A1 (en) Imaging apparatus, imaging assist method, and non-transitory recoding medium storing an imaging assist program
CN114339102B (en) Video recording method and equipment
CN113850726A (en) Image transformation method and device
JP2007208425A (en) Display method for displaying denoting identification region together with image, computer-executable program, and imaging apparatus
JP2005149370A (en) Imaging device, personal authentication device and imaging method
US20220329729A1 (en) Photographing method, storage medium and electronic device
US20200364832A1 (en) Photographing method and apparatus
CN110365910B (en) Self-photographing method and device and electronic equipment
CN115147339A (en) Human body key point detection method and related device
WO2021147650A1 (en) Photographing method and apparatus, storage medium, and electronic device
KR20100130670A (en) Apparatus and method for obtaining image using face detection in portable terminal
WO2022178724A1 (en) Image photographing method, terminal device, photographing apparatus, and storage medium
CN112672066B (en) Image processing apparatus, image capturing apparatus, control method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20220614