CN104135610B - Information processing method and electronic device - Google Patents


Info

Publication number: CN104135610B
Application number: CN201410312119.2A
Authority: CN (China)
Legal status: Active
Inventors: 王红光, 张磊
Assignee (original and current): Lenovo Beijing Ltd
Other versions: CN104135610A (in Chinese)
Events: application filed by Lenovo Beijing Ltd with priority to CN201410312119.2A; published as CN104135610A; granted and published as CN104135610B
Classification landscape: Studio Devices

Abstract

The invention discloses an information processing method applied to an electronic device that includes an acquisition unit and a sensing unit. The method includes: acquiring N first images of a first main body through the acquisition unit; detecting, through the sensing unit, the relative relationship between the first main body corresponding to each first image and the electronic device, to obtain a first parameter set comprising N sub-parameters corresponding to the N first images; analyzing the N first images to obtain a first analysis result; and, when the first analysis result satisfies a predetermined condition, outputting first information that includes at least the sub-parameter corresponding to the first image satisfying the predetermined condition. The invention also discloses an electronic device. With this scheme, the most suitable image acquisition position can be determined according to the relative relationship with the shooting subject, and more suitable acquisition parameters can be provided.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to an information processing technology, and in particular, to an information processing method and an electronic device.
Background
At present, when an electronic device such as a mobile terminal or a Personal Digital Assistant (PDA) is used for photographing, a user who wants to find the optimal photographing angle must first take trial photos at a number of different angles, then look through the photos one by one to pick a satisfactory one, treat the angle at which that photo was taken as the optimal photographing angle, and finally perform formal photographing at that angle. Likewise, a user photographing himself must take trial shots at a number of different angles with the camera, view the resulting images one by one, select the most satisfactory image, treat the angle at which it was taken as the optimal shooting angle, and then perform formal shooting at that angle. Current devices therefore cannot determine the most suitable image acquisition position according to the relative relationship with the shooting subject, nor provide more suitable acquisition parameters.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide an information processing method and an electronic device, which can determine the most suitable image acquisition position according to the relative relationship with a shooting subject and provide more appropriate acquisition parameters.
The technical scheme of the embodiment of the invention is realized as follows:
An embodiment of the present invention provides an information processing method applied to an electronic device, where the electronic device includes an acquisition unit and a sensing unit; the method includes the following steps:
acquiring N first images of a first main body through the acquisition unit;
detecting the relative relation between the first main body corresponding to each first image and the electronic equipment through the sensing unit to obtain a first parameter set, wherein the first parameter set comprises N sub-parameters corresponding to the N first images;
analyzing the N first images to obtain a first analysis result;
when the first analysis result meets a preset condition, outputting first information, wherein the first information at least comprises a sub-parameter corresponding to the first image meeting the preset condition;
wherein N is a positive integer greater than or equal to 1.
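The claimed steps can be sketched as a short program. This is a minimal illustration, not the patented implementation: the `SubParameter` fields, the `score` callback, and the scalar `target` are all assumptions standing in for the patent's "sub-parameter", "first analysis result", and "predetermined condition".

```python
from dataclasses import dataclass

@dataclass
class SubParameter:
    # Per-image information recorded by the sensing unit at capture time
    # (hypothetical fields; the patent only requires one sub-parameter per image).
    shooting_angle_deg: float
    relative_distance_m: float

def select_best_capture(images, sub_params, score, target):
    """Sketch of the claimed method:
    1. N first images are acquired (``images``).
    2. The sensing unit yields one sub-parameter per image (``sub_params``).
    3. Each image is analyzed (``score``), giving the first analysis result.
    4. The sub-parameter of the image whose score is closest to the
       predetermined condition (``target``) is output as the first information.
    """
    assert len(images) == len(sub_params) and len(images) >= 1  # N >= 1
    results = [score(img) for img in images]                    # first analysis result
    best = min(range(len(images)), key=lambda i: abs(results[i] - target))
    return sub_params[best]                                     # first information
```

For example, with stand-in "images" that are just mean-brightness values, `select_best_capture([0.2, 0.9, 0.55], params, score=lambda x: x, target=0.5)` would return the sub-parameter recorded for the third image, whose brightness is closest to 0.5.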
An embodiment of the present invention further provides an electronic device, where the electronic device includes an acquisition unit and a sensing unit; the electronic device further includes:
the acquisition unit is used for acquiring N first images of the first main body;
the first obtaining unit is used for detecting the relative relation between the first main body corresponding to each first image and the electronic equipment through the sensing unit to obtain a first parameter set, and the first parameter set comprises N sub-parameters corresponding to the N first images;
the first analysis unit is used for analyzing the N first images to obtain a first analysis result; when the first analysis result meets a preset condition, triggering a first output unit;
a first output unit, configured to output first information, where the first information includes at least a sub-parameter corresponding to the first image satisfying the predetermined condition;
wherein N is a positive integer greater than or equal to 1.
The information processing method and electronic device provided by the embodiments of the present invention are applied to an electronic device that includes an acquisition unit and a sensing unit. The method includes: acquiring N first images of a first main body through the acquisition unit; detecting, through the sensing unit, the relative relationship between the first main body corresponding to each first image and the electronic device to obtain a first parameter set, where the first parameter set includes N sub-parameters corresponding to the N first images; analyzing the N first images to obtain a first analysis result; and, when the first analysis result meets a predetermined condition, outputting first information that includes at least the sub-parameter corresponding to the first image meeting the predetermined condition, where N is a positive integer greater than or equal to 1. With this technical scheme, the most suitable image acquisition position can be determined according to the relative relationship with the shooting subject, and more suitable acquisition parameters can be provided.
Drawings
Fig. 1 is a schematic flow chart of an implementation of the first embodiment of the information processing method provided by the present invention;
Fig. 2 is a schematic flow chart of an implementation of the second embodiment of the information processing method provided by the present invention;
Fig. 3 is a schematic flow chart of an implementation of the third embodiment of the information processing method provided by the present invention;
Fig. 4 is a schematic flow chart of an implementation of the fourth embodiment of the information processing method provided by the present invention;
Fig. 5 is a schematic flow chart of an implementation of the fifth embodiment of the information processing method provided by the present invention;
Fig. 6 is a schematic flow chart of an implementation of the sixth embodiment of the information processing method provided by the present invention;
Fig. 7 is a schematic flow chart of an implementation of the seventh embodiment of the information processing method provided by the present invention;
Fig. 8 is a schematic flow chart of an implementation of the eighth embodiment of the information processing method provided by the present invention;
Fig. 9 is a schematic structural diagram of the first embodiment of the electronic device provided by the present invention;
Fig. 10 is a schematic structural diagram of the second embodiment of the electronic device provided by the present invention;
Fig. 11 is a schematic structural diagram of the third embodiment of the electronic device provided by the present invention;
Fig. 12 is a schematic structural diagram of the fourth embodiment of the electronic device provided by the present invention;
Fig. 13 is a schematic structural diagram of the fifth embodiment of the electronic device provided by the present invention;
Fig. 14 is a schematic structural diagram of the sixth embodiment of the electronic device provided by the present invention;
Fig. 15 is a schematic structural diagram of the seventh embodiment of the electronic device provided by the present invention;
Fig. 16 is a schematic structural diagram of the eighth embodiment of the electronic device provided by the present invention.
Detailed Description
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are intended only to illustrate and explain the present invention, and are not to be construed as limiting it.
In the following embodiments of the information processing method and the electronic device provided by the present invention, the electronic device includes, but is not limited to, all types of computers (such as industrial control computers, personal computers, and all-in-one computers), tablet computers, mobile phones, electronic readers, and the like. In the embodiments of the present invention, the electronic device is preferably a mobile phone.
The first embodiment of the information processing method provided by the present invention is applied to an electronic device, where the electronic device includes an acquisition unit and a sensing unit; the acquisition unit may be a camera, and the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
Fig. 1 is a schematic flow chart illustrating an implementation of a first embodiment of an information processing method according to the present invention; as shown in fig. 1, the method includes:
step 101: acquiring N first images of a first main body through the acquisition unit;
Here, the first main body may be a human body, a scene, or the like. Taking a human body as an example of the first main body: the electronic device moves while shooting, and along the shooting track a first image containing the human body is captured each time the shooting angle changes by a preset amount, so that N first images are captured at N shooting angles; alternatively, N first images of the human body are captured continuously during the motion.
Step 102: detecting the relative relation between the first main body corresponding to each first image and the electronic equipment through the sensing unit to obtain a first parameter set, wherein the first parameter set comprises N sub-parameters corresponding to the N first images;
Here, the sub-parameters may specifically be the image parameter information and motion parameter information at the time each first image is captured; the first parameter set includes the set of image parameter information and motion parameter information corresponding to each first image.
Further, a certain number of feature points in each first image are obtained through the sensing unit, and the position change of the feature points in the first image is detected, so that the relative relationship between the first main body corresponding to each first image and the electronic device is obtained, and the first parameter set is obtained.
Specifically, a certain number of feature points in each first image are acquired by the camera, and the feature points may be selected from areas with different colors and/or different brightness in the first image. By the difference in the position of the feature point in the first image, image parameter information and motion parameter information at the time of capturing the first image can be obtained.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, image parameter information and motion parameter information employed in capturing the first image can be obtained.
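The three-dimensional motion vector described above can be sketched by twice integrating accelerometer samples from the relative zero point. This is a simplified illustration only: it assumes gravity-compensated samples at a fixed sampling interval, and a real implementation would also need gravity subtraction, filtering, and drift correction.

```python
def integrate_trajectory(accel_samples, dt):
    """Estimate the device's displacement from the reference ("relative zero")
    point by twice integrating accelerometer samples (ax, ay, az) in m/s^2,
    taken every ``dt`` seconds. Returns the three-dimensional motion vector
    of the shooting track as (x, y, z) in metres."""
    vx = vy = vz = 0.0
    x = y = z = 0.0
    for ax, ay, az in accel_samples:
        vx += ax * dt; vy += ay * dt; vz += az * dt  # integrate acceleration -> velocity
        x += vx * dt;  y += vy * dt;  z += vz * dt   # integrate velocity -> position
    return (x, y, z)
```

For instance, ten samples of constant 1 m/s² acceleration along x at dt = 0.1 s yield a displacement of 0.55 m along x under this discrete scheme.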
Step 103: analyzing the N first images to obtain a first analysis result;
Here, the brightness of each of the N first images, the proportion of the human body in the image (a first proportion), and the proportion of the facial features of the human body in the image (a second proportion) are analyzed to obtain the first analysis result.
step 104: when the first analysis result meets a preset condition, outputting first information, wherein the first information at least comprises a sub-parameter corresponding to the first image meeting the preset condition; wherein N is a positive integer greater than or equal to 1;
Here, among the N first images, a first image whose brightness satisfies the predetermined condition may be selected according to image brightness alone, and the shooting angle corresponding to that first image output to the user; alternatively, a first image whose proportion satisfies the predetermined condition may be selected according to the first proportion or the second proportion alone; or a first image satisfying the predetermined condition may be selected according to any two or all three of the image brightness, the first proportion, and the second proportion, and the corresponding shooting angle output. In any of the above schemes, when the shooting angle is output, at least one of the first image, its brightness parameter, the first proportion, and the second proportion may also be output.
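The "any one, two, or three parameters" selection above can be sketched as a weighted distance from the predetermined targets; a criterion is disabled by giving it weight 0. The weights, targets, and field ordering here are illustrative assumptions, not part of the patent.

```python
def combined_score(brightness, p1, p2, targets, weights):
    """Distance of one image's (brightness, first proportion, second proportion)
    triple from the predetermined targets. The image minimizing this score
    would be the one "satisfying the predetermined condition"; setting a
    weight to 0 reproduces selection by only one or two of the parameters."""
    measured = (brightness, p1, p2)
    return sum(w * abs(m - t) for m, t, w in zip(measured, targets, weights))
```

With weights `(1, 0, 0)` the score depends on brightness alone; with `(1, 1, 1)` an image matching all three targets exactly scores 0.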
Therefore, in this embodiment of the invention, the N acquired first images are analyzed to obtain a first analysis result; from the first analysis result, a first image meeting the predetermined condition is selected, and the shooting angle used when that first image was captured is output. In this way, the electronic device can determine the most suitable image acquisition position according to its relative relationship with the shooting subject and provide more suitable acquisition parameters, improving the user experience and highlighting the diversity of the electronic device's functions.
The second embodiment of the information processing method provided by the present invention is applied to an electronic device, where the electronic device includes an acquisition unit and a sensing unit; the acquisition unit may be a camera, and the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
FIG. 2 is a schematic flow chart illustrating an implementation of a second embodiment of the information processing method according to the present invention; as shown in fig. 2, the method includes:
step 201: acquiring N first images of a first main body through the acquisition unit;
Here, the first main body may be a human body, a scene, or the like. Taking a human body as an example of the first main body: the electronic device moves while shooting, and along the shooting track a first image containing the human body is captured each time the shooting angle changes by a preset amount, so that N first images are captured at N shooting angles; alternatively, N first images of the human body are captured continuously during the motion.
Step 202: detecting the relative relation between the first main body corresponding to each first image and the electronic equipment through the sensing unit to obtain a first parameter set, wherein the first parameter set comprises N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, a certain number of feature points in each first image are obtained through the sensing unit, and the position change of the feature points in the first image is detected, so that the relative relationship between the first main body corresponding to each first image and the electronic device is obtained, and the first parameter set is obtained.
Specifically, a certain number of feature points in each first image are acquired by the camera, and the feature points may be selected from areas with different colors and/or different brightness in the first image. By the difference in the position of the feature point in the first image, image parameter information and motion parameter information at the time of capturing the first image can be obtained.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, image parameter information and motion parameter information employed in capturing the first image can be obtained.
Step 203: acquiring a first brightness parameter of each first image in the N first images;
Here, the brightness parameter of each first image is acquired; this step serves as a further description of the method of analyzing the N first images to obtain the first analysis result.
Step 204: determining the first brightness parameter closest to a predetermined condition among the N first brightness parameters; obtaining the sub-parameter corresponding to the determined first brightness parameter, and outputting the first information at least comprising the sub-parameter;
selecting a first brightness parameter closest to a preset condition from the N first brightness parameters, determining a first image with the first brightness parameter, searching a shooting angle corresponding to the first image from a first parameter set, and outputting the shooting angle; preferably, when the shooting angle is output, the first brightness parameter and/or the first image and the like can also be output; wherein N is a positive integer greater than or equal to 1.
This step may serve as a further description of the method of outputting first information when the first analysis result satisfies a predetermined condition, where the first information includes at least the sub-parameter corresponding to the first image satisfying the predetermined condition.
Therefore, in this embodiment of the invention, the N acquired first images are analyzed according to the brightness of each first image to obtain a first analysis result; from the first analysis result, the first image meeting the predetermined condition is selected, and the shooting angle used when that first image was captured is output. In this way, the electronic device can determine the most suitable image acquisition position according to its relative relationship with the shooting subject and provide more suitable acquisition parameters, improving the user experience and highlighting the diversity of the electronic device's functions.
The third embodiment of the information processing method provided by the present invention is applied to an electronic device, where the electronic device includes an acquisition unit and a sensing unit; the acquisition unit may be a camera, and the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
FIG. 3 is a schematic flow chart illustrating an implementation of a third embodiment of an information processing method according to the present invention; as shown in fig. 3, the method includes:
step 301: acquiring N first images of a first main body through the acquisition unit;
Here, the first main body may be a human body, a scene, or the like. Taking a human body as an example of the first main body: the electronic device moves while shooting, and along the shooting track a first image containing the human body is captured each time the shooting angle changes by a preset amount, so that N first images are captured at N shooting angles; alternatively, N first images of the human body are captured continuously during the motion.
Step 302: detecting the relative relation between the first main body corresponding to each first image and the electronic equipment through the sensing unit to obtain a first parameter set, wherein the first parameter set comprises N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, a certain number of feature points in each first image are obtained through the sensing unit, and the position change of the feature points in the first image is detected, so that the relative relationship between the first main body corresponding to each first image and the electronic device is obtained, and the first parameter set is obtained.
Specifically, a certain number of feature points in each first image are acquired by the camera, and the feature points may be selected from areas with different colors and/or different brightness in the first image. By the difference in the position of the feature point in the first image, image parameter information and motion parameter information at the time of capturing the first image can be obtained.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, image parameter information and motion parameter information employed in capturing the first image can be obtained.
Step 303: acquiring a first target object in each first image from the N first images; calculating a first proportion value of each first target object in the corresponding first image;
Here, the first target object may be a human body, or an object in the shooting environment, such as an ornament; taking a human body as the first target object, the proportion of the human body in each first image is calculated.
This step is used as a further description of a method for analyzing the N first images to obtain a first analysis result.
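The first proportion value for one image can be sketched as a pixel-count ratio over a binary segmentation mask of the first target object. How the mask itself is produced (for example, by a body detector) is outside this sketch and is not specified by the patent.

```python
def first_proportion(mask):
    """Proportion of the first target object (e.g. the human body) in a first
    image, given a binary mask: a list of rows, where 1 marks a pixel
    belonging to the target object and 0 marks background."""
    total = sum(len(row) for row in mask)       # all pixels in the image
    subject = sum(sum(row) for row in mask)     # pixels covered by the target
    return subject / total
```

For instance, a mask in which one pixel out of four is marked gives a first proportion value of 0.25; the first image whose value is closest to the first predetermined value would then be selected.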
Step 304: determining a first proportional value closest to a first predetermined value; obtaining a sub-parameter corresponding to the determined first scale value, and outputting the first information at least including the sub-parameter.
Here, among the N first proportion values, the first proportion value closest to the first predetermined value is selected, the first image having that proportion value is determined, the shooting angle corresponding to that first image is looked up in the first parameter set, and the shooting angle is output; preferably, when the shooting angle is output, the first proportion value and/or the first image may also be output; where N is a positive integer greater than or equal to 1.
This step may serve as a further description of the method of outputting first information when the first analysis result satisfies a predetermined condition, where the first information includes at least the sub-parameter corresponding to the first image satisfying the predetermined condition.
Therefore, in this embodiment of the invention, the N acquired first images are analyzed according to the first proportion value of the first target object in each first image to obtain a first analysis result; from the first analysis result, a first proportion value meeting the predetermined condition (for example, the one closest to the first predetermined value) is selected, the corresponding first image is determined, and the shooting angle used when that first image was captured is output. In this way, the electronic device can determine the most suitable image acquisition position according to its relative relationship with the shooting subject and provide more suitable acquisition parameters, improving the user experience and highlighting the diversity of the electronic device's functions.
The fourth embodiment of the information processing method provided by the present invention is applied to an electronic device, where the electronic device includes an acquisition unit and a sensing unit; the acquisition unit may be a camera, and the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
FIG. 4 is a schematic flow chart illustrating an implementation of a fourth embodiment of the information processing method according to the present invention; as shown in fig. 4, the method includes:
step 401: acquiring N first images of a first main body through the acquisition unit;
Here, the first main body may be a human body, a scene, or the like. Taking a human body as an example of the first main body: the electronic device moves while shooting, and along the shooting track a first image containing the human body is captured each time the shooting angle changes by a preset amount, so that N first images are captured at N shooting angles; alternatively, N first images of the human body are captured continuously during the motion.
Step 402: detecting the relative relation between the first main body corresponding to each first image and the electronic equipment through the sensing unit to obtain a first parameter set, wherein the first parameter set comprises N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, a certain number of feature points in each first image are obtained through the sensing unit, and the position change of the feature points in the first image is detected, so that the relative relationship between the first main body corresponding to each first image and the electronic device is obtained, and the first parameter set is obtained.
Specifically, a certain number of feature points in each first image are acquired by the camera, and the feature points may be selected from areas with different colors and/or different brightness in the first image. By the difference in the position of the feature point in the first image, image parameter information and motion parameter information at the time of capturing the first image can be obtained.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, image parameter information and motion parameter information employed in capturing the first image can be obtained.
Step 403: acquiring a first target object in each first image from the N first images; acquiring at least three target points of each of the N first target objects; generating corresponding first sub-images according to the acquired at least three target points of each first target object; calculating a second proportion value of each first sub-image in the corresponding first image;
Here, the first target object may be a human body, or an object in the shooting environment, such as an ornament. Taking a human body as the first target object and the 5th first image as an example: the facial features of the human body in the 5th first image, such as the ears, eyes, nose, mouth, and eyebrows, are taken as candidate target points, and three of them, such as the nose, mouth, and eyebrows, are selected. The three selected target points are connected pairwise to obtain a first sub-image, and the proportion of this first sub-image in the 5th first image is calculated; this proportion is taken as the second proportion value.
This step further describes the method of analyzing the N first images to obtain the first analysis result.
Step 404: determining a second proportional value closest to a second predetermined value; obtaining a sub-parameter corresponding to the determined second proportional value, and outputting the first information at least comprising the sub-parameter;
here, among the N calculated second proportional values, the second proportional value closest to the second predetermined value is selected, the first image having that second proportional value is determined, the shooting angle corresponding to that first image is looked up in the first parameter set, and the shooting angle is output; preferably, when the shooting angle is output, the first brightness parameter and/or the first image and the like may also be output; wherein N is a positive integer greater than or equal to 1. This step further describes the method of outputting the first information when the first parsing result satisfies the predetermined condition, where the first information includes at least the sub-parameter corresponding to the first image satisfying the predetermined condition.
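The selection in step 404 amounts to a nearest-value search over the N second proportional values. A minimal sketch, assuming the first parameter set is an index-aligned list of sub-parameter dictionaries (a representation the patent does not specify):

```python
# Hypothetical sketch of step 404: pick the second proportional value closest
# to the second predetermined value, then return the aligned sub-parameter.

def select_closest(second_proportions, sub_parameters, predetermined_value):
    # second_proportions: list of N ratios, one per first image.
    # sub_parameters: first parameter set, aligned by index with the images.
    best_index = min(range(len(second_proportions)),
                     key=lambda i: abs(second_proportions[i] - predetermined_value))
    return sub_parameters[best_index]

# Invented example: three candidate images; the target proportion is 0.10.
angle = select_closest([0.04, 0.11, 0.25],
                       [{"angle": 15}, {"angle": 30}, {"angle": 45}],
                       0.10)
# angle -> {"angle": 30}
```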
Therefore, in the embodiment of the invention, the acquired N first images are analyzed according to the second proportion value of each first sub-image in the corresponding first image, so as to obtain a first analysis result; in the first analysis result, a second proportional value which meets the preset condition, such as the second proportional value closest to a second preset value is selected, the first image corresponding to the second proportional value is determined, and the shooting angle adopted when the first image is shot is output, so that the electronic equipment can determine the most suitable image acquisition position according to the relative relation with the shooting main body, provide more suitable acquisition parameters, improve the user experience and highlight the diversity of the functions of the electronic equipment.
A fifth embodiment of the information processing method provided by the present invention is applied to an electronic device, where the electronic device includes: an acquisition unit and a sensing unit; the acquisition unit may be a camera; the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
Fig. 5 is a schematic flow chart of an implementation of a fifth embodiment of the information processing method provided by the present invention; as shown in fig. 5, the method includes:
step 501: acquiring N first images of a first main body through the acquisition unit;
here, the first main body may be a human body, a scene, or the like; taking the first main body being a human body as an example, the electronic device moves while shooting, and along the shooting track a first image including the human body is shot each time the shooting angle changes by a preset amount; at N shooting angles, N first images are shot in total, or N first images of the human body are shot continuously during the motion.
Step 502: detecting the relative relation between the first main body corresponding to each first image and the electronic equipment through the sensing unit to obtain a first parameter set, wherein the first parameter set comprises N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, a certain number of feature points in each first image are obtained through the sensing unit, and the position change of the feature points in the first image is detected, so that the relative relationship between the first main body corresponding to each first image and the electronic device is obtained, and the first parameter set is obtained.
Specifically, a certain number of feature points in each first image are acquired by the camera, and the feature points may be selected from areas with different colors and/or different brightness in the first image. By the difference in the position of the feature point in the first image, image parameter information and motion parameter information at the time of capturing the first image can be obtained.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, image parameter information and motion parameter information employed in capturing the first image can be obtained.
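One naive way to realize the three-dimensional motion vector described above is to integrate accelerometer samples twice, starting from the relative zero point. This is only a sketch under strong assumptions (gravity already removed, fixed sampling interval, no drift correction, invented sample values), not the patent's implementation:

```python
# Hedged sketch: recover a displacement track relative to a zero point by
# double-integrating (ax, ay, az) samples with simple Euler steps.

def integrate_track(accel_samples, dt):
    # accel_samples: list of (ax, ay, az) in m/s^2 at a fixed interval dt (s).
    velocity = [0.0, 0.0, 0.0]
    position = [0.0, 0.0, 0.0]   # displacement from the relative zero point
    track = []
    for a in accel_samples:
        for k in range(3):
            velocity[k] += a[k] * dt
            position[k] += velocity[k] * dt
        track.append(tuple(position))
    return track

# Invented example: brief constant acceleration along x.
path = integrate_track([(1.0, 0.0, 0.0)] * 5, 0.1)
```

In practice a real device would fuse the gravity sensor's orientation estimate to rotate samples into a world frame before integrating.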
Step 503: acquiring a first brightness parameter of each first image in the N first images; acquiring a first target object in each first image; calculating a first proportion value of each first target object in the corresponding first image; acquiring a first weight and a second weight distributed for each first image; performing first operation on the obtained N first brightness parameters and the corresponding first weight values to obtain a first sub-result of the corresponding first image; performing first operation on the obtained N first proportional values and the corresponding second weight values to obtain a second sub-result corresponding to the first image; performing second operation on the first sub-result and the second sub-result of each first image to obtain a first operation result of the first image;
here, the first target object may be a human body, or may be an object in the shooting environment, such as an ornament; taking the first target object being a human body and the 5th first image as an example, the brightness value of the 5th first image is acquired, the proportion of the human body in the 5th first image is calculated, and this proportion is taken as the first proportion value; the first weight assigned to the image is multiplied by the brightness parameter to obtain the first sub-result of the 5th first image; the second weight assigned to the image is multiplied by the first proportion value to obtain the second sub-result of the 5th first image; the first sub-result and the second sub-result of the 5th first image are added to obtain the first operation result, and these steps are repeated until the N first images yield N first operation results. The first weight and the second weight are preset and can be set flexibly according to the actual application.
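The first operation (weight times measurement) and second operation (addition) described above can be sketched as follows; the weights and measurements are invented for illustration:

```python
# Hypothetical sketch of step 503's arithmetic: for each first image,
# first sub-result = w1 * brightness, second sub-result = w2 * proportion,
# first operation result = their sum.

def first_operation_results(brightness, first_proportions, w1, w2):
    # brightness: N first brightness parameters (normalized here to [0, 1]);
    # first_proportions: N ratios of the first target object in each image.
    results = []
    for b, p in zip(brightness, first_proportions):
        first_sub = w1 * b          # first sub-result
        second_sub = w2 * p         # second sub-result
        results.append(first_sub + second_sub)
    return results

# Invented example with three first images and equal weights.
scores = first_operation_results([0.6, 0.8, 0.7], [0.20, 0.35, 0.30], 0.5, 0.5)
```

Step 504 then picks the score closest to the first predetermined result, exactly as the nearest-value selection in the fourth embodiment.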
This step further describes the method of analyzing the N first images to obtain the first analysis result.
Step 504: determining a first operation result closest to a first preset result in the obtained N first operation results; obtaining a sub-parameter corresponding to the determined first operation result, and outputting the first information at least comprising the sub-parameter;
selecting a first operation result closest to a first preset result from the N first operation results, determining a first image with the first operation result, searching a shooting angle corresponding to the first image in a first parameter set, and outputting the shooting angle; preferably, when the shooting angle is output, the first brightness parameter and/or the first image and the like can also be output; wherein N is a positive integer greater than or equal to 1.
This step further describes the method of outputting the first information when the first parsing result satisfies the predetermined condition, where the first information includes at least the sub-parameter corresponding to the first image satisfying the predetermined condition.
Therefore, in the embodiment of the invention, the acquired N first images are analyzed according to the brightness of each first image and the first proportion value of the first target object in the first image, so as to obtain N first operation results; in the N first operation results, the first operation result which meets the preset conditions, such as the first operation result closest to the first preset result is selected, the first image corresponding to the first operation result is determined, and the shooting angle adopted when the first image is shot is output, so that the electronic equipment can determine the most suitable image collecting position according to the relative relation with the shooting main body, provide more suitable collecting parameters, improve the user experience and highlight the diversity of the functions of the electronic equipment.
A sixth embodiment of the information processing method provided by the present invention is applied to an electronic device, where the electronic device includes: an acquisition unit and a sensing unit; the acquisition unit may be a camera; the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
Fig. 6 is a schematic flow chart of an implementation of the sixth embodiment of the information processing method provided by the present invention; as shown in fig. 6, the method includes:
step 601: acquiring N first images of a first main body through the acquisition unit;
here, the first main body may be a human body, a scene, or the like; taking the first main body being a human body as an example, the electronic device moves while shooting, and along the shooting track a first image including the human body is shot each time the shooting angle changes by a preset amount; at N shooting angles, N first images are shot in total, or N first images of the human body are shot continuously during the motion.
Step 602: detecting the relative relation between the first main body corresponding to each first image and the electronic equipment through the sensing unit to obtain a first parameter set, wherein the first parameter set comprises N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information, shooting position, and the like used when each first image is shot; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, a certain number of feature points in each first image are obtained through the sensing unit, and the position change of the feature points in the first image is detected, so that the relative relationship between the first main body corresponding to each first image and the electronic device is obtained, and the first parameter set is obtained.
Specifically, a certain number of feature points in each first image are acquired by the camera, and the feature points may be selected from areas with different colors and/or different brightness in the first image. By the difference in the positions of the feature points in the first image, the image parameter information and the motion parameter information when the first image was shot can be obtained.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, image parameter information and motion parameter information employed in capturing the first image can be obtained.
Step 603: acquiring a first brightness parameter of each first image in the N first images; acquiring a first target object in each first image; acquiring at least three target points of each of the N first target objects; generating corresponding first sub-images according to the acquired at least three target points of each first target object; calculating a second proportion value of each first sub-image in the corresponding first image; acquiring a first weight and a third weight distributed for each first image; performing first operation on the obtained N first brightness parameters and the corresponding first weight values to obtain a first sub-result of the corresponding first image; performing first operation on the obtained N second proportional values and the corresponding third weight values to obtain a third sub-result of the corresponding first image; performing second operation on the first sub-result and the third sub-result of each first image to obtain a second operation result of the first image;
here, the first target object may be a human body, or may be an object in the shooting environment, such as an ornament; taking the first target object being a human body and the 5th first image as an example, the brightness parameter of the 5th first image is acquired; the facial features of the human body in the 5th first image, such as the ears, eyes, nose, mouth and eyebrows, are taken as target points, and three of these target points, such as the nose, mouth and eyebrows, are selected; each pair of the three selected target points is connected to obtain a first sub-image, the proportion of the first sub-image in the 5th first image is calculated, and this proportion is taken as the second proportion value; the first weight assigned to the image is multiplied by the brightness parameter to obtain the first sub-result of the 5th first image; the third weight assigned to the image is multiplied by the second proportion value to obtain the third sub-result; the first sub-result and the third sub-result are added to obtain the second operation result, and these steps are repeated until the N first images yield N second operation results. The first weight and the third weight are preset and can be set flexibly according to the actual application.
This step further describes the method of analyzing the N first images to obtain the first analysis result.
Step 604: determining a second operation result closest to a second preset result in the obtained N second operation results; obtaining a sub-parameter corresponding to the determined second operation result, and outputting the first information at least including the sub-parameter.
Selecting a second operation result closest to a second preset result from the N second operation results, determining a first image with the second operation result, searching a shooting angle corresponding to the first image in a first parameter set, and outputting the shooting angle; preferably, when the shooting angle is output, the first brightness parameter and/or the first image and the like can also be output; wherein N is a positive integer greater than or equal to 1.
This step further describes the method of outputting the first information when the first parsing result satisfies the predetermined condition, where the first information includes at least the sub-parameter corresponding to the first image satisfying the predetermined condition.
Therefore, in the embodiment of the invention, the acquired N first images are analyzed according to the brightness of each first image and the second proportion value of the first sub-image in the first image, so as to obtain N second operation results; in the N second operation results, the second operation result which meets the preset conditions, such as the second operation result which is closest to the second preset result is selected, the first image corresponding to the second operation result is determined, and the shooting angle adopted when the first image is shot is output, so that the electronic equipment can determine the most suitable image collecting position according to the relative relation with the shooting main body, provide more suitable collecting parameters, improve the user experience and highlight the diversity of the functions of the electronic equipment.
A seventh embodiment of the information processing method provided by the present invention is applied to an electronic device, where the electronic device includes: an acquisition unit and a sensing unit; the acquisition unit may be a camera; the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
Fig. 7 is a schematic flow chart of an implementation of the seventh embodiment of the information processing method provided by the present invention; as shown in fig. 7, the method includes:
step 701: acquiring N first images of a first main body through the acquisition unit;
here, the first main body may be a human body, a scene, or the like; taking the first main body being a human body as an example, the electronic device moves while shooting, and along the shooting track a first image including the human body is shot each time the shooting angle changes by a preset amount; at N shooting angles, N first images are shot in total, or N first images of the human body are shot continuously during the motion.
Step 702: detecting the relative relation between the first main body corresponding to each first image and the electronic equipment through the sensing unit to obtain a first parameter set, wherein the first parameter set comprises N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, a certain number of feature points in each first image are obtained through the sensing unit, and the position change of the feature points in the first image is detected, so that the relative relationship between the first main body corresponding to each first image and the electronic device is obtained, and the first parameter set is obtained.
Specifically, a certain number of feature points in each first image are acquired by the camera, and the feature points may be selected from areas with different colors and/or different brightness in the first image. By the difference in the position of the feature point in the first image, image parameter information and motion parameter information at the time of capturing the first image can be obtained.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, image parameter information and motion parameter information employed in capturing the first image can be obtained.
Step 703: acquiring a first target object in each first image from the N first images; calculating a first proportion value of each first target object in the corresponding first image; acquiring at least three target points of each of the N first target objects; generating corresponding first sub-images according to the acquired at least three target points of each first target object; calculating a second proportion value of each first sub-image in the corresponding first image; acquiring a second weight and a third weight distributed for each first image; performing first operation on the obtained N first proportional values and the corresponding second weight values to obtain a second sub-result corresponding to the first image; performing first operation on the obtained N second proportional values and the corresponding third weight values to obtain a third sub-result of the corresponding first image; performing second operation on the second sub-result and the third sub-result of each first image to obtain a third operation result of the first image;
here, the first target object may be a human body, or may be an object in the shooting environment, such as an ornament; taking the first target object being a human body and the 5th first image as an example, the proportion of the human body in the 5th first image is calculated and taken as the first proportion value; the facial features of the human body in the 5th first image, such as the ears, eyes, nose, mouth and eyebrows, are taken as target points, and three of these target points, such as the nose, mouth and eyebrows, are selected; each pair of the three selected target points is connected to obtain a first sub-image, the proportion of the first sub-image in the 5th first image is calculated, and this proportion is taken as the second proportion value; the second weight assigned to the image is multiplied by the first proportion value to obtain the second sub-result of the 5th first image; the third weight assigned to the image is multiplied by the second proportion value to obtain the third sub-result; the second sub-result and the third sub-result are added to obtain the third operation result, and these steps are repeated until the N first images yield N third operation results. The second weight and the third weight are preset and can be set flexibly according to the actual application.
This step further describes the method of analyzing the N first images to obtain the first analysis result.
Step 704: determining a third operation result closest to a third preset result in the obtained N third operation results; obtaining a sub-parameter corresponding to the determined third operation result, and outputting the first information at least comprising the sub-parameter;
selecting a third operation result closest to a third preset result from the N third operation results, determining a first image with the third operation result, searching a shooting angle corresponding to the first image in a first parameter set, and outputting the shooting angle; preferably, when the shooting angle is output, the first brightness parameter and/or the first image and the like can also be output; wherein N is a positive integer greater than or equal to 1.
This step further describes the method of outputting the first information when the first parsing result satisfies the predetermined condition, where the first information includes at least the sub-parameter corresponding to the first image satisfying the predetermined condition.
Therefore, in the embodiment of the present invention, the N acquired first images are analyzed according to the first ratio of the first target object in the first image and the second ratio of the first sub-image in the first image, so as to obtain N third operation results; and selecting a third operation result which meets the preset condition, such as the third operation result closest to the third preset result, from the N third operation results, determining the first image corresponding to the third operation result, and outputting the shooting angle adopted when the first image is shot, so that the electronic equipment can determine the most suitable image acquisition position according to the relative relation with the shooting main body, provide more suitable acquisition parameters, improve the user experience and highlight the diversity of the functions of the electronic equipment.
An eighth embodiment of the information processing method provided by the present invention is applied to an electronic device, where the electronic device includes: an acquisition unit and a sensing unit; the acquisition unit may be a camera; the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
Fig. 8 is a schematic flow chart of an implementation of an eighth embodiment of the information processing method provided by the present invention; as shown in fig. 8, the method includes:
step 801: acquiring N first images of a first main body through the acquisition unit;
here, the first main body may be a human body, a scene, or the like; taking the first main body being a human body as an example, the electronic device moves while shooting, and along the shooting track a first image including the human body is shot each time the shooting angle changes by a preset amount; at N shooting angles, N first images are shot in total, or N first images of the human body are shot continuously during the motion.
Step 802: detecting the relative relation between the first main body corresponding to each first image and the electronic equipment through the sensing unit to obtain a first parameter set, wherein the first parameter set comprises N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, a certain number of feature points in each first image are obtained through the sensing unit, and the position change of the feature points in the first image is detected, so that the relative relationship between the first main body corresponding to each first image and the electronic device is obtained, and the first parameter set is obtained.
Specifically, a certain number of feature points in each first image are acquired by the camera, and the feature points may be selected from areas with different colors and/or different brightness in the first image. By the difference in the position of the feature point in the first image, image parameter information and motion parameter information at the time of capturing the first image can be obtained.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, image parameter information and motion parameter information employed in capturing the first image can be obtained.
Step 803: acquiring a first brightness parameter of the first image from the N first images; acquiring a first target object in each first image; calculating a first proportion value of each first target object in the corresponding first image; acquiring at least three target points of each of the N first target objects; generating corresponding first sub-images according to the acquired at least three target points of each first target object; calculating a second proportion value of each first sub-image in the corresponding first image; acquiring a first weight, a second weight and a third weight distributed for each first image; performing first operation on the obtained N first brightness parameters and the corresponding first weight values to obtain a first sub-result of the corresponding first image; performing first operation on the obtained N first proportional values and the corresponding second weight values to obtain a second sub-result corresponding to the first image; performing first operation on the obtained N second proportional values and the corresponding third weight values to obtain a third sub-result of the corresponding first image; performing second operation on the first sub-result, the second sub-result and the third sub-result of each first image to obtain a fourth operation result of the first image;
here, the first target object may be a human body, or may be an object in the shooting environment, such as an ornament; taking the first target object being a human body and the 5th first image as an example, the brightness parameter of the 5th first image is acquired, the proportion of the human body in the 5th first image is calculated, and this proportion is taken as the first proportion value; the facial features of the human body in the 5th first image, such as the ears, eyes, nose, mouth and eyebrows, are taken as target points, and three of these target points, such as the nose, mouth and eyebrows, are selected; each pair of the three selected target points is connected to obtain a first sub-image, the proportion of the first sub-image in the 5th first image is calculated, and this proportion is taken as the second proportion value; the first weight assigned to the image is multiplied by the brightness parameter to obtain the first sub-result of the 5th first image; the second weight assigned to the image is multiplied by the first proportion value to obtain the second sub-result of the 5th first image; the third weight assigned to the image is multiplied by the second proportion value to obtain the third sub-result; the first sub-result, the second sub-result and the third sub-result are added to obtain the fourth operation result, and these steps are repeated until the N first images yield N fourth operation results. The first weight, the second weight and the third weight are preset and can be set flexibly according to the actual application.
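The fourth operation result and the subsequent selection in step 804 can be sketched together as a weighted scoring followed by a nearest-result search; all weights, measurements and the predetermined result below are invented for illustration:

```python
# Hypothetical sketch of steps 803-804: score each first image as
# w1*brightness + w2*first_proportion + w3*second_proportion, then output the
# sub-parameter of the image whose score is closest to the predetermined result.

def fourth_operation_results(brightness, first_props, second_props, w1, w2, w3):
    # One fourth operation result per first image.
    return [w1 * b + w2 * p1 + w3 * p2
            for b, p1, p2 in zip(brightness, first_props, second_props)]

def pick_sub_parameter(results, sub_parameters, predetermined_result):
    # sub_parameters: first parameter set, index-aligned with the images.
    best = min(range(len(results)),
               key=lambda i: abs(results[i] - predetermined_result))
    return sub_parameters[best]

# Invented example with two first images.
scores = fourth_operation_results([0.6, 0.8], [0.30, 0.40], [0.05, 0.08],
                                  0.5, 0.3, 0.2)
angle = pick_sub_parameter(scores, [{"angle": 20}, {"angle": 40}], 0.5)
# angle -> {"angle": 40}
```

Setting one weight to zero recovers the fifth, sixth and seventh embodiments, which each combine only two of the three measurements.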
This step further describes the method of analyzing the N first images to obtain the first analysis result.
Step 804: determining a fourth operation result closest to a fourth predetermined result from the N fourth operation results; obtaining a sub-parameter corresponding to the determined fourth operation result, and outputting the first information at least comprising the sub-parameter;
selecting, from the N fourth operation results, the fourth operation result closest to the fourth predetermined result, determining the first image having that fourth operation result, searching the first parameter set for the shooting angle corresponding to that first image, and outputting the shooting angle; preferably, when the shooting angle is output, the first brightness parameter and/or the first image and the like may also be output; wherein N is a positive integer greater than or equal to 1.
This step further describes a manner of outputting the first information when the first analysis result satisfies a predetermined condition, where the first information includes at least the sub-parameter corresponding to the first image satisfying the predetermined condition.
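The selection in Step 804 can be sketched as follows. This is a minimal sketch under assumed data shapes: the N fourth operation results are paired index-wise with the N sub-parameters of the first parameter set, "closest" is taken as the smallest absolute difference from the fourth predetermined result, and the angle values are hypothetical.

```python
# Sketch of Step 804: pick the sub-parameter (e.g. shooting angle) of the
# first image whose fourth operation result is closest to the predetermined
# result. Data shapes and values are assumptions for illustration.
def select_sub_parameter(fourth_results, parameter_set, predetermined):
    """Return the sub-parameter paired with the closest fourth operation result."""
    idx = min(range(len(fourth_results)),
              key=lambda i: abs(fourth_results[i] - predetermined))
    return parameter_set[idx]

fourth_results = [0.44, 0.492, 0.433]          # one per first image (invented)
parameter_set = [{"angle": 0}, {"angle": 15}, {"angle": 30}]  # hypothetical sub-parameters
best = select_sub_parameter(fourth_results, parameter_set, 0.5)
print(best)
```

The returned sub-parameter would then be placed in the first information, optionally together with the brightness parameter and/or the first image itself.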
Therefore, in the embodiment of the present invention, the N acquired first images are analyzed according to the brightness parameter of each first image, the first proportion of the first target object in the first image, and the second proportion of the first sub-image in the first image, so as to obtain N fourth operation results; the fourth operation result meeting the predetermined condition is selected, for example the fourth operation result closest to the fourth predetermined result, the first image corresponding to that fourth operation result is determined, and the shooting angle adopted when that first image was shot is output, so that the electronic device can determine the most suitable image acquisition position according to its relative relationship with the shooting main body and provide more suitable acquisition parameters, improving the user experience and highlighting the diversity of the functions of the electronic device.
The present invention provides a first embodiment of an electronic device, comprising an acquisition unit and a sensing unit; the acquisition unit may be a camera; the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
FIG. 9 is a schematic structural diagram of a first embodiment of an electronic device according to the present invention; as shown in fig. 9, the electronic device further includes:
the acquisition unit 11 is used for acquiring N first images of the first main body;
here, the first main body may be a human body, a scene, or the like; taking the first main body as a human body as an example, the electronic device moves while shooting, and along the shooting track, each time the preset shooting angle changes, the acquisition unit 11 shoots a first image including the human body; at the N shooting angles, the acquisition unit 11 captures a total of N first images, or the acquisition unit 11 continuously captures N first images of the human body during the movement.
A first obtaining unit 12, configured to detect, by the sensing unit, the relative relationship between the first main body and the electronic device corresponding to each first image, so as to obtain a first parameter set, where the first parameter set includes N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, the first obtaining unit 12 obtains a certain number of feature points in each first image through the sensing unit, detects position changes of the feature points in the first image, obtains a relative relationship between the first main body corresponding to each first image and the electronic device, and obtains the first parameter set.
Specifically, a certain number of feature points in each first image are acquired by the camera; the feature points may be selected from areas of different colors and/or different brightness in the first image. From the positional differences of these feature points across the first images, the first obtaining unit 12 can obtain the image parameter information and the motion parameter information used when each first image was captured.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, the first obtaining unit 12 can obtain the image parameter information and the motion parameter information employed when the first image is captured.
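The relative-zero-point idea above can be sketched numerically: the set point of the phone is treated as the origin, the per-step 3D motion vectors reported along the shooting track are accumulated, and a relative distance and a simple azimuth are derived. This is only a hedged illustration; real sensor fusion (filtering, double integration of acceleration, gravity compensation) is omitted, and the track values are invented.

```python
import math

# Sketch of deriving a relative pose from accumulated 3D motion vectors,
# with the set point of the phone as the relative zero point.
# Assumption: each tuple is a displacement step (in metres) along the track.
def relative_pose(motion_vectors):
    """Sum the motion vectors and return (relative distance, azimuth in degrees)."""
    x = sum(v[0] for v in motion_vectors)
    y = sum(v[1] for v in motion_vectors)
    z = sum(v[2] for v in motion_vectors)
    distance = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(y, x))  # orientation in the horizontal plane
    return distance, azimuth

track = [(0.1, 0.0, 0.0), (0.1, 0.1, 0.0), (0.0, 0.1, 0.05)]  # invented steps
dist, azim = relative_pose(track)
```

The resulting distance and azimuth stand in for the "relative distance and/or relative orientation" stored as motion parameter information alongside each first image.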
A first analyzing unit 13, configured to analyze the N first images to obtain a first analysis result; when the first analysis result meets a preset condition, triggering a first output unit 14;
a first output unit 14 for outputting first information including at least a sub-parameter corresponding to the first image satisfying the predetermined condition; wherein N is a positive integer greater than or equal to 1.
Here, the first analyzing unit 13 analyzes the brightness of each of the N first images, the proportion of the human body in the image (the first proportion), and the proportion of the facial features of the human body in the image (the second proportion) to obtain an analysis result; among the N first images, the first analyzing unit 13 may select, according to image brightness alone, one first image whose brightness satisfies the predetermined condition, and trigger the first output unit 14 to output to the user the shooting angle corresponding to that first image; among the N first images, the first analyzing unit 13 may also select, according to the first proportion or the second proportion alone, one first image whose proportion satisfies the predetermined condition; the first analyzing unit 13 may further select a first image meeting the predetermined condition according to any two or all three of the image brightness, the first proportion and the second proportion, and trigger the first output unit 14 to output the shooting angle corresponding to that first image; in the above schemes, when the first output unit 14 outputs the shooting angle, at least one of the first image, the brightness parameter of the first image, the first proportion and the second proportion may also be output.
Therefore, in the embodiment of the invention, the acquired N first images are analyzed to obtain a first analysis result; in the first analysis result, the first image meeting the preset condition is selected, and the shooting angle adopted when the first image is shot is output, so that the electronic equipment can determine the most suitable image acquisition position according to the relative relation with the shooting main body, provide more suitable acquisition parameters, improve the user experience and highlight the diversity of the functions of the electronic equipment.
The present invention provides a second embodiment of an electronic device, comprising an acquisition unit and a sensing unit; the acquisition unit may be a camera; the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
FIG. 10 is a schematic diagram of a second embodiment of an electronic device according to the present invention; as shown in fig. 10, the electronic device further includes:
an acquisition unit 21 configured to acquire N first images of a first subject;
here, the first subject may be a human body, a scene, or the like; taking the first subject as a human body as an example, the electronic device moves while shooting, and along the shooting track, each time the preset shooting angle changes, the acquisition unit 21 shoots a first image including the human body; at the N shooting angles, the acquisition unit 21 captures a total of N first images, or the acquisition unit 21 continuously captures N first images of the human body during the movement.
A first obtaining unit 22, configured to detect, by the sensing unit, the relative relationship between the first main body and the electronic device corresponding to each first image, so as to obtain a first parameter set, where the first parameter set includes N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, the first obtaining unit 22 obtains a certain number of feature points in each first image through the sensing unit, detects position changes of the feature points in the first image, obtains a relative relationship between the first main body corresponding to each first image and the electronic device, and obtains the first parameter set.
Specifically, a certain number of feature points in each first image are acquired by the camera; the feature points may be selected from areas of different colors and/or different brightness in the first image. From the positional differences of these feature points across the first images, the first obtaining unit 22 can obtain the image parameter information and the motion parameter information used when each first image was captured.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, the first obtaining unit 22 can obtain the image parameter information and the motion parameter information employed when the first image is captured.
The first analyzing unit 23 is configured to obtain a first brightness parameter of each of the N first images; determining the first brightness parameter closest to a predetermined condition among the N first brightness parameters; acquiring the sub-parameter corresponding to the determined first brightness parameter, and triggering the first output unit 24;
a first output unit 24 for outputting first information including at least a sub-parameter corresponding to the first image satisfying the predetermined condition; wherein N is a positive integer greater than or equal to 1;
in the above scheme, among the N first brightness parameters, the first analyzing unit 23 selects the first brightness parameter closest to the predetermined condition, determines the first image having that first brightness parameter, searches the first parameter set for the shooting angle corresponding to that first image, and triggers the first output unit 24 to output the shooting angle; preferably, when the first output unit 24 outputs the shooting angle, the first brightness parameter and/or the first image and the like may also be output;
the function realized by the first analyzing unit 23 may be regarded as a further description of analyzing the N first images to obtain a first analysis result and triggering the first output unit 24 when the first analysis result satisfies a predetermined condition.
Therefore, in the embodiment of the invention, the acquired N first images are analyzed according to the brightness of each first image to obtain a first analysis result; in the first analysis result, the first image which accords with the preset brightness parameter or the brightness range is selected, and the shooting angle adopted when the first image is shot is output, so that the electronic equipment can determine the most suitable image collecting position according to the relative relation with the shooting main body, provide more suitable collecting parameters, improve the user experience and highlight the diversity of the functions of the electronic equipment.
The present invention provides a third embodiment of an electronic device, comprising an acquisition unit and a sensing unit; the acquisition unit may be a camera; the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
FIG. 11 is a schematic diagram of a third embodiment of an electronic device according to the present invention; as shown in fig. 11, the electronic device further includes:
an acquisition unit 31 for acquiring N first images of the first subject;
here, the first subject may be a human body, a scene, or the like; taking the first subject as a human body as an example, the electronic device moves while shooting, and along the shooting track, each time the preset shooting angle changes, the acquisition unit 31 shoots a first image including the human body; at the N shooting angles, the acquisition unit 31 captures a total of N first images, or the acquisition unit 31 continuously captures N first images of the human body during the movement.
A first obtaining unit 32, configured to detect, by the sensing unit, the relative relationship between the first main body and the electronic device corresponding to each first image, so as to obtain a first parameter set, where the first parameter set includes N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, the first obtaining unit 32 obtains a certain number of feature points in each first image through the sensing unit, detects position changes of the feature points in the first image, obtains a relative relationship between the first main body corresponding to each first image and the electronic device, and obtains the first parameter set.
Specifically, a certain number of feature points in each first image are acquired by the camera; the feature points may be selected from areas of different colors and/or different brightness in the first image. From the positional differences of these feature points across the first images, the first obtaining unit 32 can obtain the image parameter information and the motion parameter information used when each first image was captured.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, the first obtaining unit 32 can obtain the image parameter information and the motion parameter information employed when the first image is captured.
A first analyzing unit 33, configured to obtain, in the N first images, a first target object in each first image; calculating a first proportion value of each first target object in the corresponding first image; determining a first proportional value closest to a first predetermined value; acquiring a sub-parameter corresponding to the determined first proportional value, and triggering the first output unit 34;
a first output unit 34 for outputting first information including at least a sub-parameter corresponding to the first image satisfying the predetermined condition; wherein N is a positive integer greater than or equal to 1;
in the above scheme, the first target object may be a human body, or may be a scene in the shooting environment, such as an ornament; taking the first target object as a human body as an example, the first analyzing unit 33 calculates, for each first image, the proportion of the human body in the image, and regards this proportion as the first proportion value; among the N first proportion values, the first analyzing unit 33 selects the first proportion value closest to the first predetermined value, determines the first image having that first proportion value, searches the first parameter set for the shooting angle corresponding to that first image, and triggers the first output unit 34 to output the shooting angle; preferably, when the shooting angle is output, the first brightness parameter and/or the first image and the like may also be output;
the function realized by the first analyzing unit 33 may be regarded as a further description of analyzing the N first images to obtain a first analysis result and triggering the first output unit 34 when the first analysis result satisfies a predetermined condition.
Therefore, in the embodiment of the invention, the acquired N first images are analyzed according to the first proportion value of each first target object in the first image, so as to obtain a first analysis result; in the first analysis result, a first proportion value which meets a preset condition, such as the first proportion value closest to a first preset value, is selected, a first image corresponding to the first proportion value is determined, and a shooting angle adopted when the first image is shot is output, so that the electronic equipment can determine the most suitable image collecting position according to the relative relation with a shooting main body, provide more suitable collecting parameters, improve user experience and highlight the diversity of functions of the electronic equipment.
The present invention provides a fourth embodiment of an electronic device, comprising an acquisition unit and a sensing unit; the acquisition unit may be a camera; the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
FIG. 12 is a schematic diagram illustrating a fourth embodiment of an electronic device according to the present invention; as shown in fig. 12, the electronic device further includes:
an acquisition unit 41 for acquiring N first images of the first subject;
here, the first subject may be a human body, a scene, or the like; taking the first subject as a human body as an example, the electronic device moves while shooting, and along the shooting track, each time the preset shooting angle changes, the acquisition unit 41 shoots a first image including the human body; at the N shooting angles, the acquisition unit 41 captures a total of N first images, or the acquisition unit 41 continuously captures N first images of the human body during the movement.
A first obtaining unit 42, configured to detect, by the sensing unit, the relative relationship between the first main body and the electronic device corresponding to each first image, so as to obtain a first parameter set, where the first parameter set includes N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, the first obtaining unit 42 obtains a certain number of feature points in each first image through the sensing unit, detects position changes of the feature points in the first image, obtains a relative relationship between the first main body corresponding to each first image and the electronic device, and obtains the first parameter set.
Specifically, a certain number of feature points in each first image are acquired by the camera; the feature points may be selected from areas of different colors and/or different brightness in the first image. From the positional differences of these feature points across the first images, the first obtaining unit 42 can obtain the image parameter information and the motion parameter information used when each first image was captured.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, the first obtaining unit 42 can obtain the image parameter information and the motion parameter information employed when the first image is captured.
A first analyzing unit 43, configured to obtain, in the N first images, a first target object in each first image; acquiring at least three target points of each of the N first target objects; generating corresponding first sub-images according to the acquired at least three target points of each first target object; calculating a second proportion value of each first sub-image in the corresponding first image; determining a second proportional value closest to a second predetermined value; acquiring a sub-parameter corresponding to the determined second proportional value, and triggering the first output unit 44;
a first output unit 44 for outputting first information including at least a sub-parameter corresponding to the first image satisfying the predetermined condition; wherein N is a positive integer greater than or equal to 1;
in the above scheme, the first target object may be a human body, or may be a scene in the shooting environment, such as an ornament; taking the first target object as a human body and the 5th first image as an example, the first analyzing unit 43 takes each of the facial features of the human body in the 5th first image, such as the ears, eyes, nose, mouth and eyebrows, as a target point, and selects three target points, such as the nose, the mouth and the eyebrows, from the five target points; each pair of the three selected target points is connected to obtain a first sub-image, and the first analyzing unit 43 calculates the proportion of the first sub-image in the 5th first image and regards this proportion as the second proportion value.
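Since connecting each pair of three target points yields a triangular first sub-image, the second proportion value can be sketched as a triangle-area over image-area ratio. This is an illustrative assumption about how the proportion might be computed; the shoelace formula and the pixel coordinates below are not part of the patent text.

```python
# Sketch of the second-proportion computation: three selected target points
# (e.g. nose, mouth, eyebrow centre) form a triangular first sub-image whose
# area over the full image area gives the second proportion value.
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of the triangle p1-p2-p3."""
    return abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0

def second_proportion(points, image_w, image_h):
    """Area of the first sub-image divided by the area of the first image."""
    return triangle_area(*points) / float(image_w * image_h)

# hypothetical nose, mouth and eyebrow-centre coordinates in a 640x480 image
pts = [(320, 200), (320, 260), (300, 150)]
ratio = second_proportion(pts, 640, 480)
```

A larger ratio would indicate that the face occupies more of the frame at that shooting angle, which is what the closest-to-predetermined-value comparison then exploits.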
Among the N calculated second proportion values, the first analyzing unit 43 selects the second proportion value closest to the second predetermined value, determines the first image having that second proportion value, searches the first parameter set for the shooting angle corresponding to that first image, and triggers the first output unit 44 to output the shooting angle; preferably, when the shooting angle is output, the first brightness parameter and/or the first image and the like may also be output;
the function realized by the first analyzing unit 43 may be regarded as a further description of analyzing the N first images to obtain a first analysis result and triggering the first output unit 44 when the first analysis result satisfies a predetermined condition.
Therefore, in the embodiment of the invention, the acquired N first images are analyzed according to the second proportion value of each first sub-image in the corresponding first image, so as to obtain a first analysis result; in the first analysis result, a second proportional value which meets the preset condition, such as the second proportional value closest to a second preset value is selected, the first image corresponding to the second proportional value is determined, and the shooting angle adopted when the first image is shot is output, so that the electronic equipment can determine the most suitable image acquisition position according to the relative relation with the shooting main body, provide more suitable acquisition parameters, improve the user experience and highlight the diversity of the functions of the electronic equipment.
The present invention provides a fifth embodiment of an electronic device, comprising an acquisition unit and a sensing unit; the acquisition unit may be a camera; the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
FIG. 13 is a schematic diagram illustrating a fifth embodiment of an electronic device according to the present invention; as shown in fig. 13, the electronic device further includes:
an acquisition unit 51 for acquiring N first images of the first subject;
here, the first subject may be a human body, a scene, or the like; taking the first subject as a human body as an example, the electronic device moves while shooting, and along the shooting track, each time the preset shooting angle changes, the acquisition unit 51 shoots a first image including the human body; at the N shooting angles, the acquisition unit 51 captures a total of N first images, or the acquisition unit 51 continuously captures N first images of the human body during the movement.
A first obtaining unit 52, configured to detect, by the sensing unit, the relative relationship between the first main body and the electronic device corresponding to each first image, so as to obtain a first parameter set, where the first parameter set includes N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, the first obtaining unit 52 obtains a certain number of feature points in each first image through the sensing unit, detects position changes of the feature points in the first image, obtains a relative relationship between the first main body corresponding to each first image and the electronic device, and obtains the first parameter set.
Specifically, a certain number of feature points in each first image are acquired by the camera; the feature points may be selected from areas of different colors and/or different brightness in the first image. From the positional differences of these feature points across the first images, the first obtaining unit 52 can obtain the image parameter information and the motion parameter information used when each first image was captured.
Preferably, a gravity sensor and/or an acceleration sensor are used for detecting the relative position and/or the relative orientation between the human body and the electronic equipment and/or the spatial attitude information of the electronic equipment, so that the image parameter information and the motion parameter information at the current shooting position can be acquired; specifically, the set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or the acceleration sensor of the mobile phone may be used to detect the three-dimensional motion vector of the shooting track, so as to obtain the relative distance and/or the relative orientation between the human body and the mobile phone, and the spatial attitude of the mobile phone. The image parameter information may be obtained by acquiring brightness information of the first image at the current photographing position by the camera. In this way, the first obtaining unit 52 can obtain the image parameter information and the motion parameter information employed when the first image is captured.
A first analyzing unit 53, configured to obtain a first brightness parameter of each of the N first images; acquire a first target object in each first image; calculate a first proportion value of each first target object in the corresponding first image; acquire a first weight and a second weight distributed for each first image; perform a first operation on the obtained N first brightness parameters and the corresponding first weight values to obtain a first sub-result of the corresponding first image; perform the first operation on the obtained N first proportional values and the corresponding second weight values to obtain a second sub-result of the corresponding first image; perform a second operation on the first sub-result and the second sub-result of each first image to obtain a first operation result of the first image; determine, among the obtained N first operation results, the first operation result closest to a first predetermined result; and acquire the sub-parameter corresponding to the determined first operation result and trigger the first output unit 54;
a first output unit 54 for outputting first information including at least a sub-parameter corresponding to the first image satisfying the predetermined condition; wherein N is a positive integer greater than or equal to 1;
in the above scheme, the first target object may be a human body, or may be a scene in the shooting environment, such as an ornament; taking the first target object as a human body and the 5th first image as an example, the first analyzing unit 53 obtains the brightness parameter of the 5th first image, calculates the proportion of the human body in the 5th first image relative to the image, and regards this proportion as the first proportion value; the first analyzing unit 53 multiplies the first weight value distributed for the image by the brightness parameter to obtain a first sub-result of the 5th first image, and multiplies the second weight value distributed for the image by the first proportional value to obtain a second sub-result of the 5th first image; the first analyzing unit 53 adds the first sub-result and the second sub-result of the 5th first image to obtain a first operation result, and so on, until N first operation results are obtained for the N first images. The first weight and the second weight are preset and can be set flexibly according to the actual application.
Here, among the N first operation results, the first analyzing unit 53 selects the first operation result closest to the first predetermined result, determines the first image having that operation result, looks up the shooting angle corresponding to that first image in the first parameter set, and triggers the first output unit 54 to output the shooting angle; preferably, the first brightness parameter and/or the first image may also be output together with the shooting angle.
In other words, the function realized by the first analyzing unit 53 can be summarized as follows: the first analyzing unit 53 analyzes the N first images to obtain a first analysis result, and triggers the first output unit 54 when the first analysis result satisfies the predetermined condition.
Therefore, in this embodiment of the invention, the N acquired first images are analyzed according to the brightness of each first image and the first proportion value of the first target object in the first image, so as to obtain N first operation results; among the N first operation results, the one that satisfies the predetermined condition is selected, for example the first operation result closest to the first predetermined result, the first image corresponding to that operation result is determined, and the shooting angle adopted when that first image was shot is output.
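The scoring scheme described above, taking multiplication as the first operation and addition as the second operation and then selecting the result closest to the first predetermined result, can be sketched as follows. This is only an illustrative reading of the embodiment; the function name, sample values, weights, and the predetermined result are assumptions, not taken from the patent.

```python
def score_images(brightness, proportion, w1, w2, target):
    """Return the index of the first image whose first operation result is
    closest to the first predetermined result `target`.

    brightness -- first brightness parameter of each of the N first images
    proportion -- first proportion value (subject area / image area) per image
    w1, w2     -- first and second weights assigned to each first image
    """
    results = []
    for b, p, a, c in zip(brightness, proportion, w1, w2):
        first_sub = a * b    # first operation: first weight x brightness
        second_sub = c * p   # first operation: second weight x proportion
        results.append(first_sub + second_sub)  # second operation: addition
    # select the first operation result closest to the predetermined result
    return min(range(len(results)), key=lambda i: abs(results[i] - target))

# Illustrative values for N = 3 first images
best = score_images([0.4, 0.7, 0.9], [0.30, 0.45, 0.20],
                    [0.6, 0.6, 0.6], [0.4, 0.4, 0.4], target=0.6)
```

With these assumed values the second image (index 1) scores 0.60 and is therefore selected; its sub-parameter (e.g. shooting angle) would then be output.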
The present invention provides a sixth embodiment of an electronic device, comprising an acquisition unit and a sensing unit; the acquisition unit may be a camera, and the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
Fig. 14 is a schematic structural diagram of a sixth embodiment of an electronic device according to the present invention; as shown in fig. 14, the electronic device further includes:
an acquisition unit 61, configured to acquire N first images of a first subject;
Here, the first subject may be a human body, a scene, or the like. Taking the first subject as a human body as an example, the electronic device moves while shooting; along the shooting track, the acquisition unit 61 shoots a first image including the human body every time the shooting angle changes by a preset amount, so that at N shooting angles the acquisition unit 61 shoots N first images in total; alternatively, the acquisition unit 61 continuously shoots N first images of the human body during the movement.
A first obtaining unit 62, configured to detect, by the sensing unit, a relative relationship between the first subject and the electronic device corresponding to each first image, to obtain a first parameter set, where the first parameter set includes N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, the first obtaining unit 62 acquires a certain number of feature points in each first image through the sensing unit, detects the position changes of the feature points in the first images, obtains the relative relationship between the first subject corresponding to each first image and the electronic device, and thereby obtains the first parameter set.
Specifically, a certain number of feature points in each first image are acquired by the camera; the feature points may be selected in areas of the first image that differ in color and/or brightness. From the changes in the positions of the feature points between first images, the first obtaining unit 62 can derive the image parameter information and motion parameter information used when each first image was captured.
Preferably, a gravity sensor and/or an acceleration sensor detects the relative position and/or relative orientation between the human body and the electronic device, and/or the spatial attitude information of the electronic device, from which the image parameter information and motion parameter information at the current shooting position can be acquired. Specifically, a set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or acceleration sensor of the mobile phone may detect the three-dimensional motion vector of the shooting track, thereby obtaining the relative distance and/or relative orientation between the human body and the mobile phone as well as the spatial attitude of the mobile phone. The image parameter information may be obtained by the camera acquiring the brightness information of the first image at the current shooting position. In this way, the first obtaining unit 62 can obtain the image parameter information and motion parameter information employed when each first image was captured.
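As an illustration of the motion-parameter detection described above, the following sketch recovers a rough three-dimensional motion vector from acceleration samples by naive double integration, taking the starting position as the relative zero point. This is a minimal sketch under assumed sampling conditions, not the patent's actual method; real accelerometer data would additionally need gravity compensation and drift correction.

```python
import math

def motion_vector(accel, dt):
    """Integrate acceleration samples twice to estimate displacement.

    accel -- list of (ax, ay, az) linear-acceleration samples in m/s^2
    dt    -- sample interval in seconds
    Returns (dx, dy, dz), the displacement relative to the zero point.
    """
    v = [0.0, 0.0, 0.0]
    d = [0.0, 0.0, 0.0]
    for a in accel:
        for i in range(3):
            v[i] += a[i] * dt   # integrate acceleration -> velocity
            d[i] += v[i] * dt   # integrate velocity -> displacement
    return tuple(d)

def relative_distance(disp):
    """Magnitude of the three-dimensional motion vector."""
    return math.sqrt(sum(c * c for c in disp))

# Illustrative: two 1 s samples of 1 m/s^2 along x
disp = motion_vector([(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)], 1.0)
dist = relative_distance(disp)
```

From such a displacement, together with the device's spatial attitude, the relative distance and orientation between the subject and the phone could be derived as the text describes.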
A first analyzing unit 63, configured to obtain a first brightness parameter of each first image in the N first images; acquiring a first target object in each first image; acquiring at least three target points of each of the N first target objects; generating corresponding first sub-images according to the acquired at least three target points of each first target object; calculating a second proportion value of each first sub-image in the corresponding first image; acquiring a first weight and a third weight distributed for each first image; performing first operation on the obtained N first brightness parameters and the corresponding first weight values to obtain a first sub-result of the corresponding first image; performing first operation on the obtained N second proportional values and the corresponding third weight values to obtain a third sub-result of the corresponding first image; performing second operation on the first sub-result and the third sub-result of each first image to obtain a second operation result of the first image; determining a second operation result closest to a second preset result in the obtained N second operation results; acquiring a sub-parameter corresponding to the determined second operation result, and triggering the first output unit 64;
a first output unit 64 for outputting first information including at least a sub-parameter corresponding to the first image satisfying the predetermined condition; wherein N is a positive integer greater than or equal to 1;
In the above solution, the first target object may be a human body, or may be an object in the shooting environment, such as an ornament. Taking the first target object as a human body and the 5th first image as an example, the first analyzing unit 63 obtains the brightness parameter of the 5th first image, takes the facial features of the human body in the 5th first image, such as the ears, eyes, nose, mouth and eyebrows, as candidate target points, and selects three of them, for example the nose, the mouth and an eyebrow. The three selected target points are connected pairwise to obtain a first sub-image; the proportion of this first sub-image in the 5th first image is calculated and taken as the second proportion value. The first analyzing unit 63 multiplies the first weight assigned to the image by the brightness parameter to obtain the first sub-result of the 5th first image, and multiplies the third weight assigned to the image by the second proportion value to obtain the third sub-result. The first sub-result and the third sub-result are added to obtain the second operation result; proceeding in the same way, the N first images yield N second operation results. The first weight and the third weight are preset and can be set flexibly according to the actual application.
Among the N second operation results, the first analyzing unit 63 selects the second operation result closest to the second predetermined result, determines the first image having that operation result, looks up the shooting angle corresponding to that first image in the first parameter set, and triggers the first output unit 64 to output the shooting angle; preferably, the first brightness parameter and/or the first image may also be output together with the shooting angle.
In other words, the function realized by the first analyzing unit 63 can be summarized as follows: the first analyzing unit 63 analyzes the N first images to obtain a first analysis result, and triggers the first output unit 64 when the first analysis result satisfies the predetermined condition.
Therefore, in this embodiment of the invention, the N acquired first images are analyzed according to the brightness of each first image and the second proportion value of the first sub-image in the first image, so as to obtain N second operation results; among the N second operation results, the one that satisfies the predetermined condition is selected, for example the second operation result closest to the second predetermined result, the first image corresponding to that operation result is determined, and the shooting angle adopted when that first image was shot is output. In this way, the electronic device can determine the most suitable image acquisition position according to its relative relationship with the shooting subject and provide more suitable acquisition parameters, which improves the user experience and highlights the diversity of the functions of the electronic device.
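The construction of the first sub-image from three target points, and its second proportion value, can be sketched as follows. The shoelace area formula and the pixel coordinates are illustrative assumptions; the patent does not fix a particular geometry for the sub-image.

```python
def triangle_area(p1, p2, p3):
    """Area of the triangle formed by connecting three (x, y) target points
    pairwise, via the shoelace formula."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

def second_proportion(points, image_w, image_h):
    """Second proportion value: ratio of the first sub-image (triangle)
    to the full first image."""
    return triangle_area(*points) / float(image_w * image_h)

# Illustrative facial target points (nose, mouth, eyebrow) in a 640x480 image
ratio = second_proportion([(120, 80), (120, 140), (160, 110)], 640, 480)
```

The resulting ratio would then be weighted by the third weight, exactly as the first proportion value is weighted by the second weight in the other embodiments.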
The present invention provides a seventh embodiment of an electronic device, comprising an acquisition unit and a sensing unit; the acquisition unit may be a camera, and the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
Fig. 15 is a schematic structural diagram of a seventh embodiment of an electronic device according to the invention; as shown in fig. 15, the electronic device further includes:
an acquisition unit 71 for acquiring N first images of the first subject;
Here, the first subject may be a human body, a scene, or the like. Taking the first subject as a human body as an example, the electronic device moves while shooting; along the shooting track, the acquisition unit 71 shoots a first image including the human body every time the shooting angle changes by a preset amount, so that at N shooting angles the acquisition unit 71 shoots N first images in total; alternatively, the acquisition unit 71 continuously shoots N first images of the human body during the movement.
A first obtaining unit 72, configured to detect, by the sensing unit, a relative relationship between the first subject and the electronic device corresponding to each first image, to obtain a first parameter set, where the first parameter set includes N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, the first obtaining unit 72 acquires a certain number of feature points in each first image through the sensing unit, detects the position changes of the feature points in the first images, obtains the relative relationship between the first subject corresponding to each first image and the electronic device, and thereby obtains the first parameter set.
Specifically, a certain number of feature points in each first image are acquired by the camera; the feature points may be selected in areas of the first image that differ in color and/or brightness. From the changes in the positions of the feature points between first images, the first obtaining unit 72 can derive the image parameter information and motion parameter information used when each first image was captured.
Preferably, a gravity sensor and/or an acceleration sensor detects the relative position and/or relative orientation between the human body and the electronic device, and/or the spatial attitude information of the electronic device, from which the image parameter information and motion parameter information at the current shooting position can be acquired. Specifically, a set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or acceleration sensor of the mobile phone may detect the three-dimensional motion vector of the shooting track, thereby obtaining the relative distance and/or relative orientation between the human body and the mobile phone as well as the spatial attitude of the mobile phone. The image parameter information may be obtained by the camera acquiring the brightness information of the first image at the current shooting position. In this way, the first obtaining unit 72 can obtain the image parameter information and motion parameter information employed when each first image was captured.
A first analyzing unit 73, configured to obtain, in the N first images, a first target object in each first image; calculating a first proportion value of each first target object in the corresponding first image; acquiring at least three target points of each of the N first target objects; generating corresponding first sub-images according to the acquired at least three target points of each first target object; calculating a second proportion value of each first sub-image in the corresponding first image; acquiring a second weight and a third weight distributed for each first image; performing first operation on the obtained N first proportional values and the corresponding second weight values to obtain a second sub-result corresponding to the first image; performing first operation on the obtained N second proportional values and the corresponding third weight values to obtain a third sub-result of the corresponding first image; performing second operation on the second sub-result and the third sub-result of each first image to obtain a third operation result of the first image; determining a third operation result closest to a third preset result in the obtained N third operation results; acquiring a sub-parameter corresponding to the determined third operation result, and triggering the first output unit 74;
a first output unit 74 for outputting first information including at least a sub-parameter corresponding to the first image satisfying the predetermined condition; wherein N is a positive integer greater than or equal to 1;
In the above solution, the first target object may be a human body, or may be an object in the shooting environment, such as an ornament. Taking the first target object as a human body and the 5th first image as an example, the first analyzing unit 73 obtains the proportion of the area occupied by the human body in the 5th first image and takes it as the first proportion value. The first analyzing unit 73 then takes the facial features of the human body in the 5th first image, such as the ears, eyes, nose, mouth and eyebrows, as candidate target points, and selects three of them, for example the nose, the mouth and an eyebrow. The three selected target points are connected pairwise to obtain a first sub-image; the proportion of this first sub-image in the 5th first image is calculated and taken as the second proportion value. The first analyzing unit 73 multiplies the second weight assigned to the image by the first proportion value to obtain the second sub-result of the 5th first image, and multiplies the third weight assigned to the image by the second proportion value to obtain the third sub-result. The first analyzing unit 73 adds the second sub-result and the third sub-result to obtain the third operation result; proceeding in the same way, the N first images yield N third operation results. The second weight and the third weight are preset and can be set flexibly according to the actual application.
Among the N third operation results, the first analyzing unit 73 selects the third operation result closest to the third predetermined result, determines the first image having that operation result, looks up the shooting angle corresponding to that first image in the first parameter set, and triggers the first output unit 74 to output the shooting angle; preferably, when the first output unit 74 outputs the shooting angle, the first brightness parameter and/or the first image may also be output.
In other words, the function realized by the first analyzing unit 73 can be summarized as follows: the first analyzing unit 73 analyzes the N first images to obtain a first analysis result, and triggers the first output unit 74 when the first analysis result satisfies the predetermined condition.
Therefore, in this embodiment of the invention, the N acquired first images are analyzed according to the first proportion value of the first target object in the first image and the second proportion value of the first sub-image in the first image, so as to obtain N third operation results; among the N third operation results, the one that satisfies the predetermined condition is selected, for example the third operation result closest to the third predetermined result, the first image corresponding to that operation result is determined, and the shooting angle adopted when that first image was shot is output. In this way, the electronic device can determine the most suitable image acquisition position according to its relative relationship with the shooting subject and provide more suitable acquisition parameters, which improves the user experience and highlights the diversity of the functions of the electronic device.
The present invention provides an eighth embodiment of an electronic device, comprising an acquisition unit and a sensing unit; the acquisition unit may be a camera, and the sensing unit may be a sensor such as a gravity sensor and/or an acceleration sensor.
Fig. 16 is a schematic structural diagram of an eighth embodiment of an electronic device according to the invention; as shown in fig. 16, the electronic apparatus further includes:
an acquisition unit 81 for acquiring N first images of the first subject;
Here, the first subject may be a human body, a scene, or the like. Taking the first subject as a human body as an example, the electronic device moves while shooting; along the shooting track, the acquisition unit 81 shoots a first image including the human body every time the shooting angle changes by a preset amount, so that at N shooting angles the acquisition unit 81 shoots N first images in total; alternatively, the acquisition unit 81 continuously shoots N first images of the human body during the movement.
A first obtaining unit 82, configured to detect, by the sensing unit, a relative relationship between the first subject and the electronic device corresponding to each first image, to obtain a first parameter set, where the first parameter set includes N sub-parameters corresponding to the N first images;
here, the sub-parameters may be specifically image parameter information and motion parameter information used when each first image is captured; the first parameter set includes a set of image parameter information and motion parameter information corresponding to each first image.
Further, the first obtaining unit 82 acquires a certain number of feature points in each first image through the sensing unit, detects the position changes of the feature points in the first images, obtains the relative relationship between the first subject corresponding to each first image and the electronic device, and thereby obtains the first parameter set.
Specifically, a certain number of feature points in each first image are acquired by the camera; the feature points may be selected in areas of the first image that differ in color and/or brightness. From the changes in the positions of the feature points between first images, the first obtaining unit 82 can derive the image parameter information and motion parameter information used when each first image was captured.
Preferably, a gravity sensor and/or an acceleration sensor detects the relative position and/or relative orientation between the human body and the electronic device, and/or the spatial attitude information of the electronic device, from which the image parameter information and motion parameter information at the current shooting position can be acquired. Specifically, a set point of the mobile phone may be used as a reference point, that is, a relative zero point, and the gravity sensor and/or acceleration sensor of the mobile phone may detect the three-dimensional motion vector of the shooting track, thereby obtaining the relative distance and/or relative orientation between the human body and the mobile phone as well as the spatial attitude of the mobile phone. The image parameter information may be obtained by the camera acquiring the brightness information of the first image at the current shooting position. In this way, the first obtaining unit 82 can obtain the image parameter information and motion parameter information employed when each first image was captured.
A first analyzing unit 83, configured to obtain a first brightness parameter of each first image in the N first images; acquiring a first target object in each first image; calculating a first proportion value of each first target object in the corresponding first image; acquiring at least three target points of each of the N first target objects; generating corresponding first sub-images according to the acquired at least three target points of each first target object; calculating a second proportion value of each first sub-image in the corresponding first image; acquiring a first weight, a second weight and a third weight distributed for each first image; performing first operation on the obtained N first brightness parameters and the corresponding first weight values to obtain a first sub-result of the corresponding first image; performing first operation on the obtained N first proportional values and the corresponding second weight values to obtain a second sub-result of the corresponding first image; performing first operation on the obtained N second proportional values and the corresponding third weight values to obtain a third sub-result of the corresponding first image; performing second operation on the first sub-result, the second sub-result and the third sub-result of each first image to obtain a fourth operation result of the first image; determining a fourth operation result closest to a fourth predetermined result from the N fourth operation results; acquiring a sub-parameter corresponding to the determined fourth operation result, and triggering the first output unit 84;
a first output unit 84 for outputting first information including at least a sub-parameter corresponding to the first image satisfying the predetermined condition; wherein N is a positive integer greater than or equal to 1;
In the above solution, the first target object may be a human body, or may be an object in the shooting environment, such as an ornament. Taking the first target object as a human body and the 5th first image as an example, the first analyzing unit 83 obtains the brightness parameter of the 5th first image, obtains the proportion of the area occupied by the human body in the 5th first image and takes it as the first proportion value, then takes the facial features of the human body in the 5th first image, such as the ears, eyes, nose, mouth and eyebrows, as candidate target points and selects three of them, for example the nose, the mouth and an eyebrow. The three selected target points are connected pairwise to obtain a first sub-image; the proportion of this first sub-image in the 5th first image is calculated and taken as the second proportion value. The first analyzing unit 83 multiplies the first weight assigned to the image by the brightness parameter to obtain the first sub-result of the 5th first image, multiplies the second weight assigned to the image by the first proportion value to obtain the second sub-result of the 5th first image, and multiplies the third weight assigned to the image by the second proportion value to obtain the third sub-result. The first analyzing unit 83 adds the first sub-result, the second sub-result and the third sub-result to obtain the fourth operation result; proceeding in the same way, the N first images yield N fourth operation results. The first weight, the second weight and the third weight are preset and can be set flexibly according to the actual application.
Among the N fourth operation results, the first analyzing unit 83 selects the fourth operation result closest to the fourth predetermined result, determines the first image having that operation result, looks up the shooting angle corresponding to that first image in the first parameter set, and triggers the first output unit 84 to output the shooting angle; preferably, when the first output unit 84 outputs the shooting angle, the first brightness parameter and/or the first image may also be output.
In other words, the function realized by the first analyzing unit 83 can be summarized as follows: the first analyzing unit 83 analyzes the N first images to obtain a first analysis result, and triggers the first output unit 84 when the first analysis result satisfies the predetermined condition.
Therefore, in this embodiment of the invention, the N acquired first images are analyzed according to the brightness parameter of each first image, the first proportion value of the first target object in the first image, and the second proportion value of the first sub-image in the first image, so as to obtain N fourth operation results; among the N fourth operation results, the one that satisfies the predetermined condition is selected, for example the fourth operation result closest to the fourth predetermined result, the first image corresponding to that operation result is determined, and the shooting angle adopted when that first image was shot is output. In this way, the electronic device can determine the most suitable image acquisition position according to its relative relationship with the shooting subject and provide more suitable acquisition parameters, which improves the user experience and highlights the diversity of the functions of the electronic device.
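The three-term combination of the eighth embodiment can be sketched as follows, again reading the first operation as multiplication and the second operation as addition. All names, weights, and sample values are illustrative assumptions rather than values given in the patent.

```python
def fourth_result(brightness, prop1, prop2, w1, w2, w3):
    """Fourth operation result: first operation (multiply) on each measure,
    then second operation (add) across the three sub-results."""
    return w1 * brightness + w2 * prop1 + w3 * prop2

def pick_best(results, predetermined):
    """Index of the fourth operation result closest to the fourth
    predetermined result."""
    return min(range(len(results)),
               key=lambda i: abs(results[i] - predetermined))

# Illustrative (brightness, first proportion, second proportion) per image
scores = [fourth_result(b, p, q, 0.5, 0.3, 0.2)
          for b, p, q in [(0.8, 0.4, 0.1), (0.6, 0.5, 0.2)]]
best = pick_best(scores, 0.5)
```

The sub-parameter (shooting angle) stored in the first parameter set for the image at index `best` would then be output as the first information.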
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may separately serve as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (11)

1. An information processing method, applied to an electronic device, the electronic device comprising an acquisition unit and a sensing unit, the method comprising:
acquiring, by the acquisition unit, N first images of a first subject based on different shooting angles along a shooting track;
detecting, by the sensing unit, a relative relationship between the first subject corresponding to each first image and the electronic device to obtain a first parameter set, where the first parameter set includes N sub-parameters corresponding to the N first images;
analyzing the N first images to obtain a first analysis result;
when the first analysis result meets a preset condition, outputting first information, wherein the first information at least comprises a sub-parameter corresponding to the first image meeting the preset condition, and the first information comprises a shooting angle;
wherein N is a positive integer greater than or equal to 1.
2. The method according to claim 1, wherein the detecting, by the sensing unit, a relative relationship between the first subject corresponding to each first image and the electronic device to obtain a first parameter set comprises:
acquiring, by the sensing unit, a certain number of feature points in each first image, detecting position changes of the feature points in the first images, obtaining the relative relationship between the first subject corresponding to each first image and the electronic device, and obtaining the first parameter set.
3. The method according to claim 1, wherein the parsing the N first images to obtain a first parsing result comprises:
acquiring a first brightness parameter of each first image in the N first images;
correspondingly, when the first parsing result meets a predetermined condition, outputting first information, including:
determining the first brightness parameter closest to a predetermined condition among the N first brightness parameters;
and obtaining the sub-parameter corresponding to the determined first brightness parameter, and outputting the first information at least comprising the sub-parameter.
4. The method according to claim 1, wherein the parsing the N first images to obtain a first parsing result comprises:
acquiring a first target object in each first image from the N first images;
calculating a first proportion value of each first target object in the corresponding first image;
correspondingly, when the first parsing result meets a predetermined condition, outputting first information, including:
determining a first proportional value closest to a first predetermined value;
obtaining a sub-parameter corresponding to the determined first scale value, and outputting the first information at least including the sub-parameter.
5. The method according to claim 1, wherein the parsing the N first images to obtain a first parsing result comprises:
acquiring a first target object in each first image from the N first images;
acquiring at least three target points of each of the N first target objects;
generating corresponding first sub-images according to the acquired at least three target points of each first target object;
calculating a second proportion value of each first sub-image in the corresponding first image;
correspondingly, when the first parsing result meets a predetermined condition, outputting first information, including:
determining a second proportion value closest to a second predetermined value;
obtaining the sub-parameter corresponding to the determined second proportion value, and outputting the first information at least including the sub-parameter.
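In claim 5 the first sub-image is spanned by at least three target points, so its area, and hence the second proportion value, can be computed with the shoelace formula once the points are ordered along the sub-image boundary. A sketch with hypothetical names:

```python
# Sketch of claim 5: area of the sub-image spanned by >= 3 target points,
# via the shoelace formula, divided by the area of the full first image.
def polygon_area(points):
    """Shoelace formula over boundary-ordered (x, y) target points."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def second_proportion(target_points, image_width, image_height):
    """Second proportion value: first sub-image area / first image area."""
    return polygon_area(target_points) / (image_width * image_height)
```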
6. The method according to claim 1, wherein the parsing the N first images to obtain a first parsing result comprises:
acquiring a first brightness parameter of each first image in the N first images;
acquiring a first target object in each first image;
calculating a first proportion value of each first target object in the corresponding first image;
acquiring a first weight and a second weight assigned to each first image;
performing a first operation on the obtained N first brightness parameters and the corresponding first weights to obtain a first sub-result of the corresponding first image;
performing a first operation on the obtained N first proportion values and the corresponding second weights to obtain a second sub-result of the corresponding first image;
performing a second operation on the first sub-result and the second sub-result of each first image to obtain a first operation result of the first image;
correspondingly, when the first parsing result meets a predetermined condition, outputting first information, including:
determining a first operation result closest to a first preset result in the obtained N first operation results;
obtaining a sub-parameter corresponding to the determined first operation result, and outputting the first information at least including the sub-parameter.
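Claims 6 through 9 leave the "first operation" and "second operation" abstract. On one plausible reading (an assumption, not stated in the claims), the first operation multiplies each measure by its weight and the second operation sums the weighted sub-results, giving a per-image score that is compared against the preset result:

```python
# Sketch of claims 6-8 under an assumed reading: first operation = weighted
# multiply per measure, second operation = sum of the weighted sub-results.
def operation_result(measures, weights):
    """Combine one image's measures (e.g. brightness, proportion) with the
    weights assigned to that kind of measure."""
    return sum(m * w for m, w in zip(measures, weights))

def select_image(per_image_measures, weights, preset):
    """Return the index of the first image whose operation result is
    closest to the preset result."""
    results = [operation_result(m, weights) for m in per_image_measures]
    return min(range(len(results)), key=lambda i: abs(results[i] - preset))
```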
7. The method according to claim 1, wherein the parsing the N first images to obtain a first parsing result comprises:
acquiring a first brightness parameter of each first image in the N first images;
acquiring a first target object in each first image;
acquiring at least three target points of each of the N first target objects;
generating corresponding first sub-images according to the acquired at least three target points of each first target object;
calculating a second proportion value of each first sub-image in the corresponding first image;
acquiring a first weight and a third weight assigned to each first image;
performing a first operation on the obtained N first brightness parameters and the corresponding first weights to obtain a first sub-result of the corresponding first image;
performing a first operation on the obtained N second proportion values and the corresponding third weights to obtain a third sub-result of the corresponding first image;
performing a second operation on the first sub-result and the third sub-result of each first image to obtain a second operation result of the first image;
correspondingly, when the first parsing result meets a predetermined condition, outputting first information, including:
determining a second operation result closest to a second preset result in the obtained N second operation results;
obtaining a sub-parameter corresponding to the determined second operation result, and outputting the first information at least including the sub-parameter.
8. The method according to claim 1, wherein the parsing the N first images to obtain a first parsing result comprises:
acquiring a first target object in each first image from the N first images;
calculating a first proportion value of each first target object in the corresponding first image;
acquiring at least three target points of each of the N first target objects;
generating corresponding first sub-images according to the acquired at least three target points of each first target object;
calculating a second proportion value of each first sub-image in the corresponding first image;
acquiring a second weight and a third weight assigned to each first image;
performing a first operation on the obtained N first proportion values and the corresponding second weights to obtain a second sub-result of the corresponding first image;
performing a first operation on the obtained N second proportion values and the corresponding third weights to obtain a third sub-result of the corresponding first image;
performing a second operation on the second sub-result and the third sub-result of each first image to obtain a third operation result of the first image;
correspondingly, when the first parsing result meets a predetermined condition, outputting first information, including:
determining a third operation result closest to a third preset result in the obtained N third operation results;
obtaining a sub-parameter corresponding to the determined third operation result, and outputting the first information at least including the sub-parameter.
9. The method according to claim 1, wherein the parsing the N first images to obtain a first parsing result comprises:
acquiring a first brightness parameter of each first image in the N first images;
acquiring a first target object in each first image;
calculating a first proportion value of each first target object in the corresponding first image;
acquiring at least three target points of each of the N first target objects;
generating corresponding first sub-images according to the acquired at least three target points of each first target object;
calculating a second proportion value of each first sub-image in the corresponding first image;
acquiring a first weight, a second weight and a third weight assigned to each first image;
performing a first operation on the obtained N first brightness parameters and the corresponding first weights to obtain a first sub-result of the corresponding first image;
performing a first operation on the obtained N first proportion values and the corresponding second weights to obtain a second sub-result of the corresponding first image;
performing a first operation on the obtained N second proportion values and the corresponding third weights to obtain a third sub-result of the corresponding first image;
performing a second operation on the first sub-result, the second sub-result and the third sub-result of each first image to obtain a fourth operation result of the first image;
correspondingly, when the first parsing result meets a predetermined condition, outputting first information, including:
determining a fourth operation result closest to a fourth predetermined result from the N fourth operation results;
obtaining a sub-parameter corresponding to the determined fourth operation result, and outputting the first information at least including the sub-parameter.
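Claim 9 combines all three measures. Under the same assumed reading as for claims 6 through 8 (first operation = weighted multiply, second operation = sum; all names hypothetical), the fourth operation result and the final sub-parameter selection look like:

```python
# Sketch of claim 9 under an assumed reading of the two operations.
def fourth_operation_result(brightness, prop1, prop2, w1, w2, w3):
    """Second operation (assumed: sum) over the three weighted sub-results."""
    return brightness * w1 + prop1 * w2 + prop2 * w3

def best_sub_parameter(images, weights, preset, sub_params):
    """images: one (brightness, first proportion, second proportion) triple
    per first image; returns the sub-parameter (e.g. shooting angle) of the
    image whose fourth operation result is closest to the preset result."""
    w1, w2, w3 = weights
    results = [fourth_operation_result(b, p1, p2, w1, w2, w3)
               for b, p1, p2 in images]
    best = min(range(len(results)), key=lambda i: abs(results[i] - preset))
    return sub_params[best]
```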
10. An electronic device, comprising: an acquisition unit, a sensing unit, a first obtaining unit, a first analysis unit and a first output unit; wherein,
the acquisition unit is configured to acquire N first images of a first subject at different shooting angles along a shooting track;
the first obtaining unit is configured to obtain a first parameter set by detecting, through the sensing unit, a relative relationship between the first subject and the electronic device corresponding to each first image, wherein the first parameter set comprises N sub-parameters corresponding to the N first images;
the first analysis unit is configured to parse the N first images to obtain a first parsing result, and to trigger the first output unit when the first parsing result meets a predetermined condition;
the first output unit is configured to output first information, wherein the first information at least comprises the sub-parameter corresponding to the first image that meets the predetermined condition, and the first information comprises a shooting angle;
wherein N is a positive integer greater than or equal to 1.
11. The electronic device of claim 10, wherein the first obtaining unit is further configured to:
acquiring a number of feature points in each first image through the sensing unit, detecting position changes of the feature points across the first images, obtaining the relative relationship between the first subject corresponding to each first image and the electronic device, and thereby obtaining the first parameter set.
CN201410312119.2A 2014-07-02 2014-07-02 Information processing method and electronic device Active CN104135610B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410312119.2A CN104135610B (en) Information processing method and electronic device

Publications (2)

Publication Number Publication Date
CN104135610A CN104135610A (en) 2014-11-05
CN104135610B true CN104135610B (en) 2019-05-31

Family

ID=51808124

Country Status (1)

Country Link
CN (1) CN104135610B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104883495B * 2015-04-30 2018-05-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Photographing method and device
CN104902172A * 2015-05-19 2015-09-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Determination method of shooting position and shooting terminal
CN105117903B * 2015-08-31 2020-03-24 Lenovo (Beijing) Ltd. Information processing method and electronic device
CN108259817B * 2016-12-28 2021-03-19 Nanning Fugui Precision Industrial Co., Ltd. Picture shooting system and method
CN108596666B * 2018-04-24 2021-11-30 Chongqing Ailiyun Information Technology (Group) Co., Ltd. Promotion and sale system for glasses
CN111127558B * 2019-12-20 2023-06-27 Beijing Institute of Technology Method and device for determining assembly detection angle, electronic device and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN1731828A * 2004-08-05 2006-02-08 Sony Corporation Image pickup apparatus, method of controlling image pickup and program
CN101751562A * 2009-12-28 2010-06-23 Zhenjiang Qidian Software Co., Ltd. Bank transaction image forensic acquiring method based on face recognition
CN102377905A * 2010-08-18 2012-03-14 Canon Inc. Image pickup apparatus and control method therefor
CN103546672A * 2013-11-07 2014-01-29 Suzhou Junli Software Co., Ltd. Image collecting system
CN103813088A * 2012-11-13 2014-05-21 Lenovo (Beijing) Ltd. Information processing method and electronic device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP5026484B2 * 2009-09-17 2012-09-12 Sharp Corporation Portable terminal device, image output device, captured image processing system, control method for portable terminal device, image output method, program, and recording medium

Similar Documents

Publication Publication Date Title
CN104135610B (en) Information processing method and electronic device
CN104205804B Image processing apparatus, imaging apparatus and image processing method
CN104967803B Video recording method and device
JP4778306B2 (en) Matching asynchronous image parts
CN106170978B (en) Depth map generation device, method and non-transitory computer-readable medium
CN107370942B (en) Photographing method, photographing device, storage medium and terminal
CN107111885A Method for determining the position of a portable device
US9854174B2 (en) Shot image processing method and apparatus
US20150002518A1 (en) Image generating apparatus
CN107787463B Optimized focus stack capture
CN105227855B Image processing method and terminal
KR102337209B1 (en) Method for notifying environmental context information, electronic apparatus and storage medium
EP3206188A1 (en) Method and system for realizing motion-sensing control based on intelligent device, and intelligent device
CN105120153B Image capturing method and device
CN112511743B (en) Video shooting method and device
CN105678696B Information processing method and electronic device
CN109981967B (en) Shooting method and device for intelligent robot, terminal equipment and medium
CN108989666A (en) Image pickup method, device, mobile terminal and computer-readable storage medium
JP2020072349A (en) Image processing device and image processing method
CN105143816A (en) Three-dimensional shape measurement device, three-dimensional shape measurement method, and three-dimensional shape measurement program
CN103826061B (en) Information processing method and electronic device
CN114697516B (en) Three-dimensional model reconstruction method, apparatus and storage medium
JP7293362B2 (en) Imaging method, device, electronic equipment and storage medium
CN103945120B (en) Electronic equipment and information processing method
CN104298442A (en) Information processing method and electronic device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant