CN105812645B - Information processing method and electronic equipment


Info

Publication number
CN105812645B
Authority
CN
China
Prior art keywords
image
ith
area
obtaining
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410848640.8A
Other languages
Chinese (zh)
Other versions
CN105812645A (en)
Inventor
李众庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201410848640.8A priority Critical patent/CN105812645B/en
Publication of CN105812645A publication Critical patent/CN105812645A/en
Application granted granted Critical
Publication of CN105812645B publication Critical patent/CN105812645B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information processing method and an electronic device, aiming at the technical effect that the visual effect of each area of a displayed image is substantially consistent with the real effect of that area, thereby improving the user's visual experience. The method comprises the following steps: obtaining N images of a first object, where N is a positive integer; displaying, in a predetermined area, a first display image corresponding to the first object based on the N images; detecting and obtaining a first operation performed by a first user on the first display image; when the first operation is judged to be an operation meeting a predetermined rule, obtaining a first signal indicating that the first operation corresponds to an ith sub-area of the first display image, where i is a positive integer less than or equal to N; and, in response to the first signal, obtaining from the N images the ith image that has an ith image parameter and corresponds to the ith sub-area, and displaying the ith image in the predetermined area.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to an information processing method and an electronic device.
Background
A high dynamic range (HDR) image can provide a greater dynamic range and more image detail than an ordinary image. It is synthesized from low dynamic range (LDR) images collected at different exposure times, taking for each exposure time the LDR image with the best detail, so that the result better reflects the visual effect of the real environment. In the prior art, once a corresponding HDR image of a first object has been obtained, only that HDR image is displayed whenever an image of the first object needs to be shown on a display unit.
In the process of implementing the technical solution of the embodiment of the present application, the inventor of the present application finds that at least the following technical problems exist in the prior art:
in the prior-art HDR pipeline, several images are captured at a set of fixed exposure values chosen by a preset rule and are then synthesized into the HDR image. Under normal conditions these fixed exposure values are not the accurate exposure values of the differently lit regions within the captured scene. In a real scene, and particularly in panoramic shooting, the contrast between highlight and shadow parts is large, so the captured images cannot completely cover the actual dynamic range of the shooting area. The HDR processing in the prior art therefore has the technical problem that the actual dynamic range of the shooting area cannot be fully covered.
Further, because the prior art displays only the HDR image of the first object whenever an image of the first object is to be shown, the displayed HDR image has the technical problem that the visual effect of each region differs from the real effect of the corresponding region on the first object.
Disclosure of Invention
The invention provides an information processing method and an electronic device to solve the technical problem that, in the HDR image of a first object displayed in the prior art, the visual effect of each area differs from the real effect of the corresponding area on the first object. This achieves the technical effect that the visual effect of each area is substantially consistent with the real effect of that area, and thereby improves the user's visual experience.
In one aspect, an embodiment of the present application provides an information processing method applied to an electronic device, where the electronic device is capable of displaying an image of a first object in a predetermined area, and the method includes:
obtaining N images of the first object, wherein N is a positive integer;
displaying a first display image corresponding to the first object in the predetermined area based on the N images;
detecting and obtaining a first operation of a first user on the first display image;
when the first operation is judged to be an operation in accordance with a preset rule, obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image, wherein i is a positive integer less than or equal to N;
and responding to the first signal, acquiring the ith image with the ith image parameter in the N images corresponding to the ith sub-area, and displaying the ith image in the preset area.
Optionally, the detecting to obtain a first operation performed on the first display image by a first user specifically includes:
and detecting and obtaining the gaze action of the eyeball of the first user.
Optionally, when it is determined that the first operation is an operation in accordance with a predetermined rule, obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image specifically includes:
responding to the gazing action, and obtaining current gazing information corresponding to the gazing action, wherein the current gazing information comprises a current gazing position and a continuous gazing duration;
when the continuous watching duration is greater than or equal to a first preset duration, determining the watching action as the operation meeting a preset rule;
and obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image based on the corresponding relation between the current watching position and the N sub-areas of the first object.
Optionally, the detecting to obtain a first operation performed on the first display image by a first user specifically includes:
and detecting and obtaining the amplification operation of the first user on the first display image.
Optionally, when it is determined that the first operation is an operation in accordance with a predetermined rule, obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image specifically includes:
responding to the amplification operation, and obtaining amplification operation information, wherein the amplification operation information comprises a current amplification position and a first ratio between the display area of an amplification area and the area of the preset area;
when the first ratio is larger than a preset value, determining that the amplification operation is the operation in accordance with a preset rule;
and obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image based on the corresponding relation between the current amplification position and the N sub-areas of the first object.
Optionally, when the ith image parameter is an ith acquisition parameter value, and an image acquired with the ith acquisition parameter value has its ith sub-region in an accurate acquisition state, the obtaining N images of the first object specifically includes:
sequentially taking i as 1 to N, detecting the first object, and obtaining the ith acquisition parameter value corresponding to the ith sub-region in the first object;
obtaining the ith image based on the ith acquisition parameter value;
and when i is N, obtaining the N images.
Optionally, the displaying a first display image corresponding to the first object in the predetermined area based on the N images specifically includes:
displaying a composite image synthesized based on the N images in the predetermined area, wherein the composite image is the first display image; or
displaying any one of the N images in the predetermined area, wherein that image is the first display image.
Optionally, after acquiring, in response to the first signal, an ith image having an ith image parameter from the N images corresponding to the ith sub-region and displaying the ith image in the predetermined region, the method further includes:
detecting and obtaining a second operation of the first user on the ith image;
when the second operation is judged to be an operation in accordance with the predetermined rule, obtaining a second signal representing that the second operation corresponds to a jth sub-area in the ith image, wherein j is a positive integer less than or equal to N;
and responding to the second signal, acquiring a jth image with a jth image parameter in the N images corresponding to the jth sub-area, and displaying the jth image in the preset area.
On the other hand, an embodiment of the present application further provides an electronic device, where the electronic device is capable of displaying an image of a first object in a predetermined area, and the electronic device includes:
an obtaining module, configured to obtain N images of the first object, where N is a positive integer;
a first display module, configured to display a first display image corresponding to the first object in the predetermined area based on the N images;
the first sensing module is used for detecting and obtaining a first operation of a first user on the first display image;
the first judging module is used for obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image when the first operation is judged to be the operation in accordance with a preset rule, wherein i is a positive integer less than or equal to N;
and the second display module is used for responding to the first signal, acquiring the ith image with the ith image parameter in the N images corresponding to the ith sub-area, and displaying the ith image in the preset area.
Optionally, the first sensing module specifically includes:
and the first sensing submodule is used for detecting and acquiring the gazing action of the eyeballs of the first user.
Optionally, the first determining module specifically includes:
the first judgment submodule is used for responding to the gazing action and obtaining current gazing information corresponding to the gazing action, wherein the current gazing information comprises a current gazing position and a continuous gazing duration;
the second judgment submodule is used for determining the gazing action as the operation which accords with a preset rule when the continuous gazing duration is greater than or equal to a first preset duration;
and the third judgment sub-module is used for acquiring a first signal representing that the first operation corresponds to the ith sub-region in the first display image based on the corresponding relation between the current gaze position and the N sub-regions of the first object.
Optionally, the first sensing module specifically includes:
and the second sensing submodule is used for detecting and obtaining the amplification operation of the first user on the first display image.
Optionally, the first determining module specifically includes:
the fourth judgment submodule is used for responding to the amplification operation and obtaining amplification operation information, wherein the amplification operation information comprises a current amplification position and a first ratio between the display area of an amplification area and the area of the preset area;
a fifth judgment sub-module, configured to determine that the amplification operation is the operation that meets a predetermined rule when the first ratio is greater than a preset value;
and the sixth judgment submodule is used for acquiring a first signal representing that the first operation corresponds to the ith sub-area in the first display image based on the corresponding relation between the current amplification position and the N sub-areas of the first object.
Optionally, when the ith image parameter is an ith acquisition parameter value, and an image acquired with the ith acquisition parameter value has its ith sub-area in an accurate acquisition state, the obtaining module specifically includes:
the first acquisition submodule is used for sequentially taking i as 1 to N, detecting the first object and acquiring the ith acquisition parameter value corresponding to the ith sub-area in the first object;
the second acquisition submodule is used for acquiring the ith image based on the ith acquisition parameter value;
and when i is N, obtaining the N images.
Optionally, the first display module specifically includes:
a first display sub-module, configured to display, in the predetermined area, a composite image that is synthesized based on the N images, where the composite image is the first display image; or
a second display sub-module, configured to display any one of the N images in the predetermined area, wherein that image is the first display image.
Optionally, when the display area of the predetermined area is smaller than the area of any one image and adjacent images need to be spliced, the method further includes matching and splicing two images whose exposure parameters are close to each other and applying a gradual transition across the spliced images, as sketched below.
Optionally, when an image is exposed region by region, the smaller the partitions of the regions, the closer the displayed result is to the actual real effect.
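As a rough illustration of such exposure-matched splicing with a gradual transition, the sketch below blends the overlapping strip of two neighbouring captures with a linear ramp. The overlap width, the linear weighting and the NumPy array interface are assumptions made for illustration, not the specific splicing method of this disclosure.

```python
import numpy as np

def splice_with_transition(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Splice two adjacent images whose exposure parameters are close,
    blending the 'overlap' columns they share with a linear ramp."""
    assert overlap > 0 and left.shape[0] == right.shape[0]
    # Weights run from fully 'left' at one edge of the seam to fully 'right' at the other.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :]
    seam = alpha * left[:, -overlap:] + (1.0 - alpha) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])

# Usage: two 8x16 captures that share their 4 border columns.
a = np.full((8, 16), 0.8)   # slightly brighter exposure
b = np.full((8, 16), 0.6)   # slightly darker exposure
print(splice_with_transition(a, b, overlap=4).shape)   # -> (8, 28)
```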
Optionally, the electronic device further includes:
the second sensing module is used for detecting and obtaining second operation of the first user on the ith image;
a second judging module, configured to, when it is judged that the second operation is an operation that meets the predetermined rule, obtain a second signal that represents that the second operation corresponds to a jth sub-region in the ith image, where j is a positive integer less than or equal to N;
and the third display module is used for responding to the second signal, acquiring a jth image with a jth image parameter in the N images corresponding to the jth sub-area, and displaying the jth image in the preset area.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
1. According to the technical scheme in the embodiments of the present application, the object to be shot undergoes region-by-region photometric detection, so that the accurate exposure value of each region is obtained and an image is captured at that region's accurate exposure value. The image information of each region can therefore be completely preserved. Since a plurality of images of the subject at different exposure values are finally obtained, the captured image information can completely cover the actual dynamic range of the shooting area, which solves the technical problem that prior-art HDR processing cannot fully cover the actual dynamic range of the shooting area, and further achieves the technical effects of increasing the dynamic range of the image and retaining all of its detail information.
2. According to the technical scheme in the embodiments of the present application, by simulating the behaviour of the human eye, the image displayed within the user's line of sight is rich in detail. This solves the prior-art technical problem that the display effect of each area of the first object differs from the real effect of the corresponding area on the first object, and achieves the technical effect that the visual effect of each area is substantially consistent with the real effect of that area, thereby improving the user's visual experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a flowchart of an information processing method according to a first embodiment of the present application;
Fig. 2 is a schematic diagram of a first object according to the first embodiment of the present application;
fig. 3 is a flowchart illustrating a specific implementation of step S101 in an embodiment of the present application;
fig. 4 is a flowchart of a first implementation manner of steps S103 and S104 in the first embodiment of the present application;
fig. 5 is a flowchart of a second implementation manner of steps S103 and S104 in the first embodiment of the present application;
fig. 6 is a schematic position diagram of a first enlarging operation in a second implementation manner of steps S103 and S104 in the first embodiment of the application;
fig. 7 is a schematic position diagram of a second enlarging operation in a second implementation manner of steps S103 and S104 in the first embodiment of the application;
fig. 8 is a functional structure diagram of an electronic device according to a second embodiment of the present application.
Detailed Description
The embodiments of the application provide an information processing method and an electronic device to solve the technical problem that, in the HDR image of a first object displayed in the prior art, the visual effect of each region differs from the real effect of the corresponding region on the first object. This achieves the technical effect that the visual effect of each region is substantially consistent with the real effect of that region, and thereby improves the user's visual experience.
In order to solve the technical problems, the general idea of the embodiment of the present application is as follows:
the information processing method is applied to an electronic device, the electronic device can display an image of a first object in a predetermined area, and the method comprises the following steps:
obtaining N images of the first object, wherein N is a positive integer;
displaying a first display image corresponding to the first object in the predetermined area based on the N images;
detecting and obtaining a first operation of a first user on the first display image;
when the first operation is judged to be an operation in accordance with a preset rule, obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image, wherein i is a positive integer less than or equal to N;
and responding to the first signal, acquiring the ith image with the ith image parameter in the N images corresponding to the ith sub-area, and displaying the ith image in the preset area.
In this technical scheme, the method uses a plurality of images to completely record all of the image information of the object to be displayed and, within the user's line of sight, displays the image that contains the richest detail for the gazed-at area. The visual effect of each region is therefore substantially consistent with the real effect of that region, which improves the user's visual experience.
To make the technical solutions easier to understand, they are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present invention are detailed descriptions of the technical solutions rather than limitations of them, and the technical features in the embodiments and examples may be combined with one another provided there is no conflict.
Example one
An embodiment of the present application provides an information processing method applied to an electronic device, where the electronic device is capable of displaying an image of a first object in a predetermined area. The electronic device may be one that has a display unit, such as a smart phone, a tablet computer, or a computer connected to a very large display screen; it may also be an electronic device without a display unit, and of course other electronic devices may be used as well, which the embodiments of the present application do not enumerate one by one.
Referring to fig. 1, the information processing method in the present application includes the following steps:
S101: obtaining N images of the first object, wherein N is a positive integer;
S102: displaying a first display image corresponding to the first object in the predetermined area based on the N images;
S103: detecting and obtaining a first operation of a first user on the first display image;
S104: when the first operation is judged to be an operation in accordance with a preset rule, obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image, wherein i is a positive integer less than or equal to N;
S105: and responding to the first signal, acquiring the ith image with the ith image parameter in the N images corresponding to the ith sub-area, and displaying the ith image in the preset area.
When a user needs to browse other positions of the image, the electronic device can display the corresponding image according to the user need, and the specific implementation steps are as follows:
S106: detecting and obtaining a second operation of the first user on the ith image;
S107: when the second operation is judged to be an operation in accordance with the predetermined rule, obtaining a second signal representing that the second operation corresponds to a jth sub-area in the ith image, wherein j is a positive integer less than or equal to N;
S108: and responding to the second signal, acquiring a jth image with a jth image parameter in the N images corresponding to the jth sub-area, and displaying the jth image in the preset area.
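To make the overall flow of steps S101 to S108 concrete, the sketch below shows one possible Python outline of the display loop. The data structures and helper names (SubRegion, region_index_at, the hard-coded bounding boxes) are illustrative assumptions, not part of the disclosed embodiment; only the step numbering follows the method above.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SubRegion:
    bbox: Tuple[float, float, float, float]   # (x0, y0, x1, y1) in display coordinates
    image_id: int                              # index of the image accurately acquired for this sub-area

def region_index_at(pos: Tuple[float, float], regions: List[SubRegion]) -> Optional[int]:
    """Map a valid operation position to the ith sub-area it falls in (S104 / S107)."""
    x, y = pos
    for idx, r in enumerate(regions):
        x0, y0, x1, y1 = r.bbox
        if x0 <= x <= x1 and y0 <= y <= y1:
            return idx
    return None

# Two sub-areas as in Fig. 2: bright sky in the upper half, dark ground in the lower half.
regions = [SubRegion(bbox=(0.0, 0.0, 1.0, 0.5), image_id=0),
           SubRegion(bbox=(0.0, 0.5, 1.0, 1.0), image_id=1)]
images = ["image at EV 0.5 (sky correctly exposed)",
          "image at EV 1.2 (ground correctly exposed)"]   # S101: N images of the first object

current = images[0]                          # S102: display some first display image
for op_pos in [(0.3, 0.2), (0.7, 0.8)]:      # S103 / S106: operations already judged valid
    i = region_index_at(op_pos, regions)     # S104 / S107: signal mapping to the ith sub-area
    if i is not None:
        current = images[regions[i].image_id]   # S105 / S108: switch to the ith image
        print("display:", current)
```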
The specific implementation of the method in the embodiment of the present application is described below using, as an example, an electronic device that is a computer connected to a very large touch display screen.
First, the computer executes step S101, that is, obtaining N images of the first object, where N is a positive integer. The N images may be obtained by an image acquisition module of the computer, such as a camera, or by reading N suitable images pre-stored in an image library. The description below assumes that the computer acquires the images through an attached image acquisition module.
Referring to Fig. 2, assume that a user wants to display on the screen an image of a first object consisting of a blue sky with white clouds and tree shadows on the ground. The contrast that human eyes can perceive between the bright blue-sky-and-white-cloud part and the dark tree-shadow part corresponds to a dynamic range far larger than what image acquisition and display devices can reach. Therefore, in general, and especially when the bright and dark parts have large contrast, recording and displaying the image information of the first object with a single image inevitably loses information, which in turn reduces contrast or even seriously distorts the displayed image compared with the actual object and harms the viewing experience. To ensure that the image information of the first object is completely preserved while respecting the exposure-latitude limits of existing image acquisition devices, in the embodiment of the present application a plurality of images of the first object are acquired with different acquisition parameters, so that the image information of the first object is retained in full.
Depending on the user's needs, the acquisition parameter may be any of several image parameters, such as an exposure value or a saturation, which the embodiments of the present application do not enumerate one by one. Referring to Fig. 3, the implementation steps are:
S301: sequentially taking i as 1 to N, detecting the first object, and obtaining the ith acquisition parameter value corresponding to the ith sub-region in the first object.
S302: obtaining the ith image based on the ith acquisition parameter value; and when i is N, obtaining the N images.
The following will specifically describe the image acquisition process by taking the acquisition parameter as an exposure value.
Referring to Fig. 2, specifically, photometric detection is first performed on the first object whose image needs to be captured, and it is found that, according to the difference in brightness, the first object can be divided into a first area 21 (the bright part at the upper left) and a second area 22 (the dark part at the lower right). The first area 21 is a sky area containing blue sky and white clouds, and the second area 22 is a ground area containing tree shadows. The accurate exposure value of each of the two areas is then judged from its brightness. Assume that the accurate exposure value of the first area 21 is 0.5 and that of the second area 22 is 1.2.
The image acquisition unit of the computer then captures the first object at these two exposure values, obtaining a first image with an exposure value of 0.5 and a second image with an exposure value of 1.2. In the first image the sky area is correctly exposed while the ground area is displayed abnormally (too dark); correspondingly, in the second image the sky area is abnormally exposed (overexposed) while the ground area is displayed normally.
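A minimal sketch of this per-region metering step is shown below: it splits a luminance map into brightness bands and derives one exposure value per band. The quantile-based split, the mid-grey exposure formula and the NumPy interface are assumptions for illustration only, not the metering method of the embodiment.

```python
import numpy as np

def per_region_exposures(luminance: np.ndarray, n_regions: int = 2):
    """Split a luminance map into n_regions brightness bands and return one
    exposure value per band (illustrative formula only)."""
    # Quantile thresholds split the scene into equally populated brightness bands.
    thresholds = np.quantile(luminance, np.linspace(0.0, 1.0, n_regions + 1))
    labels = np.clip(np.digitize(luminance, thresholds[1:-1]), 0, n_regions - 1)
    exposures = []
    for k in range(n_regions):
        mean_lum = luminance[labels == k].mean()
        # Brighter bands get a shorter exposure; 0.18 is the usual mid-grey target.
        exposures.append(round(0.18 / max(mean_lum, 1e-6), 2))
    return labels, exposures

# Example: bright sky (luminance 0.36) in the top half, dark ground (0.15) below.
scene = np.vstack([np.full((50, 100), 0.36), np.full((50, 100), 0.15)])
labels, exposures = per_region_exposures(scene)
print(exposures)   # -> [1.2, 0.5]: the darker band gets the longer exposure, as in the example
```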
After obtaining two images of the first object divided into two regions, the computer performs step S102: and displaying a first display image corresponding to the first object in the preset area based on the N images.
That is, the computer displays a first display image of the first object on the very large touch display screen. Either one of the two acquired images may be displayed as the first display image, or a high dynamic range image obtained by processing (synthesizing) the two acquired images may be displayed as the first display image.
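If the composite option is chosen, one simple way to build such a first display image from the N differently exposed captures is a naive per-pixel well-exposedness weighting, sketched below. This weighting scheme and interface are illustrative assumptions and are not the HDR synthesis claimed in this disclosure.

```python
import numpy as np

def first_display_image(images, mode="fuse"):
    """Return either any one of the N captures or a naive fusion of them (S102).
    'images' is a list of float arrays scaled to [0, 1]."""
    if mode == "any":
        return images[0]
    stack = np.stack(images).astype(np.float64)
    # Weight each pixel by how close it is to mid-grey, i.e. how well exposed it is.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * stack).sum(axis=0)

# Usage with two dummy 4x4 captures standing in for the Fig. 2 example.
sky_exposed = np.random.rand(4, 4)
ground_exposed = np.random.rand(4, 4)
composite = first_display_image([sky_exposed, ground_exposed])
```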
Then, the computer can collect the operation of the user through the collection module, and determine the area that the user wants to watch, that is, execute steps S103 and S104.
The computer can sense various types of user operations: for example, the user may select a viewing area by tapping the touch display screen, the computer may determine the area the user selects by capturing the position of the user's gaze, or other approaches may be used.
Steps S103 and S104 are explained in detail below for two of these implementations: capturing the position of the user's line of sight, and determining the region the user wants to view from the user's enlargement operation on a local region of the image.
Referring to fig. 4, the specific steps of the first implementation are as follows:
S401: detecting and obtaining a gazing action of the eyeballs of the first user;
S402: responding to the gazing action, and obtaining current gazing information corresponding to the gazing action, wherein the current gazing information comprises a current gazing position and a continuous gazing duration;
S403: when the continuous gazing duration is greater than or equal to a first preset duration, determining the gazing action as the operation meeting a preset rule;
S404: and obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image based on the corresponding relation between the current gazing position and the N sub-areas of the first object.
Specifically, the computer has a sensing module that can detect the user's eye movement and record information related to it, such as the gaze position and its timing.
When the sensing module detects the user's eye movement and judges that the user's line of sight is gazing at the first display image of the first object within the sensing area, it times the gaze. When the gaze stays at the same position for longer than a preset duration, for example 0.5 seconds, the gaze is treated as a valid operation indicating that the user wishes to view the image of that area. The computer then judges, from the first gaze position detected by the sensing unit and its correspondence to the sub-areas of the first display image, which area the gaze position belongs to, and therefore which area the user wants displayed. For example, referring to Fig. 2, when the user's gaze position is the first gaze position 211, the computer determines that the user wants to view the first area 21, because the first gaze position 211 lies inside the first area 21. The computer then generates a first signal indicating that the user wishes to have the first area 21 displayed.
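The gaze handling of steps S401 to S404 can be sketched as follows. The 0.5 s dwell threshold comes from the example above; the gaze-sample format, the region bounding boxes and the helper names are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GazeSample:
    position: Tuple[float, float]   # gaze point in normalized display coordinates
    timestamp: float                # seconds

def gazed_subregion(samples: List[GazeSample],
                    region_bboxes: List[Tuple[float, float, float, float]],
                    min_dwell: float = 0.5) -> Optional[int]:
    """Return the index of the sub-area the user has gazed at for at least
    min_dwell seconds (S402/S403), or None if the rule is not yet met."""
    if not samples:
        return None

    def region_of(pos):
        for idx, (x0, y0, x1, y1) in enumerate(region_bboxes):
            if x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1:
                return idx
        return None

    current = region_of(samples[-1].position)
    if current is None:
        return None
    # Walk back while the gaze stays inside the same sub-area to measure the dwell time.
    start = samples[-1].timestamp
    for s in reversed(samples):
        if region_of(s.position) != current:
            break
        start = s.timestamp
    dwell = samples[-1].timestamp - start
    return current if dwell >= min_dwell else None   # S404: sub-area for the first signal

# Example with the two sub-areas of Fig. 2 (sky in the upper half, ground below).
bboxes = [(0.0, 0.0, 1.0, 0.5), (0.0, 0.5, 1.0, 1.0)]
track = [GazeSample((0.3, 0.2), t / 10) for t in range(7)]   # 0.6 s dwelling on the sky area
print(gazed_subregion(track, bboxes))   # -> 0, so the sky-exposed image is displayed
```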
Referring to fig. 5, the specific steps of the second implementation are as follows:
s501: detecting and obtaining the amplification operation of the first user on the first display image;
s502: responding to the amplification operation, and obtaining amplification operation information, wherein the amplification operation information comprises a current amplification position and a first ratio between the display area of an amplification area and the area of the preset area;
s503: when the first ratio is larger than a preset value, determining that the amplification operation is the operation in accordance with a preset rule;
s504: and obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image based on the corresponding relation between the current amplification position and the N sub-areas of the first object.
Specifically, the display unit of the computer is a touch display screen, which can detect the user's operations on the screen and record information related to them, such as the position of the operation.
When the touch display screen detects that the user performs an enlargement operation at a first position of the first display image, it also detects the ratio between the display area of the enlarged region and the display area of the touch display screen. When this ratio is greater than a predetermined value, for example 30%, the enlargement is treated as a valid operation indicating that the user wishes to view the image of that area. The computer then judges, from the current enlargement position detected by the touch display screen and its correspondence to the sub-areas of the first display image, which area the enlargement position belongs to, and therefore which area the user wants displayed. For example, referring to Fig. 6, when the user's enlarged area is the first enlarged area 61, the computer determines that the user wants to view the first area 21, because the first enlarged area 61 lies inside the first area 21. The computer then generates a first signal indicating that the user wishes to have the first area 21 displayed.
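The enlargement handling of steps S501 to S504 can be sketched in the same style. The 30% ratio is taken from the example above; the unit-square display coordinates and the data layout are assumptions for illustration.

```python
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]   # (x0, y0, x1, y1) in normalized display coordinates

def enlarged_subregion(zoom_center: Tuple[float, float],
                       zoom_box: Box,
                       region_bboxes: List[Box],
                       min_ratio: float = 0.30) -> Optional[int]:
    """Return the index of the sub-area selected by an enlargement operation (S501-S504),
    or None if the enlarged area does not cover at least min_ratio of the display."""
    x0, y0, x1, y1 = zoom_box
    ratio = max(0.0, x1 - x0) * max(0.0, y1 - y0)   # the display is the unit square
    if ratio <= min_ratio:
        return None
    for idx, (rx0, ry0, rx1, ry1) in enumerate(region_bboxes):
        if rx0 <= zoom_center[0] <= rx1 and ry0 <= zoom_center[1] <= ry1:
            return idx
    return None

# The user pinches to enlarge around a point in the lower (ground) half of Fig. 2.
bboxes = [(0.0, 0.0, 1.0, 0.5), (0.0, 0.5, 1.0, 1.0)]
print(enlarged_subregion((0.7, 0.8), (0.3, 0.5, 1.0, 1.0), bboxes))   # -> 1 (second area 22)
```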
After generating the first signal, the computer executes step S105: and responding to the first signal, acquiring the ith image with the ith image parameter in the N images corresponding to the ith sub-area, and displaying the ith image in the preset area.
That is, the computer selects the first image, in which the first region 21 is accurately exposed, and displays it to the user in place of the originally displayed first display image.
When the user needs to browse other positions of the image, the computer will continue to execute steps S106, S107, and S108.
Specifically, referring to fig. 2, after the user turns the line of sight to the second gaze position 221 and the duration of the gaze is greater than the preset time duration of 0.5 seconds, the computer determines that the user needs to watch the second area 22.
Alternatively, referring to fig. 7, when the user performs the zoom-in operation on the second position and the ratio between the display area of the zoom-in region 72 and the display area of the touch display screen is greater than the preset value of 30%, the computer determines that the user needs to view the second region 22.
Correspondingly, the computer selects the second image, in which the second region 22 is accurately exposed, and displays it to the user in place of the originally displayed first image.
Example two
On the other hand, based on the same inventive concept and referring to Fig. 8, an embodiment of the present application provides an electronic device capable of displaying an image of a first object in a predetermined area, the electronic device including:
an obtaining module 81, configured to obtain N images of the first object, where N is a positive integer;
a first display module 821, configured to display a first display image corresponding to the first object in the predetermined area based on the N images;
the first sensing module 831 is configured to detect and obtain a first operation performed on the first display image by a first user;
a first determining module 841, configured to, when it is determined that the first operation is an operation in accordance with a predetermined rule, obtain a first signal indicating that the first operation corresponds to an ith sub-area in the first display image, where i is a positive integer less than or equal to N;
a second display module 822, configured to, in response to the first signal, obtain an ith image having an ith image parameter from the N images corresponding to the ith sub-area, and display the ith image in the predetermined area.
In the implementation in which the specific area the user wants to view is determined by capturing the position of the user's line of sight:
the first sensing module 831 specifically includes:
and the first sensing submodule is used for detecting and acquiring the gazing action of the eyeballs of the first user.
The first determining module 841 specifically includes:
the first judgment submodule is used for responding to the gazing action and obtaining current gazing information corresponding to the gazing action, wherein the current gazing information comprises a current gazing position and a continuous gazing duration;
the second judgment submodule is used for determining the gazing action as the operation which accords with a preset rule when the continuous gazing duration is greater than or equal to a first preset duration;
and the third judgment sub-module is used for acquiring a first signal representing that the first operation corresponds to the ith sub-region in the first display image based on the corresponding relation between the current gaze position and the N sub-regions of the first object.
In the implementation in which the specific region the user wants to view is determined from the user's enlargement operation on a local region of the image:
the first sensing module 831 specifically includes:
and the second sensing submodule is used for detecting and obtaining the amplification operation of the first user on the first display image.
The first determining module 841 specifically includes:
the fourth judgment submodule is used for responding to the amplification operation and obtaining amplification operation information, wherein the amplification operation information comprises a current amplification position and a first ratio between the display area of an amplification area and the area of the preset area;
a fifth judgment sub-module, configured to determine that the amplification operation is the operation that meets a predetermined rule when the first ratio is greater than a preset value;
and the sixth judgment submodule is used for acquiring a first signal representing that the first operation corresponds to the ith sub-area in the first display image based on the corresponding relation between the current amplification position and the N sub-areas of the first object.
Due to the need to obtain multiple images of the first object at different exposure values, the obtaining module 81 specifically comprises:
the first acquisition submodule is used for sequentially taking i as 1 to N, detecting the first object and acquiring the ith acquisition parameter value corresponding to the ith sub-area in the first object;
the second acquisition submodule is used for acquiring the ith image based on the ith acquisition parameter value;
and when i is N, obtaining the N images.
The first display module 821 specifically includes:
a first display sub-module, configured to display, in the predetermined area, a composite image that is synthesized based on the N images, where the composite image is the first display image; or
a second display sub-module, configured to display any one of the N images in the predetermined area, wherein that image is the first display image.
The electronic device further includes:
a second sensing module 832, configured to detect and obtain a second operation performed on the ith image by the first user;
a second determining module 842, configured to, when it is determined that the second operation is an operation that meets the predetermined rule, obtain a second signal that represents that the second operation corresponds to a jth sub-region in the ith image, where j is a positive integer less than or equal to N;
the third display module 823 is configured to, in response to the second signal, obtain a jth image with a jth image parameter from the N images corresponding to the jth sub-region, and display the jth image in the predetermined region.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
According to the technical scheme in the embodiments of the present application, the object to be shot undergoes region-by-region photometric detection, so that the accurate exposure value of each region is obtained and an image is captured at that region's accurate exposure value. The image information of each region can therefore be completely preserved. Since a plurality of images of the subject at different exposure values are finally obtained, the captured image information can completely cover the actual dynamic range of the shooting area, which solves the technical problem that prior-art HDR processing cannot fully cover the actual dynamic range of the shooting area, and further achieves the technical effects of increasing the dynamic range of the image and retaining all of its detail information.
Furthermore, according to the technical scheme in the embodiments of the present application, by simulating the behaviour of the human eye, the image displayed within the user's line of sight is rich in detail. This solves the prior-art technical problem that the display effect of each area of the first object differs from the real effect of the corresponding area on the first object, and achieves the technical effect that the visual effect of each area is substantially consistent with the real effect of that area, thereby improving the user's visual experience.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Specifically, the computer program instructions corresponding to the image obtaining and image processing method in the embodiments of the present application may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash drive. When the computer program instructions in the storage medium that correspond to the information processing method are read and executed by an electronic device, they include the following steps:
obtaining N images of the first object, wherein N is a positive integer;
displaying a first display image corresponding to the first object in the predetermined area based on the N images;
detecting and obtaining a first operation of a first user on the first display image;
when the first operation is judged to be an operation in accordance with a preset rule, obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image, wherein i is a positive integer less than or equal to N;
and responding to the first signal, acquiring the ith image with the ith image parameter in the N images corresponding to the ith sub-area, and displaying the ith image in the preset area.
Optionally, the computer program instructions stored in the storage medium that correspond to the step of detecting and obtaining a first operation performed by a first user on the first display image specifically include, when executed, the following steps:
and detecting and obtaining the gaze action of the eyeball of the first user.
Optionally, the computer program instructions stored in the storage medium that correspond to the step of obtaining, when the first operation is determined to be an operation in accordance with a predetermined rule, a first signal representing that the first operation corresponds to the ith sub-area in the first display image specifically include, when executed, the following steps:
responding to the gazing action, and obtaining current gazing information corresponding to the gazing action, wherein the current gazing information comprises a current gazing position and a continuous gazing duration;
when the continuous watching duration is greater than or equal to a first preset duration, determining the watching action as the operation meeting a preset rule;
and obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image based on the corresponding relation between the current watching position and the N sub-areas of the first object.
Optionally, the computer program instructions stored in the storage medium that correspond to the step of detecting and obtaining a first operation performed by a first user on the first display image specifically include, when executed, the following steps:
and detecting and obtaining the amplification operation of the first user on the first display image.
Optionally, the computer program instructions stored in the storage medium that correspond to the step of obtaining, when the first operation is determined to be an operation in accordance with a predetermined rule, a first signal representing that the first operation corresponds to the ith sub-area in the first display image specifically include, when executed, the following steps:
responding to the amplification operation, and obtaining amplification operation information, wherein the amplification operation information comprises a current amplification position and a first ratio between the display area of an amplification area and the area of the preset area;
when the first ratio is larger than a preset value, determining that the amplification operation is the operation in accordance with a preset rule;
and obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image based on the corresponding relation between the current amplification position and the N sub-areas of the first object.
Optionally, when the ith image parameter is an ith acquisition parameter value and an image acquired with the ith acquisition parameter value has its ith sub-region in an accurate acquisition state, the computer program instructions stored in the storage medium that correspond to the step of obtaining the N images of the first object specifically include, when executed, the following steps:
sequentially taking i as 1 to N, detecting the first object, and obtaining the ith acquisition parameter value corresponding to the ith sub-region in the first object;
obtaining the ith image based on the ith acquisition parameter value;
and when i is N, obtaining the N images.
Optionally, the computer program instructions stored in the storage medium that correspond to the step of displaying a first display image corresponding to the first object in the predetermined area based on the N images specifically include, when executed, the following steps:
displaying a composite image synthesized based on the N images in the preset area, wherein the composite image is the first display image; or
displaying any one of the N images in the preset area, wherein that image is the first display image.
Optionally, the computer program instructions stored in the storage medium that correspond to the steps performed after acquiring, in response to the first signal, the ith image having the ith image parameter from the N images corresponding to the ith sub-region and displaying the ith image in the predetermined region specifically include, when executed, the following steps:
detecting and obtaining a second operation of the first user on the ith image;
when the second operation is judged to be an operation in accordance with the predetermined rule, obtaining a second signal representing that the second operation corresponds to a jth sub-area in the ith image, wherein j is a positive integer less than or equal to N;
and responding to the second signal, acquiring a jth image with a jth image parameter in the N images corresponding to the jth sub-area, and displaying the jth image in the preset area.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (16)

1. An information processing method applied to an electronic device capable of displaying an image of a first object in a predetermined area, the method comprising:
obtaining N images of the first object according to different acquisition parameters, wherein N is a positive integer;
displaying a first display image corresponding to the first object in the predetermined area based on the N images;
detecting and obtaining a first operation of a first user on the first display image;
when the first operation is judged to be an operation in accordance with a preset rule, obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image, wherein i is a positive integer less than or equal to N;
and responding to the first signal, acquiring and reserving the ith image with the ith image parameter in the N images corresponding to the ith sub-area, and displaying the ith image in the preset area.
2. The method of claim 1, wherein the detecting obtaining a first operation performed on the first display image by a first user specifically comprises:
and detecting and obtaining the gaze action of the eyeball of the first user.
3. The method according to claim 2, wherein obtaining a first signal indicating that the first operation corresponds to an ith sub-area in the first display image when the first operation is determined to be an operation in accordance with a predetermined rule specifically comprises:
responding to the gazing action, and obtaining current gazing information corresponding to the gazing action, wherein the current gazing information comprises a current gazing position and a continuous gazing duration;
when the continuous watching duration is greater than or equal to a first preset duration, determining the watching action as the operation meeting a preset rule;
and obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image based on the corresponding relation between the current watching position and the N sub-areas of the first object.
4. The method of claim 1, wherein the detecting obtaining a first operation performed on the first display image by a first user specifically comprises:
and detecting and obtaining the amplification operation of the first user on the first display image.
5. The method according to claim 4, wherein obtaining a first signal indicating that the first operation corresponds to an ith sub-area in the first display image when the first operation is determined to be an operation in accordance with a predetermined rule specifically comprises:
responding to the amplification operation, and obtaining amplification operation information, wherein the amplification operation information comprises a current amplification position and a first ratio between the display area of an amplification area and the area of the preset area;
when the first ratio is larger than a preset value, determining that the amplification operation is the operation in accordance with a preset rule;
and obtaining a first signal representing that the first operation corresponds to the ith sub-area in the first display image based on the corresponding relation between the current amplification position and the N sub-areas of the first object.
6. The method according to any one of claims 1 to 5, wherein, when the ith image parameter is an ith acquisition parameter value and an image acquired with the ith acquisition parameter value has its ith sub-region in an accurate acquisition state, the obtaining of the N images of the first object specifically comprises:
sequentially taking i as 1 to N, detecting the first object, and obtaining the ith acquisition parameter value corresponding to the ith sub-region in the first object;
obtaining the ith image based on the ith acquisition parameter value;
and when i is N, obtaining the N images.
7. The method according to claim 1, wherein the displaying a first display image corresponding to the first object in the predetermined area based on the N images specifically comprises:
displaying a composite image synthesized based on the N images in the preset area, wherein the composite image is the first display image; or
displaying any one of the N images in the preset area, wherein that image is the first display image.
8. The method of claim 1, wherein after acquiring an ith image having an ith image parameter from the N images corresponding to the ith sub-region in response to the first signal and displaying the ith image in the predetermined region, the method further comprises:
detecting and obtaining a second operation of the first user on the ith image;
when the second operation is judged to be an operation in accordance with the predetermined rule, obtaining a second signal representing that the second operation corresponds to a jth sub-area in the ith image, wherein j is a positive integer less than or equal to N;
and responding to the second signal, acquiring a jth image with a jth image parameter in the N images corresponding to the jth sub-area, and displaying the jth image in the preset area.
9. An electronic device capable of displaying an image of a first object within a predetermined area, the electronic device comprising:
the acquisition module is used for acquiring N images of the first object according to different acquisition parameters, wherein N is a positive integer;
a first display module, configured to display a first display image corresponding to the first object in the predetermined area based on the N images;
a first sensing module, configured to detect and obtain a first operation performed by a first user on the first display image;
a first judging module, configured to obtain, when the first operation is determined to be an operation in accordance with a predetermined rule, a first signal representing that the first operation corresponds to an ith sub-area in the first display image, wherein i is a positive integer less than or equal to N;
and a second display module, configured to, in response to the first signal, acquire and retain, from the N images, the ith image having the ith image parameter corresponding to the ith sub-area, and display the ith image in the predetermined area.
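The device claims mirror the method claims as a set of modules. Purely as an illustration of how those modules might be wired together, the class below reuses the earlier sketches; the class and method names are hypothetical, and the display object is assumed to expose a show() call. Because the selection step can run repeatedly, the same handler also covers the second operation of claims 8 and 16.

```python
class FirstObjectViewer:
    """Illustrative wiring of the modules in claim 9, reusing the sketches
    above (acquire_n_images, build_first_display_image,
    select_subarea_by_gaze); not an implementation taken from the patent."""

    def __init__(self, camera, subarea_rects, display):
        self.camera = camera
        self.subarea_rects = subarea_rects      # N rectangles, one per sub-area
        self.display = display
        self.images = []

    def acquire(self):                          # acquisition module
        self.images, _ = acquire_n_images(self.camera, self.subarea_rects)

    def show_first_display_image(self):         # first display module
        self.display.show(build_first_display_image(self.images))

    def on_gaze(self, gaze):                    # sensing, judging and second display modules
        i = select_subarea_by_gaze(gaze, self.subarea_rects)
        if i is not None:                       # first (or second) signal obtained
            self.display.show(self.images[i])   # show the ith image in the area
```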
10. The electronic device of claim 9, wherein the first sensing module specifically comprises:
a first sensing submodule, configured to detect and obtain a gaze action of the first user's eyes.
11. The electronic device of claim 10, wherein the first judging module specifically comprises:
a first judging submodule, configured to, in response to the gaze action, obtain current gaze information corresponding to the gaze action, wherein the current gaze information comprises a current gaze position and a continuous gaze duration;
a second judging submodule, configured to determine that the gaze action is the operation in accordance with the predetermined rule when the continuous gaze duration is greater than or equal to a first preset duration;
and a third judging submodule, configured to obtain, based on the correspondence between the current gaze position and the N sub-areas of the first object, a first signal representing that the first operation corresponds to the ith sub-area in the first display image.
12. The electronic device of claim 9, wherein the first sensing module specifically comprises:
a second sensing submodule, configured to detect and obtain a magnification operation performed by the first user on the first display image.
13. The electronic device of claim 12, wherein the first judging module specifically comprises:
a fourth judging submodule, configured to, in response to the magnification operation, obtain magnification operation information, wherein the magnification operation information comprises a current magnification position and a first ratio of the display area of the magnified region to the area of the predetermined area;
a fifth judging submodule, configured to determine that the magnification operation is the operation in accordance with the predetermined rule when the first ratio is greater than a preset value;
and a sixth judging submodule, configured to obtain, based on the correspondence between the current magnification position and the N sub-areas of the first object, a first signal representing that the first operation corresponds to the ith sub-area in the first display image.
14. The electronic device according to any one of claims 9 to 13, wherein the ith image parameter is an ith acquisition parameter value under which the ith sub-region in the ith image is in an accurately acquired state, and the acquisition module specifically comprises:
a first acquisition submodule, configured to, for i taken sequentially from 1 to N, detect the first object and obtain the ith acquisition parameter value corresponding to the ith sub-region of the first object;
a second acquisition submodule, configured to acquire the ith image based on the ith acquisition parameter value;
and, when i is N, obtain the N images.
15. The electronic device of claim 9, wherein the first display module specifically comprises:
a first display submodule, configured to display, in the predetermined area, a composite image synthesized based on the N images, wherein the composite image is the first display image; or
a second display submodule, configured to display, in the predetermined area, any one of the N images, wherein that image is the first display image.
16. The electronic device of claim 9, wherein the electronic device further comprises:
a second sensing module, configured to detect and obtain a second operation performed by the first user on the ith image;
a second judging module, configured to obtain, when the second operation is determined to be an operation in accordance with the predetermined rule, a second signal representing that the second operation corresponds to a jth sub-area in the ith image, wherein j is a positive integer less than or equal to N;
and a third display module, configured to, in response to the second signal, acquire, from the N images, the jth image having the jth image parameter corresponding to the jth sub-area, and display the jth image in the predetermined area.
CN201410848640.8A 2014-12-29 2014-12-29 Information processing method and electronic equipment Active CN105812645B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410848640.8A CN105812645B (en) 2014-12-29 2014-12-29 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410848640.8A CN105812645B (en) 2014-12-29 2014-12-29 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105812645A CN105812645A (en) 2016-07-27
CN105812645B 2019-12-24

Family

ID=56420485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410848640.8A Active CN105812645B (en) 2014-12-29 2014-12-29 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105812645B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8606009B2 (en) * 2010-02-04 2013-12-10 Microsoft Corporation High dynamic range image generation and rendering
CN103314572B (en) * 2010-07-26 2016-08-10 新加坡科技研究局 Method and apparatus for image procossing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102104738A (en) * 2009-12-18 2011-06-22 三星电子株式会社 Multi-step exposed image acquisition method by electronic shutter and photographing apparatus using the same
CN102131051A (en) * 2010-12-28 2011-07-20 惠州Tcl移动通信有限公司 Image pick-up equipment and image acquisition method and device thereof
CN102685379A (en) * 2011-03-18 2012-09-19 卡西欧计算机株式会社 Image processing apparatus with function for specifying image quality, and method and storage medium
CN103002211A (en) * 2011-09-08 2013-03-27 奥林巴斯映像株式会社 Photographic device
CN103248822A (en) * 2013-03-29 2013-08-14 东莞宇龙通信科技有限公司 Focusing method of camera shooting terminal and camera shooting terminal

Also Published As

Publication number Publication date
CN105812645A (en) 2016-07-27

Similar Documents

Publication Publication Date Title
CN107409166B (en) Automatic generation of panning shots
JP6388673B2 (en) Mobile terminal and imaging method thereof
US20100111441A1 (en) Methods, components, arrangements, and computer program products for handling images
CN109348089A (en) Night scene image processing method, device, electronic equipment and storage medium
US9357127B2 (en) System for auto-HDR capture decision making
US20130021512A1 (en) Framing of Images in an Image Capture Device
KR20170134256A (en) Method and apparatus for correcting face shape
EP3110131B1 (en) Method for processing image and electronic apparatus therefor
CN106454079B (en) Image processing method and device and camera
KR20160045927A (en) Interactive screen viewing
CN111835982B (en) Image acquisition method, image acquisition device, electronic device, and storage medium
CN107690804B (en) Image processing method and user terminal
CN106254807B (en) Electronic device and method for extracting still image
CN111241872B (en) Video image shielding method and device
CN106598257A (en) Mobile terminal-based reading control method and apparatus
US20190130193A1 (en) Virtual Reality Causal Summary Content
US10789987B2 (en) Accessing a video segment
CN103543916A (en) Information processing method and electronic equipment
CN110971833B (en) Image processing method and device, electronic equipment and storage medium
US9214193B2 (en) Processing apparatus and method for determining and reproducing a number of images based on input path information
CN106851099B (en) A kind of method and mobile terminal of shooting
CN106851052B (en) Control method and electronic equipment
CN105812645B (en) Information processing method and electronic equipment
CN107105158B (en) Photographing method and mobile terminal
US20180205891A1 (en) Multi-camera dynamic imaging systems and methods of capturing dynamic images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant