CN111756991A - Photographing method based on wearable device and wearable device - Google Patents


Info

Publication number
CN111756991A
CN111756991A (application CN201910383679.XA; granted as CN111756991B)
Authority
CN
China
Prior art keywords
image
display screen
preview image
preview
wearable device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910383679.XA
Other languages
Chinese (zh)
Other versions
CN111756991B (en)
Inventor
张腾飞 (Zhang Tengfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN201910383679.XA
Publication of CN111756991A
Application granted
Publication of CN111756991B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Abstract

Embodiments of the invention disclose a photographing method based on a wearable device, and the wearable device itself. The method comprises the following steps: when a first shooting module captures a first preview image, displaying the first preview image in a first preview frame on a first display screen; detecting whether an incomplete face image is present in the first preview image; if so, rotating a second display screen until it faces the same direction as the first display screen; controlling the first shooting module and a second shooting module to capture images simultaneously, and combining the captures into a second preview image; and displaying the second preview image in a second preview frame formed by combining the first display screen and the second display screen. The embodiments can thereby avoid distortion such as deformation or blurring of the captured image and improve its quality.

Description

Photographing method based on wearable device and wearable device
Technical Field
The invention relates to the technical field of wearable devices, and in particular to a photographing method based on a wearable device and to such a wearable device.
Background
At present, when photographing with a wearable device, if a large area must be covered, for example in a group photo, the user generally moves the wearable device so that its shooting module sweeps across the scene, and the captured frames are stitched into a single photograph, thereby enlarging the shooting range. Although this moving-capture approach does enlarge the range, the wearable device tends to shake while moving, so the resulting photograph is prone to distortion such as deformation and blur, and its quality is low.
Disclosure of Invention
The embodiments of the invention disclose a photographing method based on a wearable device, and a wearable device, which can avoid distortion such as deformation or blurring of the captured image and thereby improve image quality.
A first aspect of the embodiments of the invention discloses a photographing method based on a wearable device, the wearable device comprising a first display screen provided with a first shooting module and a rotatable second display screen provided with a second shooting module, the method comprising the following steps:
when the first shooting module captures a first preview image, displaying the first preview image in a first preview frame on the first display screen;
detecting whether an incomplete face image is present in the first preview image;
if so, rotating the second display screen until it faces the same direction as the first display screen;
controlling the first shooting module and the second shooting module to capture images simultaneously, and combining the captures to obtain a second preview image; and
displaying the second preview image in a second preview frame formed by combining the first display screen and the second display screen.
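As a rough illustration only, the five steps above can be sketched as follows. The callables `camera1` and `camera2`, the `"half-face"` tag, and the screen-state dict are all invented stand-ins for device primitives; the patent defines no concrete API.

```python
# Hypothetical sketch of the claimed flow. Cameras are modelled as callables
# returning lists of detected tags; the second screen is a mutable state dict.

def shoot(camera1, camera2, screen2_state):
    preview1 = camera1()                      # step 1: capture first preview
    if "half-face" not in preview1:           # step 2: incomplete-face check
        return preview1                       # nothing to fix; keep preview
    screen2_state["facing"] = "front"         # step 3: rotate second screen
    frame1, frame2 = camera1(), camera2()     # step 4: simultaneous capture
    return frame1 + frame2                    # step 5: combined second preview
```

With a first capture of `["face", "half-face"]`, the screen state flips to `"front"` and the returned preview is the concatenation of both captures; with no incomplete face, the first preview is returned unchanged.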
As an optional implementation manner, in the first aspect of the embodiments of the present invention, after displaying the second preview image in the second preview frame formed by combining the first display screen and the second display screen, the method further includes:
identifying the second preview image to obtain image features of the second preview image;
judging whether target image features matched with preset dangerous object features exist in the image features;
and if the target image features are present, outputting an alarm prompt to warn the user to move away from the dangerous object.
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
if the target image features are absent, detecting whether a decoration instruction is received;
if the decoration instruction is received, acquiring a virtual ornament matched with the image features;
determining the coordinate position of the image features in the second preview image;
generating a sticker image according to the virtual ornament and the coordinate position;
synthesizing the second preview image and the sticker image to generate a third preview image; and
displaying the third preview image in the second preview frame.
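A toy sketch of the sticker steps, under heavy assumptions: images are modelled as dicts mapping (x, y) coordinates to values, and the ornament-matching table is entirely invented; the patent does not specify how ornaments are matched or composited.

```python
# Hypothetical table: which virtual ornament matches which image feature.
ORNAMENTS = {"face": "crown", "hand": "star"}

def make_sticker_image(features):
    """features: {feature_name: (x, y)} -> sticker layer {(x, y): ornament}."""
    return {pos: ORNAMENTS[name]
            for name, pos in features.items() if name in ORNAMENTS}

def composite(preview, sticker):
    """Overlay the sticker layer on the preview to form the third preview."""
    merged = dict(preview)
    merged.update(sticker)   # sticker entries win at their coordinates
    return merged
```

Features with no matching ornament are simply skipped, so an empty sticker layer leaves the preview unchanged after compositing.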
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
if the target image feature does not exist, acquiring the current position of the wearable device;
if the current position is matched with the position of a preset scenic spot, acquiring the tour information of the current position;
determining a target sight spot according to the image characteristics;
finding out the sight spot information of the target sight spot from the tour information;
and outputting the sight spot information.
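The scenic-spot lookup above can be illustrated with the following sketch. All data and names are invented for the example; in practice the tour information would be fetched for the matched scenic spot, and the target sight spot would be identified from recognized image features.

```python
# Invented tour information for illustration only.
TOUR_INFO = {"pagoda": "Seven-storey pagoda overlooking the lake.",
             "stone bridge": "Arched bridge at the east entrance."}

def sight_info(current_pos, scenic_positions, image_features):
    """Return the info text for the first feature naming a known sight spot."""
    if current_pos not in scenic_positions:   # not at a preset scenic spot
        return None
    target = next((f for f in image_features if f in TOUR_INFO), None)
    return TOUR_INFO.get(target)
```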
As an optional implementation manner, in the first aspect of the embodiments of the present invention, the step of controlling the first shooting module and the second shooting module to capture images simultaneously to obtain a combined second preview image includes:
controlling the first shooting module and the second shooting module to capture images simultaneously so as to obtain an original combined image generated by combination;
detecting light rays in the current environment to obtain light ray intensity information of the current environment;
determining a target filter according to the light intensity information;
rendering the original combined image with the target filter to generate a second preview image.
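The filter-selection step can be sketched as below. The lux thresholds and filter names are invented for illustration; the patent only states that a target filter is determined from the light intensity information and applied to the original combined image.

```python
def choose_filter(lux):
    """Map ambient light intensity (lux, assumed scale) to a filter name."""
    if lux < 50:
        return "night-brighten"
    if lux < 1000:
        return "indoor-warm"
    return "daylight-neutral"

def render(image, filter_name):
    """Stand-in for rendering: tag every pixel value with the applied filter."""
    return [(value, filter_name) for value in image]
```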
The second aspect of the embodiments of the present invention discloses a wearable device, the wearable device includes a first display screen provided with a first shooting module and a second display screen provided with a second shooting module, the second display screen can rotate, the wearable device includes:
the display unit is used for displaying a first preview image in a first preview frame on the first display screen when the first shooting module captures the first preview image;
the first detection unit is used for detecting whether an incomplete face image is present in the first preview image;
the rotating unit is used for controlling the second display screen to rotate until the orientations of the first display screen and the second display screen are consistent when the first detection unit detects that the first preview image has the incomplete face image;
the control unit is used for controlling the first shooting module and the second shooting module to capture images simultaneously so as to obtain a second preview image generated by combination;
the display unit is further configured to display the second preview image in a second preview frame generated by combining the first display screen and the second display screen.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the wearable device further includes:
the identification unit is used for identifying the second preview image after the second preview image is displayed in a second preview frame generated by combining the first display screen and the second display screen so as to obtain the image characteristics of the second preview image;
the judging unit is used for judging whether a target image feature matched with a preset dangerous object feature exists in the image features;
and the first output unit is used for outputting alarm prompt information to prompt a user to get away from a dangerous object when the judging unit judges that the target image characteristics exist.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the wearable device further includes:
the second detection unit is used for detecting whether a decoration instruction is received or not when the judgment unit judges that the target image characteristic does not exist;
the first acquisition unit is used for acquiring the virtual ornament matched with the image characteristics when the decoration instruction is received;
a first determining unit configured to determine a coordinate position of the image feature in the second preview image;
a generating unit configured to generate a sticker image from the virtual ornament and the coordinate position;
a synthesizing unit configured to synthesize the second preview image and the sticker image to generate a third preview image;
the display unit is further configured to display the third preview image in the second preview frame.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the wearable device further includes:
a second obtaining unit, configured to obtain a current location of the wearable device when the determining unit determines that the target image feature is not present;
the second obtaining unit is further configured to obtain the tour information of the current location when the current location matches a location of a preset scenic spot;
the second determining unit is used for determining a target sight spot according to the image characteristics;
the searching unit is used for searching the sight spot information of the target sight spot from the tour information;
and the second output unit is used for outputting the sight spot information.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the control unit includes:
the control subunit is used for controlling the first shooting module and the second shooting module to capture images simultaneously so as to obtain an original combined image generated by combination;
the detection subunit is used for detecting the light in the current environment to obtain the light intensity information of the current environment;
the determining subunit is used for determining a target filter according to the light intensity information;
a rendering subunit, configured to render the original combined image using the target filter to generate a second preview image.
A third aspect of an embodiment of the present invention discloses a wearable device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the photographing method based on the wearable device disclosed by the first aspect of the embodiment of the invention.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program enables a computer to execute the photographing method based on a wearable device disclosed in the first aspect of the embodiments of the present invention.
A fifth aspect of embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the embodiments of the present invention discloses an application publishing platform configured to publish a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects: wearable equipment is including being equipped with the first display screen of first shooting module and being equipped with the second display screen of second shooting module, and the rotation can take place for the second display screen. When the first shooting module captures a first preview image, displaying the first preview image in a first preview frame on a first display screen; detecting whether the first preview image has a defective face image or not; if so, controlling the second display screen to rotate until the orientation of the first display screen is consistent with that of the second display screen; controlling the first shooting module and the second shooting module to capture images simultaneously so as to obtain a second preview image generated by combination; and displaying a second preview image in a second preview frame generated by combining the first display screen and the second display screen. Therefore, by implementing the embodiment of the invention, when the incomplete face image exists in the first preview image captured by the first shooting module, the wearable device expands the shooting range by utilizing the second shooting module on the second display screen, does not need to carry out mobile shooting, can avoid distortion phenomena such as deformation or blur of the shot image, and further improves the quality of the shot image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a photographing method based on a wearable device according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of another photographing method based on a wearable device according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another photographing method based on a wearable device according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a wearable device disclosed in the embodiment of the invention;
FIG. 5 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present invention;
FIG. 6 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present invention;
fig. 7 is a schematic structural diagram of another wearable device disclosed in the embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", "third" and "fourth" etc. in the description and claims of the present invention are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a photographing method based on wearable equipment and the wearable equipment, which can avoid distortion phenomena such as deformation or blurring of a photographed image and improve the quality of the photographed image. The following detailed description is made with reference to the accompanying drawings.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a photographing method based on a wearable device according to an embodiment of the present invention. As shown in fig. 1, the wearable device-based photographing method may include the following steps.
101. When the first shooting module captures a first preview image, the wearable device displays the first preview image in a first preview frame on the first display screen.
In the embodiment of the invention, the wearable device comprises a first display screen provided with the first shooting module and a second display screen provided with the second shooting module, and the second display screen can rotate. In addition, the wearable device may include a portable device such as a smart watch and a smart bracelet, which is not limited in the embodiments of the present invention.
102. The wearable device detects whether an incomplete face image is present in the first preview image; if yes, steps 103 to 105 are executed; if not, the flow ends.
Here, an incomplete face image is a face that is only partially within the frame: because the shooting range of the first shooting module is limited, when several people pose together the faces of people at the edge may not fully enter the shooting range, producing an incomplete face image.
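One simple way to realize this check, offered purely as an illustrative assumption since the patent does not specify a detection algorithm, is to treat a face whose detected bounding box touches the frame border as clipped by the edge of the shooting range:

```python
def is_incomplete_face(box, frame_w, frame_h, margin=2):
    """True if a face box (x, y, w, h) is clipped by the frame border."""
    x, y, w, h = box
    return (x <= margin or y <= margin or
            x + w >= frame_w - margin or y + h >= frame_h - margin)

def has_incomplete_face(face_boxes, frame_w, frame_h):
    """True if any detected face box appears cut off at the frame edge."""
    return any(is_incomplete_face(b, frame_w, frame_h) for b in face_boxes)
```

The `margin` tolerance and the (x, y, w, h) box format are illustrative choices; a real detector (e.g. a cascade or neural face detector) would supply the boxes.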
103. The wearable device controls the second display screen to rotate until it faces the same direction as the first display screen.
104. The wearable device controls the first shooting module and the second shooting module to capture images simultaneously so as to obtain a second preview image generated by combination.
In the embodiment of the invention, it can be understood that the shooting ranges of the first shooting module and the second shooting module are different but may overlap. For example, if the image captured by the first shooting module includes A and B, and the image captured by the second shooting module includes B and C, then the combined second preview image includes A, B and C.
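The A/B + B/C example can be modelled as deduplicating the shared overlap when concatenating two captures. This is a toy sketch over lists of labels; real devices would register and blend actual pixel data rather than compare labels.

```python
def combine(left, right):
    """Concatenate two captures, merging the longest shared overlap."""
    for size in range(min(len(left), len(right)), 0, -1):
        if left[-size:] == right[:size]:     # suffix of left == prefix of right
            return left + right[size:]
    return left + right                      # no overlap: plain concatenation
```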
105. The wearable device displays a second preview image in a second preview frame generated by the combination of the first display screen and the second display screen.
For example, suppose the first preview image captured by the first shooting module contains the complete face of a first person, the complete face of a second person, and only the left half of a third person's face; the half face is an incomplete face image. This triggers the second display screen to rotate until it faces the same direction as the first display screen, the two screens combine to form the second preview frame, and the two shooting modules capture images simultaneously. With the enlarged shooting range, the combined second preview image can contain the complete faces of all three people.
As an optional implementation manner, after step 105, the following steps may be further included:
the wearable device receives a collaborative preview image sent by the collaborative photographing device; the wearable device judges whether the collaborative preview image and the second preview image meet a preset association relationship, if so, the wearable device sends a synchronous completion signal to the collaborative photographing device to trigger the collaborative photographing device to complete photographing; if not, adjusting the first shooting module of the wearable device and the shooting direction of the second shooting to enable the generated new second preview image and the collaborative preview image to meet the preset association relationship, and executing the step that the wearable device sends a synchronous completion signal to the collaborative shooting device to trigger the collaborative shooting device to complete the shooting while triggering the wearable device to complete the shooting.
It can be understood that the above-mentioned collaborative shooting device may be a device such as a smart watch, a smart phone, or a tablet computer, which has a shooting function, and the above-mentioned preset association relationship may be set manually according to a shooting experience.
As an optional implementation manner, before step 101, the wearable device-based photographing method may further include the following steps:
the wearable device obtains character style setting; after step 105, the wearable device identifies a face image in the second preview image to detect an expressive feature corresponding to the face image; and when detecting that the expression features corresponding to the face image are matched with the character style setting, triggering the wearable equipment to generate a shutter event so as to finish photographing.
Therefore, by implementing the embodiment, the wearable device intelligently performs intelligent snapshot according to the style setting of people, and the shooting function is more intelligent.
Therefore, by implementing the method described in fig. 1, the wearable device enlarges the shooting range by using the second shooting module on the second display screen and does not need to shoot while moving, which avoids distortion such as deformation or blurring of the captured image and improves its quality. In addition, the wearable device displays the second preview image in the enlarged second preview frame, which improves the clarity of the preview and enhances the user's photographing experience.
Example two
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another photographing method based on a wearable device according to an embodiment of the present invention. As shown in fig. 2, the wearable device-based photographing method may include the following steps.
201. When the first shooting module captures a first preview image, the wearable device displays the first preview image in a first preview frame on the first display screen.
In the embodiment of the invention, the wearable device comprises a first display screen provided with the first shooting module and a second display screen provided with the second shooting module, and the second display screen can rotate.
202. The wearable device detects whether an incomplete face image is present in the first preview image; if yes, steps 203 to 212 are executed; if not, the flow ends.
203. The wearable device controls the second display screen to rotate until it faces the same direction as the first display screen.
204. The wearable device controls the first shooting module and the second shooting module to capture images simultaneously so as to obtain a second preview image generated by combination.
205. The wearable device displays a second preview image in a second preview frame generated by the combination of the first display screen and the second display screen.
206. The wearable device identifies the second preview image to obtain image features of the second preview image.
207. The wearable device judges whether target image features matched with preset dangerous object features exist in the image features or not; if yes, go to step 208; if not, go to step 209-step 212.
As an optional implementation manner, if the wearable device determines that there is no target image feature matching a preset dangerous object feature in the image features, the photographing method based on the wearable device may further include the following steps:
the wearable device acquires the current position of the wearable device; if the current position is matched with the position of a preset scenic spot, the wearable device acquires the tour information of the current position; the wearable device determines a target sight spot according to the image characteristics; the wearable device searches sight spot information of the target sight spot from the tour information; the wearable device outputs the sight information.
It can be understood that the sight spot information may be in audio, video, or text format; correspondingly, the wearable device may output it by playing it through an audio player, playing it through a video player, or displaying it on screen. By implementing this embodiment, when the wearable device is located in a preset scenic spot, it determines from the image features which specific sight spot the user is at, i.e., the target sight spot, and provides the corresponding sight spot information to the user, improving the user experience.
208. The wearable device outputs an alarm prompt to warn the user to move away from the dangerous object.
By performing steps 206 to 208, the wearable device detects dangerous objects by recognizing the second preview image and issues an alarm prompt when one is detected.
As an optional implementation manner, the preset dangerous object characteristics may include a preset sound-sensitive dangerous object characteristic, a preset light-sensitive dangerous object characteristic and a preset general dangerous object characteristic, and step 208 may include:
if the target image characteristics are matched with the preset sound-sensitive dangerous object characteristics, the wearable equipment outputs alarm prompt information in a text form and/or a form of turning on an alarm lamp to prompt a user to get away from the dangerous object;
if the target image characteristics are matched with the preset photosensitive dangerous object characteristics, the wearable device outputs alarm prompt information in an audio form to prompt a user to get away from the dangerous object, and a light emitting module of the wearable device is turned off;
if the target image characteristics are matched with the preset common dangerous object characteristics, the wearable device outputs alarm prompt information in an audio mode, a text mode and/or a mode of turning on an alarm lamp to prompt the user to get away from the dangerous object.
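The three cases above amount to selecting the output channels for the alarm by danger category. The category names and channel labels below are invented for illustration:

```python
def alarm_modes(category):
    """Select output channels for the alarm prompt by danger category."""
    if category == "sound-sensitive":
        return {"text", "lamp"}        # stay silent near the object
    if category == "light-sensitive":
        return {"audio"}               # audio only; the light module is off
    return {"audio", "text", "lamp"}   # ordinary dangerous object
```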
By implementing this embodiment, the wearable device changes the output mode of the prompt according to the type of dangerous object detected, which avoids attracting the object's attention and offers higher practicability and safety.
209. The wearable device detects whether a decoration instruction is received; if yes, go to step 210-step 212; if not, the flow is ended.
In an embodiment of the present invention, the decoration instruction may be used to instruct to add a decoration to the second preview image.
210. The wearable device obtains a virtual ornament that matches the image feature and determines a coordinate location of the image feature in the second preview image.
211. The wearable device generates a sticker image based on the virtual decoration and the coordinate position.
In the embodiment of the present invention, it can be understood that the size of the sticker image may be the same as the size of the second preview image, and virtual decorations are distributed in the sticker image at positions corresponding to the coordinate positions.
212. The wearable device synthesizes the second preview image and the sticker image to generate a third preview image, and displays the third preview image in the second preview frame.
In the embodiment of the present invention, it can be understood that, in the third preview image, each image feature is correspondingly decorated with a virtual decoration. By implementing steps 209 to 212, the picture can be beautified with virtual decorations, further improving the user's photographing experience.
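Steps 210 to 212 can be sketched with a toy pixel grid. A real implementation would use an image library and actual decoration bitmaps; the single-cell marker "decoration" below is a stand-in.

```python
def make_sticker_layer(size, placements):
    """Blank layer the same size as the second preview image, with a
    decoration marker placed at each feature's coordinate position."""
    width, height = size
    layer = [[None] * width for _ in range(height)]
    for (x, y), decoration in placements:
        layer[y][x] = decoration
    return layer

def composite(preview, layer):
    """Overlay the sticker layer on the preview: decorated positions
    take the decoration, all other positions keep the preview pixel."""
    return [
        [cell if cell is not None else preview[y][x]
         for x, cell in enumerate(row)]
        for y, row in enumerate(layer)
    ]

preview = [["p"] * 4 for _ in range(3)]                          # 4x3 preview
layer = make_sticker_layer((4, 3), [((1, 0), "D"), ((3, 2), "D")])
third = composite(preview, layer)                                # "third preview image"
```

With a real image library the overlay would typically be an alpha composite of a transparent sticker layer onto the preview, but the structure — same-size layer, decorations at feature coordinates, then a merge — is the same.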
Therefore, by implementing the method described in fig. 2, the wearable device enlarges the shooting range by using the second shooting module on the second display screen, without having to move the device while shooting, which avoids distortion such as deformation or blur in the captured image and thus improves its quality. In addition, the wearable device displays the second preview image in the enlarged second preview frame, which improves the clarity of the preview image and thus the user's photographing experience. In addition, the wearable device detects dangerous objects by recognizing the second preview image and can issue alarm prompt information when a dangerous object is detected. In addition, the wearable device can beautify the picture with virtual decorations, further improving the user's photographing experience.
Example Three
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating another photographing method based on a wearable device according to an embodiment of the present invention. As shown in fig. 3, the wearable device-based photographing method may include the following steps.
301. When the first shooting module captures a first preview image, the wearable device displays the first preview image in a first preview frame on the first display screen.
In the embodiment of the invention, the wearable device comprises a first display screen provided with the first shooting module and a second display screen provided with the second shooting module, and the second display screen can rotate.
302. The wearable device detects whether the first preview image contains an incomplete face image; if yes, steps 303 to 308 are executed; if not, the flow ends.
303. The wearable device controls the second display screen to rotate until the first display screen and the second display screen face the same direction.
304. The wearable device controls the first shooting module and the second shooting module to capture images simultaneously, so as to obtain an original combined image generated by the combination.
305. The wearable device detects light in the current environment to obtain light intensity information of the current environment.
306. The wearable device determines the target filter according to the light intensity information.
307. The wearable device renders the original combined image with the target filter to generate a second preview image.
In the embodiment of the invention, it can be understood that the target filter is matched with the light of the current environment, and the original combined image is rendered by using the target filter, so that the second preview image with better image effect can be generated.
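One way to read steps 305 to 307 is as a mapping from measured ambient light intensity to a filter preset. The thresholds and filter names below are invented for illustration, not taken from the patent.

```python
def pick_filter(lux):
    """Choose a target filter preset from ambient light intensity in lux.
    Thresholds are illustrative assumptions, not the patent's values."""
    if lux < 50:
        return "night_boost"        # dim scene: brighten and denoise
    if lux < 1000:
        return "indoor_warm"        # typical indoor lighting
    return "daylight_natural"       # bright outdoor light

def render(original_combined_image, lux):
    """Tag the image with the chosen filter (stand-in for real rendering)."""
    return {"image": original_combined_image, "filter": pick_filter(lux)}
```

The returned "second preview image" here is just the original image tagged with the chosen preset; a real pipeline would apply the filter's tone curve or color transform to the pixels.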
308. The wearable device displays a second preview image in a second preview frame generated by the combination of the first display screen and the second display screen.
Therefore, by implementing the method described in fig. 3, the wearable device enlarges the shooting range by using the second shooting module on the second display screen, without having to move the device while shooting, which avoids distortion such as deformation or blur in the captured image and thus improves its quality. In addition, the wearable device displays the second preview image in the enlarged second preview frame, which improves the clarity of the preview image and thus the user's photographing experience. In addition, the wearable device adds a filter effect to the captured image according to the light of the current environment, beautifying the image intelligently.
Example Four
Referring to fig. 4, fig. 4 is a schematic structural diagram of a wearable device according to an embodiment of the present invention. The wearable device includes a first display screen provided with a first shooting module and a second display screen provided with a second shooting module, and the second display screen can rotate. As shown in fig. 4, the wearable device may include:
the display unit 401 is configured to display a first preview image in a first preview frame on a first display screen when the first shooting module captures the first preview image;
a first detecting unit 402, configured to detect whether the first preview image has an incomplete face image;
a rotating unit 403, configured to control the second display screen to rotate until the first display screen and the second display screen are oriented in the same direction when the first detecting unit detects that the first preview image has the incomplete face image;
a control unit 404, configured to control the first shooting module and the second shooting module to capture images simultaneously, so as to obtain a second preview image generated by combination;
the display unit 401 is further configured to display a second preview image in a second preview frame generated by combining the first display screen and the second display screen.
As an optional implementation, the display unit 401 may be further configured to: after displaying the second preview image in the second preview frame generated by combining the first display screen and the second display screen, receive a collaborative preview image sent by a collaborative photographing device; judge whether the collaborative preview image and the second preview image satisfy a preset association relationship; if they do, send a synchronization-complete signal to the collaborative photographing device so that the collaborative photographing device completes photographing at the same time the wearable device does; if they do not, adjust the shooting directions of the first shooting module and the second shooting module of the wearable device so that the newly generated second preview image and the collaborative preview image satisfy the preset association relationship, and then send the synchronization-complete signal to the collaborative photographing device so that both devices complete photographing at the same time.
It can be understood that the preset association relationship can be set manually based on photographing experience. In this embodiment, photographing simultaneously with the collaborative photographing device satisfies more of the user's photographing needs, for example obtaining a panoramic picture covering a larger shooting range.
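As a sketch, the "preset association relationship" could be modeled as a minimum overlap between the feature sets recognized in the two previews, as adjacent views of one panorama would share some features. This model and its threshold are assumptions, since the patent leaves the relationship user-defined.

```python
def satisfies_association(local_features, remote_features, min_overlap=0.3):
    """True when the two previews share enough recognized features to be
    considered associated (an assumed model of the preset relationship)."""
    local, remote = set(local_features), set(remote_features)
    if not local or not remote:
        return False
    shared = len(local & remote)
    return shared / min(len(local), len(remote)) >= min_overlap

def sync_step(local_features, remote_features):
    """One iteration of the embodiment's loop: either signal that
    synchronization is complete or that the shooting direction must change."""
    if satisfies_association(local_features, remote_features):
        return "sync_complete"
    return "adjust_direction"
```

The device would repeat `sync_step` after each direction adjustment until it returns `"sync_complete"`, then send the synchronization-complete signal so both devices fire their shutters together.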
As an optional implementation, the display unit 401 may be further configured to: acquire a character style setting before displaying the second preview image in the second preview frame generated by combining the first display screen and the second display screen; after displaying the second preview image in that second preview frame, identify the face image in the second preview image so as to detect the expression features corresponding to the face image; and when the detected expression features match the character style setting, trigger the wearable device to generate a shutter event to complete photographing.
Therefore, by implementing this embodiment, the wearable device takes snapshots automatically according to the character style setting, making the photographing function more intelligent.
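The expression-matched snapshot above can be sketched as a scan over successive preview frames; the expression labels and the simple set-membership match are illustrative assumptions standing in for a real expression classifier.

```python
def first_snap_frame(expression_stream, style_setting):
    """Scan each preview frame's detected expression labels and return the
    index of the first frame matching the configured character style,
    i.e. the moment the shutter event would fire; None if no frame matches."""
    for index, expressions in enumerate(expression_stream):
        if style_setting in expressions:
            return index
    return None
```

In a live pipeline the "stream" would be per-frame classifier output, and a match would immediately trigger the shutter rather than return an index.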
It can be seen that, by implementing the wearable device described in fig. 4, the wearable device enlarges the shooting range by using the second shooting module on the second display screen, without having to move the device while shooting, which avoids distortion such as deformation or blur in the captured image and thus improves its quality. In addition, the wearable device displays the second preview image in the enlarged second preview frame, which improves the clarity of the preview image and thus the user's photographing experience.
Example Five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention. Wherein, the wearable device shown in fig. 5 is optimized by the wearable device shown in fig. 4. Compared to the wearable device shown in fig. 4, the wearable device shown in fig. 5 may further include:
an identifying unit 405, configured to identify the second preview image after displaying the second preview image in a second preview frame generated by combining the first display screen and the second display screen, so as to obtain an image feature of the second preview image;
a judging unit 406, configured to judge whether a target image feature matching a preset dangerous object feature exists in the image features;
a first output unit 407, configured to output alarm prompting information to prompt the user to get away from the dangerous object when the determination unit 406 determines that the target image feature exists.
As an optional implementation manner, the preset dangerous object characteristics may include a preset sound-sensitive dangerous object characteristic, a preset light-sensitive dangerous object characteristic, and a preset general dangerous object characteristic, and the manner for the first output unit 407 to output the alarm prompting message to prompt the user to get away from the dangerous object may specifically be:
a first output unit 407, configured to output an alarm prompt message in a form of text and/or turning on an alarm lamp to prompt a user to get away from a dangerous object when the target image feature matches a preset sound-sensitive dangerous object feature;
when the target image characteristics are matched with the preset photosensitive dangerous object characteristics, outputting alarm prompt information in an audio form to prompt a user to get away from a dangerous object, and turning off a light emitting module of the wearable device;
and when the target image characteristics are matched with the preset common dangerous object characteristics, outputting alarm prompt information in an audio form, a text form and/or a form of turning on an alarm lamp to prompt the user to get away from the dangerous object.
By implementing the embodiment, the wearable device can change the mode of outputting the prompt information according to the type of the detected dangerous object, can avoid attracting the attention of the dangerous object, and has higher practicability and safety.
A second detecting unit 408 configured to detect whether a decoration instruction is received when the judging unit 406 judges that the target image feature does not exist;
a first obtaining unit 409, configured to obtain a virtual decoration matched with the image feature when the decoration instruction is received;
a first determining unit 410, configured to determine a coordinate position of the image feature in the second preview image;
a generating unit 411 for generating a sticker image from the virtual decoration and the coordinate position;
a synthesizing unit 412 for synthesizing the second preview image and the sticker image to generate a third preview image;
the display unit 401 is further configured to display the third preview image in the second preview frame.
As an alternative embodiment, the wearable device shown in fig. 5 may further include:
a second obtaining unit 413, configured to obtain the current location of the wearable device when the judging unit 406 judges that the target image feature is not present;
the second obtaining unit 413 is further configured to obtain the tour information of the current location when the current location matches the location of a preset scenic spot;
a second determining unit 414, configured to determine a target sight point according to the image feature;
a searching unit 415, configured to search the tour information for finding the sight spot information of the target sight spot;
and a second output unit 416, configured to output the attraction information.
It can be understood that the sight-spot information may be in audio, video, or text format; correspondingly, the wearable device may output it by playing it through an audio player, playing it through a video player, or displaying it on screen. It can be seen that, by implementing this embodiment, when the wearable device is located in a preset scenic area, it determines from the image features which specific sight spot it is at, i.e., the target sight spot, and then provides the user with that spot's information, improving the user experience.
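The lookup from image features to a target sight spot can be sketched as below. The tour-information schema — each spot listing the visual features that identify it — is an assumption for this sketch; the patent does not specify a data format.

```python
def find_sight_spot(tour_info, image_features):
    """Match recognized image features against each sight spot's
    identifying features (assumed schema) and return its info record."""
    features = set(image_features)
    for spot_name, record in tour_info.items():
        if features & set(record["identifying_features"]):
            return spot_name, record["info"]
    return None, None  # current view matches no listed spot

# Hypothetical tour information for a preset scenic area.
tour_info = {
    "Stone Bridge": {"identifying_features": ["arch", "river"],
                     "info": "Built in 1820; single-arch stone bridge."},
    "Bell Tower":   {"identifying_features": ["bell", "tower"],
                     "info": "Hourly chimes; observation deck on top."},
}
```

The returned info string would then be handed to the audio player, video player, or display, depending on its format.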
It can be seen that, by implementing the wearable device described in fig. 5, the wearable device enlarges the shooting range by using the second shooting module on the second display screen, without having to move the device while shooting, which avoids distortion such as deformation or blur in the captured image and thus improves its quality. In addition, the wearable device displays the second preview image in the enlarged second preview frame, which improves the clarity of the preview image and thus the user's photographing experience. In addition, the wearable device detects dangerous objects by recognizing the second preview image and can issue alarm prompt information when a dangerous object is detected. In addition, the wearable device can beautify the picture with virtual decorations, further improving the user's photographing experience.
Example Six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention. The wearable device shown in fig. 6 is optimized by the wearable device shown in fig. 5. In comparison with the wearable device shown in fig. 5, in the wearable device shown in fig. 6, the control unit 404 may include:
the control subunit 4041 is configured to control the first shooting module and the second shooting module to capture images simultaneously, so as to obtain an original combined image generated by combination;
the detecting subunit 4042 is configured to detect light in the current environment to obtain light intensity information of the current environment;
a determining subunit 4043, configured to determine a target filter according to the light intensity information;
a rendering subunit 4044, configured to render the original combined image using the target filter to generate a second preview image.
It can be seen that, by implementing the wearable device described in fig. 6, the wearable device enlarges the shooting range by using the second shooting module on the second display screen, without having to move the device while shooting, which avoids distortion such as deformation or blur in the captured image and thus improves its quality. In addition, the wearable device displays the second preview image in the enlarged second preview frame, which improves the clarity of the preview image and thus the user's photographing experience. In addition, the wearable device adds a filter effect to the captured image according to the light of the current environment, beautifying the image intelligently.
Example Seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention. As shown in fig. 7, the wearable device may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute any one of the photographing methods based on the wearable device in fig. 1 to 3.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute any one of the photographing methods based on wearable equipment in figures 1-3.
Embodiments of the present invention also disclose a computer program product, wherein, when the computer program product is run on a computer, the computer is caused to execute part or all of the steps of the method as in the above method embodiments.
The embodiment of the present invention also discloses an application publishing platform, which is used for publishing a computer program product, wherein when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be completed by a program instructing related hardware, and the program may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other medium that can be used to carry or store data and that can be read by a computer.
The photographing method based on the wearable device and the wearable device disclosed by the embodiment of the invention are described in detail, a specific example is applied in the description to explain the principle and the implementation of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A photographing method based on a wearable device, characterized in that the wearable device comprises a first display screen provided with a first shooting module and a second display screen provided with a second shooting module, the second display screen can rotate, and the method comprises the following steps:
when the first shooting module captures a first preview image, displaying the first preview image in a first preview frame on the first display screen;
detecting whether the first preview image has an incomplete face image;
if so, controlling the second display screen to rotate until the orientation of the first display screen is consistent with that of the second display screen;
controlling the first shooting module and the second shooting module to capture images simultaneously so as to obtain a second preview image generated by combination;
and displaying the second preview image in a second preview frame generated by combining the first display screen and the second display screen.
2. The method of claim 1, wherein after displaying the second preview image in a second preview frame generated by the combination of the first display screen and the second display screen, the method further comprises:
identifying the second preview image to obtain image features of the second preview image;
judging whether target image features matched with preset dangerous object features exist in the image features;
and if the target image characteristics exist, outputting alarm prompt information to prompt the user to get away from dangerous objects.
3. The method of claim 2, further comprising:
if the target image characteristics do not exist, detecting whether a decoration instruction is received;
if the decoration instruction is received, acquiring a virtual decoration matched with the image characteristics;
determining a coordinate position of the image feature in the second preview image;
generating a sticker image according to the virtual ornament and the coordinate position;
compositing the second preview image and the sticker image to generate a third preview image;
displaying the third preview image in the second preview frame.
4. The method of claim 2, further comprising:
if the target image feature does not exist, acquiring the current position of the wearable device;
if the current position is matched with the position of a preset scenic spot, acquiring the tour information of the current position;
determining a target sight spot according to the image characteristics;
finding out the sight spot information of the target sight spot from the tour information;
and outputting the sight spot information.
5. The method according to any one of claims 1 to 4, wherein the controlling the first photographing module and the second photographing module to perform image capturing simultaneously to obtain a second preview image generated by combination comprises:
controlling the first shooting module and the second shooting module to capture images simultaneously so as to obtain an original combined image generated by combination;
detecting light rays in the current environment to obtain light ray intensity information of the current environment;
determining a target filter according to the light intensity information;
rendering the original combined image with the target filter to generate a second preview image.
6. A wearable device, characterized in that the wearable device comprises a first display screen provided with a first shooting module and a second display screen provided with a second shooting module, the second display screen can rotate, and the wearable device comprises:
the display unit is used for displaying a first preview image in a first preview frame on the first display screen when the first shooting module captures the first preview image;
the first detection unit is used for detecting whether the first preview image has an incomplete face image;
the rotating unit is used for controlling the second display screen to rotate until the orientations of the first display screen and the second display screen are consistent when the first detection unit detects that the first preview image has the incomplete face image;
the control unit is used for controlling the first shooting module and the second shooting module to capture images simultaneously so as to obtain a second preview image generated by combination;
the display unit is further configured to display the second preview image in a second preview frame generated by combining the first display screen and the second display screen.
7. The wearable device of claim 6, further comprising:
the identification unit is used for identifying the second preview image after the second preview image is displayed in a second preview frame generated by combining the first display screen and the second display screen so as to obtain the image characteristics of the second preview image;
the judging unit is used for judging whether a target image feature matched with a preset dangerous object feature exists in the image features;
and the first output unit is used for outputting alarm prompt information to prompt a user to get away from a dangerous object when the judging unit judges that the target image characteristics exist.
8. The wearable device of claim 7, further comprising:
the second detection unit is used for detecting whether a decoration instruction is received or not when the judgment unit judges that the target image characteristic does not exist;
the first acquisition unit is used for acquiring the virtual ornament matched with the image characteristics when the decoration instruction is received;
a first determining unit configured to determine a coordinate position of the image feature in the second preview image;
a generating unit configured to generate a sticker image from the virtual ornament and the coordinate position;
a synthesizing unit configured to synthesize the second preview image and the sticker image to generate a third preview image;
the display unit is further configured to display the third preview image in the second preview frame.
9. The wearable device of claim 7, further comprising:
a second obtaining unit, configured to obtain a current location of the wearable device when the determining unit determines that the target image feature is not present;
the second obtaining unit is further configured to obtain the tour information of the current location when the current location matches a location of a preset scenic spot;
the second determining unit is used for determining a target sight spot according to the image characteristics;
the searching unit is used for searching the sight spot information of the target sight spot from the tour information;
and the second output unit is used for outputting the sight spot information.
10. The wearable device according to any of claims 6 to 9, wherein the control unit comprises:
the control subunit is used for controlling the first shooting module and the second shooting module to capture images simultaneously so as to obtain an original combined image generated by combination;
the detection subunit is used for detecting the light in the current environment to obtain the light intensity information of the current environment;
the determining subunit is used for determining a target filter according to the light intensity information;
a rendering subunit, configured to render the original combined image using the target filter to generate a second preview image.
CN201910383679.XA 2019-05-09 2019-05-09 Photographing method based on wearable device and wearable device Active CN111756991B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910383679.XA CN111756991B (en) 2019-05-09 2019-05-09 Photographing method based on wearable device and wearable device


Publications (2)

Publication Number Publication Date
CN111756991A true CN111756991A (en) 2020-10-09
CN111756991B CN111756991B (en) 2021-11-05

Family

ID=72672704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910383679.XA Active CN111756991B (en) 2019-05-09 2019-05-09 Photographing method based on wearable device and wearable device

Country Status (1)

Country Link
CN (1) CN111756991B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106375660A (en) * 2016-09-13 2017-02-01 乐视控股(北京)有限公司 Photographic processing method and device
CN107835365A (en) * 2017-11-03 2018-03-23 上海爱优威软件开发有限公司 Auxiliary shooting method and system
CN108449457A (en) * 2018-04-12 2018-08-24 珠海格力电器股份有限公司 A kind of display device and mobile phone
CN109040595A (en) * 2018-08-27 2018-12-18 百度在线网络技术(北京)有限公司 History panorama processing method, device, equipment and storage medium based on AR
CN109120801A (en) * 2018-10-30 2019-01-01 Oppo(重庆)智能科技有限公司 A kind of method, device and mobile terminal of dangerous goods detection
CN109246360A (en) * 2018-11-23 2019-01-18 维沃移动通信(杭州)有限公司 A kind of reminding method and mobile terminal


Also Published As

Publication number Publication date
CN111756991B (en) 2021-11-05

Similar Documents

Publication Publication Date Title
US8582918B2 (en) Imaging device, image composition and display device, and image composition method
CN106385591B (en) Video processing method and video processing device
US9591364B2 (en) Image processing apparatus, image processing method, and program
WO2022116604A1 (en) Image captured image processing method and electronic device
CN104853091B (en) A kind of method taken pictures and mobile terminal
CN109040474B (en) Photo display method, device, terminal and storage medium
TWI779343B (en) Method of a state recognition, apparatus thereof, electronic device and computer readable storage medium
CN104159040B (en) Image pickup method and filming apparatus
WO2017114048A1 (en) Mobile terminal and method for identifying contact
CN107395957B (en) Photographing method and device, storage medium and electronic equipment
CN107623819B (en) A kind of method taken pictures and mobile terminal and related media production
CN108090491B (en) Video recording method, device and computer readable storage medium
CN114155322A (en) Scene picture display control method and device and computer storage medium
WO2020093798A1 (en) Method and apparatus for displaying target image, terminal, and storage medium
KR20130112578A (en) Appratus and method for providing augmented reality information based on user
KR101672691B1 (en) Method and apparatus for generating emoticon in social network service platform
CN113709545A (en) Video processing method and device, computer equipment and storage medium
CN115225923A (en) Gift special effect rendering method and device, electronic equipment and live broadcast server
CN112189330A (en) Shooting control method, terminal, holder, system and storage medium
WO2016011861A1 (en) Method and photographing terminal for photographing object motion trajectory
CN104869283B (en) A kind of image pickup method and electronic equipment
CN111756991B (en) Photographing method based on wearable device and wearable device
CN107566761B (en) Image processing method and electronic equipment
CN114625468B (en) Display method and device of augmented reality picture, computer equipment and storage medium
JP2012244226A (en) Imaging device, image composition method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant