CN108632543B - Image display method, image display device, storage medium and electronic equipment - Google Patents
- Publication number
- CN108632543B CN108632543B CN201810254602.8A CN201810254602A CN108632543B CN 108632543 B CN108632543 B CN 108632543B CN 201810254602 A CN201810254602 A CN 201810254602A CN 108632543 B CN108632543 B CN 108632543B
- Authority
- CN
- China
- Prior art keywords
- image
- display
- shooting
- electronic equipment
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/752—Contour matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The application discloses an image display method, an image display apparatus, a storage medium and an electronic device. The image display method is applied to a first electronic device and comprises the following steps: starting an image shooting function and acquiring a currently captured first image; receiving, during capture of the first image, a second image captured by a second electronic device; determining a target subject in the second image; and generating the target subject on the first image to obtain a composite image, and displaying the composite image in a preview frame. A group photo of users in different places is thus obtained without post-production matting; the method is simple and highly flexible.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image display method and apparatus, a storage medium, and an electronic device.
Background
With the development of terminal technology, the functions a terminal can support have become increasingly powerful. For example, a terminal is equipped with a camera and can therefore support a photographing function.
In many scenarios, a user takes pictures with the terminal's shooting function. For example, when travelling or meeting a friend, the user can record the scene through the terminal's shooting function; the terminal stores the captured image in an album, so that when the user wants to recall the good time, the image can be viewed from the album. However, there is no good way for users in different places to take a group photo together; a common approach is to photograph the two people separately and then merge them into one image in post-production by matting.
Disclosure of Invention
The embodiments of the present application provide an image display method, an image display apparatus, a storage medium and an electronic device, which enable users in different places to complete a group photo with a good shooting effect.
The embodiment of the application provides an image display method, which is applied to a first electronic device and comprises the following steps:
starting an image shooting function and acquiring a currently captured first image;
receiving, during capture of the first image, a second image captured by a second electronic device;
determining a target subject in the second image;
and generating the target subject on the first image to obtain a composite image, and displaying the composite image in a preview frame.
An embodiment of the present application further provides an image display apparatus, which is applied to a first electronic device and includes:
a starting module, configured to start an image shooting function and acquire a currently captured first image;
a receiving module, configured to receive a second image captured by a second electronic device during capture of the first image;
a determining module, configured to determine a target subject in the second image;
and a generating module, configured to generate the target subject on the first image to obtain a composite image, and to display the composite image in a preview frame.
The embodiment of the application also provides a storage medium storing a plurality of instructions suitable for being loaded by a processor to perform any one of the image display methods described above.
An embodiment of the present application further provides an electronic device comprising a processor and a memory, the processor being electrically connected to the memory; the memory is used to store instructions and data, and the processor is used to perform the steps of any one of the image display methods described above.
The present application provides an image display method, an image display apparatus, a storage medium and an electronic device. Applied to a first electronic device, the method starts an image shooting function, acquires the currently captured first image, receives a second image captured by a second electronic device during capture of the first image, determines the target subject in the second image, then generates the target subject on the first image to obtain a composite image, and displays the composite image in a preview frame, so that a group photo of users in different places is obtained without post-production matting.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic view of an application scenario of an image display system according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of an image display method according to an embodiment of the present application.
Fig. 3 is another schematic flow chart of an image display method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a second image without a face contour according to an embodiment of the present application.
Fig. 5 is a schematic diagram of the compositing process of a composite image according to an embodiment of the present application.
Fig. 6 is a schematic diagram of the compositing process of another composite image according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an image display device according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a generating module according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an image display method, an image display device, a storage medium and electronic equipment.
Referring to fig. 1, the image display system may include any one of the image display apparatuses provided in the embodiments of the present application. The image display apparatus may be integrated in a first electronic device and a second electronic device, where the first and second electronic devices may be devices with a shooting function, such as smartphones and tablet computers.
In the method, the first electronic device starts an image shooting function and acquires the currently captured first image; receives, during capture of the first image, a second image captured by the second electronic device; determines a target subject in the second image; generates the target subject on the first image to obtain a composite image; and displays the composite image in a preview frame.
For example, in fig. 1, the first electronic device and the second electronic device are both smartphones, each with a camera and a display screen; the camera is mainly used for capturing images, and the display screen is used for displaying the composite image. Specifically, when the user of the first electronic device taps a 'group photo' button on the display interface, a camera start instruction is generated and transmitted simultaneously to the local camera and to the camera of the second electronic device to start image capture. The second electronic device transmits its captured second image to the first electronic device in real time, and the first electronic device then composites the target subject in the second image onto its own captured first image to obtain a composite image.
As shown in fig. 2, fig. 2 is a schematic flowchart of an image display method provided in an embodiment of the present application, which is applied to a first electronic device, and a specific flowchart may be as follows:
101. Start the image shooting function and acquire the currently captured first image.
In this embodiment, the user may start image capture on the first electronic device by tapping a key, such as a "group photo" key, or may start the remote group-photo function by voice or a designated gesture.
102. Receive, during capture of the first image, a second image captured by a second electronic device.
In this embodiment, the first electronic device is the master device and the second electronic device is the slave device. When a remote group photo is required, the master and slave devices establish a communication connection in advance; the master device then starts the group-photo function and begins capturing images, at which point the slave device also captures images simultaneously and transmits its second image to the master device in real time.
103. Determine the target subject in the second image.
In this embodiment, the target subject may be a human or an animal.
For example, the step 103 may specifically include:
detecting whether a face contour exists in the second image;
if a face contour exists, determining the subject having the face contour as the target subject.
In this embodiment, although the facial features of human and animal faces differ, they all include organs such as a nose, eyes, ears and a mouth, and the relative positions of these organs are fairly fixed. A face reference template can therefore be generated from existing images of human or animal faces and matched against the second image: if the matching succeeds, a face contour is determined to exist; if it fails, it is determined that no face contour exists.
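The template-matching step above can be sketched as a sliding-window comparison between the face reference template and each region of the second image. The following is a minimal illustration, not the patent's implementation: images are row-major grayscale pixel lists, and the correlation threshold of 0.8 is an assumed parameter.

```python
def correlation(a, b):
    """Normalized cross-correlation between two equal-sized grayscale patches."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = (sum((x - mean_a) ** 2 for x in a)
           * sum((y - mean_b) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def has_face_contour(image, template, width, t_w, t_h, threshold=0.8):
    """Slide the face reference template over the image; report a match
    (i.e., a face contour exists) if any window's correlation exceeds
    the threshold. The threshold value is an assumption for illustration."""
    height = len(image) // width
    for y in range(height - t_h + 1):
        for x in range(width - t_w + 1):
            window = [image[(y + dy) * width + (x + dx)]
                      for dy in range(t_h) for dx in range(t_w)]
            if correlation(window, template) >= threshold:
                return True
    return False
```

A production system would use a trained face detector rather than raw correlation; this sketch only shows the "match template, succeed or fail" decision the paragraph describes.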
104. The target subject is generated on the first image, a composite image is obtained, and the composite image is displayed in a preview frame.
For example, the step 104 may specifically include:
1-1. Scale the second image according to the first image so that the two images have the same size.
In this embodiment, because the display screens of the first electronic device (master device) and the second electronic device (slave device) are not necessarily the same size, the images captured by the two devices may differ in size. To determine the placement position of the target subject, the second image sent by the slave device must be scaled to the same size as the image captured by the master device.
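As a concrete illustration of this scaling step, the sketch below resizes a row-major pixel list to the first image's dimensions. Nearest-neighbour interpolation is an assumption; the patent does not specify an interpolation method.

```python
def resize_nearest(pixels, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbour resize of a row-major pixel list, so the slave
    device's second image matches the master device's image dimensions."""
    out = []
    for y in range(dst_h):
        sy = min(src_h - 1, y * src_h // dst_h)   # source row for this output row
        for x in range(dst_w):
            sx = min(src_w - 1, x * src_w // dst_w)  # source column
            out.append(pixels[sy * src_w + sx])
    return out
```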
1-2. Determine the display position of the target subject in the scaled second image.
In this embodiment, the display position reflects where the slave-device user stood in front of the slave device's lens. Because the first image and the scaled second image are identical in size, the two images can share one coordinate system; the display position (i.e., the coordinate position) of the target subject in the second image can then serve as the generation position of the target subject in the first image.
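The shared-coordinate-system idea can be made concrete as follows. The (x, y, w, h) bounding-box representation and the helper name are illustrative assumptions, not from the patent.

```python
def map_display_position(bbox, second_size, first_size):
    """Map the target subject's bounding box from the slave device's
    original second-image coordinates into the first image's coordinate
    system. After scaling, both images are the same size, so the scaled
    display position is used directly as the generation position."""
    x, y, w, h = bbox
    sx = first_size[0] / second_size[0]  # horizontal scale factor
    sy = first_size[1] / second_size[1]  # vertical scale factor
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))
```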
1-3. Project the target subject onto the first image according to the display position.
In this embodiment, the target subject may be projected directly at the corresponding position of the first image. That position may, however, coincide with the display position of the master-device user, causing the target subject and the master-device user to overlap in the first image. In that case the target subject need only be shown as an approximate outline rather than in full; that is, step 1-3 may further include:
detecting whether a portrait exists at the display position in the first image;
if a portrait exists, acquiring a contour image of the target subject from the second image, and generating the contour image at that position of the first image;
if not, acquiring the whole image of the target subject from the second image, and generating the whole image at that position of the first image.
In this embodiment, when the projected target subject is found to overlap the person in the first image, the contour image of the target subject may be obtained and drawn on the first image as a dotted line, prompting the users to adjust their positions in time. Note that, to avoid an obvious and jarring size difference between the portrait in the first image and the projected target subject, the size of the target subject may be adjusted based on the size of the portrait in the first image, so that the difference stays within a certain error range.
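The overlap test and the contour-versus-whole-image decision described above can be sketched as a bounding-box intersection check. Boxes are (x, y, w, h) tuples and the mode names are illustrative assumptions.

```python
def boxes_overlap(a, b):
    """Axis-aligned bounding-box intersection test for (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def choose_render_mode(target_bbox, portrait_bbox):
    """If the projected target subject would overlap the master-device
    user's portrait, render only a dotted outline as a repositioning cue;
    otherwise render the whole subject."""
    if portrait_bbox is not None and boxes_overlap(target_bbox, portrait_bbox):
        return "contour"
    return "full"
```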
It is easy to understand that, to help both users adjust their positions, the composite image may be displayed not only in the preview frame of the first electronic device but also in that of the second electronic device. That is, after step 104, the image display method may further include:
generating a display instruction carrying the composite image, the display instruction being used to instruct display of the composite image;
and sending the display instruction to the second electronic device.
In this embodiment, before the remote group photo is finished, i.e. while the user of the first electronic device has not yet tapped the shooting confirmation button, the second electronic device must transmit its captured image to the first electronic device in real time, and must also receive and display, in real time, the composite image returned by the first electronic device, so that the user of the second electronic device can check the group-photo result and adjust position in time.
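One way to picture the display instruction exchanged between master and slave is a small serialized message. The JSON framing and field names below are illustrative assumptions; the patent only requires that the instruction carry the composite image and indicate that it should be displayed.

```python
import json

def make_display_instruction(composite_bytes):
    """Master side: build a display instruction carrying the composite
    image. Hex encoding and field names are assumptions for illustration."""
    return json.dumps({
        "type": "display_composite",
        "image_hex": composite_bytes.hex(),
    })

def handle_display_instruction(raw):
    """Slave side: decode the instruction and recover the composite image
    bytes for display in the preview frame."""
    msg = json.loads(raw)
    assert msg["type"] == "display_composite"
    return bytes.fromhex(msg["image_hex"])
```

In practice the composite frames would go over the existing real-time connection (e.g. a video stream) rather than per-frame JSON; the sketch only shows the instruction's round trip.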
In addition, to better realize the remote group photo, the first and second electronic devices may hold a voice call during shooting: both start the recording function simultaneously and transmit the recorded audio to the other device in real time, which facilitates communication between the two users.
As can be seen from the above description, the image display method provided in this embodiment is applied to a first electronic device. By starting the image shooting function, acquiring the currently captured first image, receiving a second image captured by a second electronic device during capture of the first image, determining the target subject in the second image, and then generating the target subject on the first image to obtain a composite image displayed in a preview frame, a group photo of users in different places is obtained without post-production matting.
This embodiment is described from the viewpoint of the image display apparatus; in particular, a detailed description is given taking as an example the case where the image display apparatus is integrated in the first electronic device and the second electronic device.
Referring to fig. 3, a specific flow of an image display method may be as follows:
201. The first electronic device generates a camera start instruction, captures images according to the instruction, and simultaneously sends the instruction to the second electronic device so that the second electronic device also captures images according to it.
202. The first electronic device acquires the currently captured first image and receives the second image captured by the second electronic device.
For example, referring to fig. 1, when user A of the first electronic device taps the "co-shooting" button on the display interface, a camera start instruction is generated and transmitted simultaneously to the cameras of the first and second electronic devices to start image capture; the second electronic device then transmits its captured image to the first electronic device in real time.
203. The first electronic device detects whether a face contour exists in the second image; if so, the subject having the face contour is determined as the target subject; if not, detection is repeated.
For example, when user B of the second electronic device is being photographed, the camera may capture only a partial image of user B, or fail to capture user B at all, because of an incorrect standing position (see fig. 4). In both cases the first electronic device detects no face contour, and it will not composite a second image that lacks one.
204. The first electronic device scales the second image according to the first image so that the two images have the same size, and then determines the display position of the target subject in the scaled second image.
For example, when the first and second images differ in size, the same corner (e.g., the lower-left or lower-right corner) of the two images may be aligned, and the second image scaled using the four sides of the first image as the reference standard so that the two images coincide exactly.
205. The first electronic device detects whether a portrait exists at the display position in the first image; if so, step 206 is performed; if not, step 207.
206. The first electronic device acquires a contour image of the target subject from the second image, and generates the contour image at that position of the first image to obtain a composite image.
For example, referring to fig. 5, user B may be standing in the middle of the second image. If user A also stands in the middle of the first image, a contour image of user B is generated in the middle of the first image. The contour may be drawn in a bright colour such as red; its line may be dotted or another easily distinguishable style, and the line thickness may be set manually.
207. The first electronic device acquires the whole image of the target subject from the second image, and generates the whole image at that position of the first image to obtain a composite image.
For example, referring to fig. 6, if the first image is a landscape photograph, or user A stands in a middle-left position, the whole image of user B may be generated directly in the middle of the first image.
208. The first electronic device displays the composite image in a preview frame, generates a display instruction carrying the composite image, and sends the display instruction to the second electronic device; the display instruction instructs the second electronic device to display the composite image.
For example, after the composite image is generated, the preview frames of the first and second electronic devices display it simultaneously, so that users A and B can flexibly adjust their standing positions according to how the composite image looks.
As can be seen from the above description, the image display method provided in this embodiment is applied to a first and a second electronic device. The first electronic device starts the image shooting function and acquires the currently captured first image; during capture, the second electronic device starts its shooting function and sends its second image to the first electronic device. The first electronic device then detects whether a face contour exists in the second image; if so, the subject having the face contour is determined as the target subject, and if not, detection is repeated. The second image is scaled according to the first image so that the two have the same size, and the display position of the target subject in the scaled second image is determined. The first electronic device then detects whether a portrait exists at that position in the first image: if so, it acquires the contour image of the target subject from the second image and generates it at that position; if not, it acquires the whole image of the target subject and generates that instead. Finally, the first electronic device displays the composite image in a preview frame, generates a display instruction carrying the composite image, and sends it to the second electronic device to instruct display. A group photo of users in different places is thus obtained without post-production matting.
According to the method described in the foregoing embodiment, the embodiment will be further described from the perspective of an image display apparatus, which may be specifically implemented as an independent entity, or may be implemented by being integrated in an electronic device, such as a terminal, where the terminal may include a mobile phone, a tablet computer, and the like.
Referring to fig. 7, fig. 7 specifically illustrates an image display device provided in an embodiment of the present application, which is applied to a first electronic device. The image display device may include: a starting module 10, a receiving module 20, a determining module 30, and a generating module 40, wherein:
(1) Starting module 10
The starting module 10 is used for starting the image shooting function and acquiring the currently shot first image.
In this embodiment, the user may trigger the first electronic device to start shooting by clicking a key such as a "group photo" key, or by activating the remote group-photo function through voice or a designated gesture.
(2) Receiving module 20
And a receiving module 20, configured to receive a second image captured by a second electronic device during the capturing of the first image.
In this embodiment, the first electronic device is the master device and the second electronic device is the slave device. When a group photo across different places is required, the master device and the slave device establish a communication connection in advance; the master device then starts the group-photo function and shoots an image, the slave device shoots at the same time, and the slave device transmits its shot second image to the master device in real time.
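The patent does not specify how the slave device streams frames to the master. As one hedged sketch (the raw-socket channel, the length-prefix framing scheme, and the function names below are illustrative assumptions, not part of the patent), each encoded frame can be prefixed with its byte length so the master can split the continuous stream back into individual second images:

```python
import socket
import struct

def send_frame(sock: socket.socket, payload: bytes) -> None:
    # Prefix each encoded frame (e.g. a JPEG) with a 4-byte big-endian length.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    # Loop until exactly n bytes arrive; recv() may return partial data.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    # Read the length header first, then the frame body.
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)
```

The same framing would carry composite images back from master to slave for the real-time preview described later.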
(3) Determination module 30
And a determining module 30, configured to determine the target object in the second image.
In this embodiment, the target subject may be a human or an animal.
For example, the determining module 30 may be specifically configured to:
detecting whether a face contour exists in the second image;
if the face contour exists, the object with the face contour is determined as the target object.
In this embodiment, although human and animal faces differ in their features, every face contains organs such as a nose, eyes, ears, and a mouth, and the relative positions of these organs are relatively fixed. A face reference template may therefore be generated from images of existing human or animal faces and matched against the second image: if the matching succeeds, a face contour is determined to exist; if it fails, no face contour exists.
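The template-matching step above can be sketched with normalized cross-correlation. The function below is an illustrative assumption (grayscale NumPy arrays, a single fixed-size template, an exhaustive sliding window) rather than the patent's actual detector; a production system would use a multi-scale or cascade detector:

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray, threshold: float = 0.8):
    """Slide the face reference template over the image; report whether any
    window's normalized cross-correlation exceeds the threshold.
    Returns (found, best_top_left) with best_top_left as (x, y)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:          # flat window, correlation undefined
                continue
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_score >= threshold, best_pos
```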
(4) Generating module 40
And a generating module 40, configured to generate the target object on the first image, obtain a composite image, and display the composite image in the preview frame.
For example, referring to fig. 8, the generating module 40 may specifically include:
and a scaling sub-module 41, configured to perform scaling processing on the second image according to the first image, so that the first image and the second image have the same size.
In this embodiment, because the display screens of the first electronic device (master device) and the second electronic device (slave device) are not necessarily the same size, the images captured by the two devices may differ in size. To determine the placement position of the target shooting object, the second image sent by the slave device is scaled to the same size as the image captured by the master device.
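The scaling sub-module's behavior can be illustrated with a minimal nearest-neighbor resize (a NumPy sketch under assumed array inputs; a real device would use its platform image-scaling API, and better interpolation such as bilinear is usually preferred):

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor resize, bringing the slave's second image to the
    same size as the master's first image."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows[:, None], cols]
```

Because the resize is a pure index mapping, the target subject's coordinates in the scaled image follow directly from the same row/column mapping.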
And a determining sub-module 42 for determining a display position of the target photographic subject in the scaled second image.
In this embodiment, the display position reflects the shooting position of the slave-device user in front of the slave-device lens. Since the first image and the scaled second image are identical in size, the two images may adopt the same coordinate system; the display position (i.e., the coordinate position) of the target shooting object in the second image can then serve directly as the generation position of the target shooting object in the first image.
A projection sub-module 43 for projecting the target subject onto the first image according to the display position.
In this embodiment, the target shooting object may be projected directly at the corresponding position of the first image. That position, however, may coincide with the display position of the master-device user, causing the target shooting object and the master-device user to overlap in the first image. In this case the target shooting object only needs to display an approximate outline rather than be displayed completely; that is, the projection sub-module 43 may specifically be configured to:
detecting whether a portrait exists at the position of the image in the first image;
if the first image exists, acquiring a contour image of the target shooting object from the second image, and generating the contour image at the image position of the first image;
if not, the whole image of the target shooting object is obtained from the second image, and the whole image is generated at the image position of the first image.
In this embodiment, when the target shooting object is found to overlap with a person in the first image during projection, a contour image of the target shooting object may be acquired and drawn on the first image in dotted-line form, prompting the user to adjust position in time. It should be noted that, to avoid a jarringly obvious size difference between the portrait in the first image and the projected target shooting object, the size of the target shooting object may be adjusted based on the size of the portrait in the first image so that the difference stays within a certain error range.
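The projection logic above (whole image when the region is free, outline only when a portrait already occupies it) can be sketched with boolean masks. Everything here is an illustrative assumption: equal-sized arrays, a precomputed subject mask from the second image, a precomputed portrait mask for the first image, and a crude 4-neighbor edge as the "outline":

```python
import numpy as np

def project_subject(first, second, subject_mask, portrait_mask):
    """first, second: equal-sized HxW arrays; subject_mask marks the target
    subject's pixels in the (scaled) second image; portrait_mask marks
    existing portraits in the first image. Returns (composite, overlapped)."""
    out = first.copy()
    overlapped = bool((subject_mask & portrait_mask).any())
    if overlapped:
        # Outline only: keep subject pixels whose 4-neighborhood leaves the mask.
        edge = subject_mask & ~(
            np.roll(subject_mask, 1, 0) & np.roll(subject_mask, -1, 0)
            & np.roll(subject_mask, 1, 1) & np.roll(subject_mask, -1, 1))
        out[edge] = second[edge]
    else:
        # No conflict: copy the whole subject into the first image.
        out[subject_mask] = second[subject_mask]
    return out, overlapped
```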
It is to be understood that, to facilitate position adjustment by the users, the generating module 40 may display the composite image not only in the preview frame of the first electronic device but also in the preview frame of the second electronic device. That is, after generating the target shooting object on the first image to obtain the composite image, the generating module 40 may be further configured to:
generating a display instruction carrying the synthetic image, wherein the display instruction is used for indicating to display the synthetic image;
and sending the display instruction to the second electronic equipment.
In this embodiment, before the group photo ends, that is, while the user of the first electronic device has not yet clicked the shooting confirmation button, the second electronic device transmits its shot image to the first electronic device in real time, and also receives and displays the composite image returned by the first electronic device in real time, so that the user of the second electronic device can check the group-photo result and adjust position in time.
In addition, to better realize group photos across different places, the first electronic device and the second electronic device may carry out voice communication during shooting: both start their recording functions simultaneously and transmit the recorded audio to the other device in real time, which facilitates communication between the two users.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above description, the image display apparatus provided by this embodiment is applied to a first electronic device. The starting module 10 starts the image shooting function and obtains the currently shot first image; the receiving module 20 receives, during the shooting of the first image, a second image shot by a second electronic device; the determining module 30 determines the target shooting object in the second image; and the generating module 40 generates the target shooting object on the first image to obtain a composite image and displays the composite image in a preview frame, so that a group photo of users in different places can be obtained without post-shooting matting.
In addition, the embodiment of the application also provides electronic equipment which can be equipment such as a smart phone and a tablet computer. As shown in fig. 9, the electronic device 900 includes a processor 901, a memory 902, a display 903, and a control circuit 904. The processor 901 is electrically connected to the memory 902, the display 903, and the control circuit 904.
The processor 901 is the control center of the electronic device 900. It connects the various parts of the electronic device through various interfaces and lines, and executes the device's functions and processes data by running or loading application programs stored in the memory 902 and calling data stored in the memory 902, thereby monitoring the electronic device as a whole.
In this embodiment, the processor 901 in the electronic device 900 loads instructions corresponding to processes of one or more application programs into the memory 902 according to the following steps, and the processor 901 runs the application programs stored in the memory 902, so as to implement various functions:
starting an image shooting function and acquiring a first image shot at present;
receiving a second image shot by a second electronic device in the shooting process of the first image;
determining a target shooting object in the second image;
the target subject is generated on the first image, a composite image is obtained, and the composite image is displayed in a preview frame.
The display 903 may be used to display information input by or provided to the user as well as various graphical user interfaces of the terminal, which may be comprised of images, text, icons, video, and any combination thereof.
The control circuit 904 is electrically connected to the display 903, and is configured to control the display 903 to display information.
In some embodiments, as shown in fig. 9, the electronic device 900 further comprises: a radio frequency circuit 905, an input unit 906, an audio circuit 907, a sensor 908, and a power supply 909. The processor 901 is electrically connected to the rf circuit 905, the input unit 906, the audio circuit 907, the sensor 908, and the power source 909.
The radio frequency circuit 905 is configured to transmit and receive radio frequency signals so as to establish wireless communication with a network device or other electronic devices and to exchange signals with them.
The input unit 906 may be used to receive input numbers, character information, or user characteristic information (e.g., a fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 906 may include a fingerprint recognition module.
The audio circuit 907 may provide an audio interface between the user and the terminal through a speaker, microphone, or the like.
The electronic device 900 may also include at least one sensor 908, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display panel according to the ambient light, and a proximity sensor, which turns off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity when the device is stationary; it can be used for applications that recognize the posture of the phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors that may be configured in the terminal, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described in detail here.
The power supply 909 is used to supply power to the various components of the electronic device 900. In some embodiments, the power source 909 may be logically connected to the processor 901 through a power management system, so that functions of managing charging, discharging, and power consumption management are realized through the power management system.
Although not shown in fig. 9, the electronic device 900 may further include a camera, a bluetooth module, etc., which are not described in detail herein.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor. To this end, embodiments of the present invention provide a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute steps in any one of the image display methods provided by the embodiments of the present invention.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image display method provided in the embodiment of the present invention, the beneficial effects that can be achieved by any image display method provided in the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
In summary, although the present application has been described with reference to the preferred embodiments, the above-described preferred embodiments are not intended to limit the present application, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present application, so that the scope of the present application shall be determined by the appended claims.
Claims (6)
1. An image display method applied to a first electronic device, comprising:
when receiving an operation of triggering shooting, generating a camera starting instruction, transmitting the camera starting instruction to second electronic equipment so as to simultaneously control the first electronic equipment and the second electronic equipment to start an image shooting function and acquire a first image currently shot by the first electronic equipment;
in the shooting process of the first image, when the user of the first electronic equipment has not clicked a shooting confirmation button, receiving, in real time, a second image shot by the second electronic equipment;
determining a target shooting object in the second image;
carrying out scaling processing on the second image according to the first image to enable the first image and the second image to have the same size;
determining the display position of the target shooting object in the zoomed second image;
taking a display position of the target shooting object in the zoomed second image as a generation position of the target shooting object in the first image, wherein the display position and the generation position reflect a shooting position of the target shooting object in front of a lens of a second electronic device;
detecting whether a portrait exists at the generation position in the first image; if the first image exists, acquiring a contour image of the target shooting object from the second image, and generating the contour image at the generating position of the first image to prompt a user to adjust the position; if the second image does not exist, acquiring an overall image of the target shooting object from the second image, and generating the overall image at the generation position of the first image;
obtaining a composite image, and displaying the composite image in a preview frame;
generating a display instruction carrying the synthetic image, wherein the display instruction is used for indicating to display the synthetic image;
and sending the display instruction to the second electronic equipment to instruct a user of the second electronic equipment to adjust the position of the target shooting object in the composite image in real time.
2. The image display method according to claim 1, wherein the determining the target subject in the second image includes:
detecting whether a face contour exists in the second image;
and if the face contour exists, determining the object with the face contour as a target object.
3. An image display device applied to a first electronic device, comprising:
the starting module is used for generating a camera starting instruction when receiving an operation of triggering shooting, transmitting the camera starting instruction to the second electronic equipment so as to simultaneously control the first electronic equipment and the second electronic equipment to start an image shooting function and acquire a first image currently shot by the first electronic equipment;
the receiving module is used for receiving a second image shot by second electronic equipment in real time when a user of the first electronic equipment does not click a shooting confirmation button in the shooting process of the first image;
the determining module is used for determining a target shooting object in the second image;
a generating module, configured to perform scaling processing on the second image according to the first image, so that the first image and the second image have the same size, determine a display position of the target object in the scaled second image, use the display position of the target object in the scaled second image as a generation position of the target object in the first image, where the display position and the generation position reflect a shooting position of the target object in front of a lens of a second electronic device, detect whether a portrait exists at the generation position in the first image, if so, acquire a contour image of the target object from the second image, generate the contour image at the generation position of the first image, and if not, acquire an entire image of the target object from the second image, and generating the whole image at the generating position of the first image to obtain a composite image, displaying the composite image in a preview frame, generating a display instruction carrying the composite image, wherein the display instruction is used for indicating to display the composite image, and sending the display instruction to the second electronic equipment so as to indicate a user of the second electronic equipment to adjust the position of the target shooting object in the composite image in real time.
4. The image display device according to claim 3, wherein the determination module is specifically configured to:
detecting whether a face contour exists in the second image;
and if the face contour exists, determining the object with the face contour as a target object.
5. A storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the image display method of claim 1 or 2.
6. An electronic device comprising a processor and a memory, the processor being electrically connected to the memory, the memory being configured to store instructions and data, the processor being configured to perform the steps of the image display method of claim 1 or 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810254602.8A CN108632543B (en) | 2018-03-26 | 2018-03-26 | Image display method, image display device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108632543A CN108632543A (en) | 2018-10-09 |
CN108632543B true CN108632543B (en) | 2020-07-07 |
Family
ID=63696369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810254602.8A Expired - Fee Related CN108632543B (en) | 2018-03-26 | 2018-03-26 | Image display method, image display device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108632543B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109948562B (en) * | 2019-03-25 | 2021-04-30 | 浙江啄云智能科技有限公司 | Security check system deep learning sample generation method based on X-ray image |
CN110163829B (en) * | 2019-04-19 | 2021-07-13 | 北京沃东天骏信息技术有限公司 | Image generation method, device and computer readable storage medium |
CN110365907B (en) * | 2019-07-26 | 2021-09-21 | 维沃移动通信有限公司 | Photographing method and device and electronic equipment |
CN113489918A (en) * | 2020-10-28 | 2021-10-08 | 青岛海信电子产业控股股份有限公司 | Terminal device, server and virtual photo-combination method |
CN112702517B (en) * | 2020-12-24 | 2023-04-07 | 维沃移动通信(杭州)有限公司 | Display control method and device and electronic equipment |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1682528A (en) * | 2002-08-09 | 2005-10-12 | 夏普株式会社 | Image combination device, image combination method, image combination program, and recording medium containing the image combination program |
CN103716537A (en) * | 2013-12-18 | 2014-04-09 | 宇龙计算机通信科技(深圳)有限公司 | Photograph synthesizing method and terminal |
CN104967790A (en) * | 2014-08-06 | 2015-10-07 | 腾讯科技(北京)有限公司 | Photo shooting method, photo shooting apparatus and mobile terminal |
CN105391949A (en) * | 2015-10-29 | 2016-03-09 | 深圳市金立通信设备有限公司 | Photographing method and terminal |
CN105578028A (en) * | 2015-07-28 | 2016-05-11 | 宇龙计算机通信科技(深圳)有限公司 | Photographing method and terminal |
CN106331529A (en) * | 2016-10-27 | 2017-01-11 | 广东小天才科技有限公司 | Image capturing method and apparatus |
CN106534618A (en) * | 2016-11-24 | 2017-03-22 | 广州爱九游信息技术有限公司 | Method, device and system for realizing pseudo field interpretation |
CN106657791A (en) * | 2017-01-03 | 2017-05-10 | 广东欧珀移动通信有限公司 | Method and device for generating synthetic image |
CN107404617A (en) * | 2017-07-21 | 2017-11-28 | 努比亚技术有限公司 | A kind of image pickup method and terminal, computer-readable storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100366059C (en) * | 2001-02-15 | 2008-01-30 | 英业达股份有限公司 | Image playing method and system |
US9916497B2 (en) * | 2015-07-31 | 2018-03-13 | Sony Corporation | Automated embedding and blending head images |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108495032B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN108632543B (en) | Image display method, image display device, storage medium and electronic equipment | |
CN111541845B (en) | Image processing method and device and electronic equipment | |
WO2020238380A1 (en) | Panoramic photography method and terminal device | |
US9332208B2 (en) | Imaging apparatus having a projector with automatic photography activation based on superimposition | |
WO2021051995A1 (en) | Photographing method and terminal | |
CN109246360B (en) | Prompting method and mobile terminal | |
CN108881733B (en) | Panoramic shooting method and mobile terminal | |
CN105554372B (en) | Shooting method and device | |
JP2017532922A (en) | Image photographing method and apparatus | |
CN109474786B (en) | Preview image generation method and terminal | |
KR20160026251A (en) | Method and electronic device for taking a photograph | |
CN108924412B (en) | Shooting method and terminal equipment | |
WO2019165938A1 (en) | Image collection method and apparatus, mobile terminal and storage medium | |
CN109302632B (en) | Method, device, terminal and storage medium for acquiring live video picture | |
CN108335258B (en) | Image processing method and device of mobile terminal | |
CN110196673B (en) | Picture interaction method, device, terminal and storage medium | |
WO2022237839A1 (en) | Photographing method and apparatus, and electronic device | |
CN108924422B (en) | Panoramic photographing method and mobile terminal | |
WO2021238564A1 (en) | Display device and distortion parameter determination method, apparatus and system thereof, and storage medium | |
CN114009003A (en) | Image acquisition method, device, equipment and storage medium | |
CN111447365B (en) | Shooting method and electronic equipment | |
CN113763228A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN110086998B (en) | Shooting method and terminal | |
CN108881721A (en) | A kind of display methods and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. |
|
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20200707 |