CN110300264A - Image processing method, device, mobile terminal and storage medium - Google Patents
- Publication number
- CN110300264A (publication) · CN201910579241.9A (application)
- Authority
- CN
- China
- Prior art keywords
- image
- target object
- camera
- positional relationship
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The present application discloses an image processing method, an image processing apparatus, a mobile terminal, and a storage medium, relating to the field of mobile terminal technology. The method is applied to a mobile terminal and includes: capturing a target scene with at least one of a plurality of cameras of the mobile terminal to obtain at least one first image; determining a target object from the at least one first image and acquiring spatial information of the target object; adjusting the relative positional relationship of the plurality of cameras based on the spatial information, and capturing the target scene simultaneously with the adjusted cameras to obtain a plurality of second images; and performing image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images. By capturing the target scene simultaneously from different viewing angles with cameras adjusted according to the spatial information of the object, images at additional viewing angles are obtained, improving image quality and enhancing the display effect.
Description
Technical field
This application relates to the field of mobile terminal technology, and more particularly to an image processing method, an image processing apparatus, a mobile terminal, and a storage medium.
Background technique
With the development of science and technology, mobile terminals have become one of the most common electronic products in daily life, and users often take photographs with them. However, current photographing can only obtain an image from a fixed angle; a user who wants images from all angles must photograph each angle separately, which is cumbersome.
Summary of the invention
In view of the above problems, the present application proposes an image processing method, an image processing apparatus, a mobile terminal, and a storage medium to solve them.

In a first aspect, an embodiment of the present application provides an image processing method applied to a mobile terminal. The method includes: capturing, by the mobile terminal, a target scene with at least one camera of a plurality of cameras to obtain at least one first image; determining a target object from the at least one first image and acquiring spatial information of the target object; adjusting the relative positional relationship of the plurality of cameras based on the spatial information, and capturing the target scene simultaneously with the plurality of cameras after the adjustment to obtain a plurality of second images; and performing image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, where the viewing angles corresponding to the plurality of target object images include the viewing angles corresponding to the plurality of second images as well as viewing angles beyond them.
In a second aspect, an embodiment of the present application provides an image processing apparatus applied to a mobile terminal. The apparatus includes: a capturing module, configured to capture a target scene with at least one camera of a plurality of cameras to obtain at least one first image; a determining module, configured to determine a target object from the at least one first image and acquire spatial information of the target object; an adjusting module, configured to adjust the relative positional relationship of the plurality of cameras based on the spatial information and capture the target scene simultaneously with the adjusted cameras to obtain a plurality of second images; and an image content compensation module, configured to perform image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, where the viewing angles corresponding to the plurality of target object images include the viewing angles corresponding to the plurality of second images as well as viewing angles beyond them.
In a third aspect, an embodiment of the present application provides a mobile terminal including a memory and a processor, the memory being coupled to the processor and storing instructions that, when executed by the processor, cause the processor to perform the above method.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code that can be invoked by a processor to perform the above method.
The embodiments of the present application provide an image processing method, an image processing apparatus, a mobile terminal, and a storage medium. The method is applied to a mobile terminal and includes: capturing a target scene with at least one of a plurality of cameras to obtain at least one first image; determining a target object from the at least one first image and acquiring spatial information of the target object; adjusting the relative positional relationship of the plurality of cameras based on the spatial information, and capturing the target scene simultaneously with the adjusted cameras to obtain a plurality of second images; and performing image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images. By capturing the target scene simultaneously from different viewing angles with cameras adjusted according to the spatial information of the object, the application obtains images at additional viewing angles, thereby improving image quality and enhancing the display effect.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 shows a structural schematic diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 2 shows a structural schematic diagram of another mobile terminal provided by an embodiment of the present application;
Fig. 3 shows a schematic flowchart of an image processing method provided by an embodiment of the present application;
Fig. 4 shows a schematic flowchart of another image processing method provided by an embodiment of the present application;
Fig. 5 shows a schematic flowchart of a further image processing method provided by an embodiment of the present application;
Fig. 6 shows a schematic diagram of an operation of a mobile terminal provided by an embodiment of the present application;
Fig. 7 shows a schematic diagram of another operation of a mobile terminal provided by an embodiment of the present application;
Fig. 8 shows a module block diagram of an image processing apparatus provided by an embodiment of the present application;
Fig. 9 shows a block diagram of a mobile terminal for performing the image processing method according to an embodiment of the present application;
Fig. 10 shows a storage unit of an embodiment of the present application for saving or carrying program code implementing the image processing method according to an embodiment of the present application.
Specific embodiment
In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
At present, the camera function has become standard on most mobile terminals. Users can carry a mobile terminal around and record memorable moments with it. In addition, with the rapid development of intelligent mobile terminals, many users want to share new, distinctive pictures every day, so their requirements on photo quality and photographing capabilities are increasingly high. However, a user who wants pictures of a target object from all angles currently has to photograph every angle of the object and follow a certain trajectory while shooting, which is cumbersome for users and presents a certain operational difficulty. Meanwhile, the user cannot determine during shooting how to obtain a more satisfactory image, and multiple shots are needed before a satisfactory image can be identified, which is likewise cumbersome.

In view of the above problems, through long-term research the inventors propose the image processing method, apparatus, mobile terminal, and storage medium provided by the embodiments of the present application, in which a plurality of cameras adjusted according to the spatial information of an object capture a target scene simultaneously from different viewing angles, so as to obtain images at additional viewing angles, improve image quality, and enhance the display effect. The specific image processing method is described in detail in the following embodiments.
As shown in Fig. 1, a mobile terminal 100 involved in this solution may include a plurality of cameras 140 and a terminal body 150. The plurality of cameras 140 are movably arranged on the terminal body 150 and have different viewing angles. The plurality of cameras 140 may all be front cameras of the mobile terminal, i.e., cameras on the same side as the operation interface; they may all be rear cameras, i.e., cameras on the side opposite the operation interface (as shown in Fig. 1); or some may be rear cameras while others are located at other positions, which is not limited here. Fig. 1 is only a schematic diagram and does not limit the mobile terminal involved in this solution. Further, the plurality of cameras 140 may be moved according to the spatial information of the object; as shown in Fig. 2, a camera 140 may move in the vertical direction. It should be understood that the structure of the camera 140 shown in Fig. 2 is only illustrative: the camera 140 may move in the vertical direction, in the horizontal direction, or in a specific direction, which is not limited here.
Referring to Fig. 3, Fig. 3 shows a schematic flowchart of an image processing method provided by an embodiment of the present application. The image processing method is used to capture a target scene simultaneously from different viewing angles with a plurality of cameras adjusted according to the spatial information of an object, so as to obtain images at additional viewing angles, improve image quality, and enhance the display effect. In a specific embodiment, the image processing method is applied to the image processing apparatus 200 shown in Fig. 8 and to the mobile terminal 100 (Fig. 9) configured with the image processing apparatus 200. The following describes the specific flow of this embodiment by taking a mobile terminal as an example. It should be understood that the mobile terminal to which this embodiment is applied may be an electronic device with cameras, such as a smartphone, a tablet computer, or a wearable mobile terminal, which is not specifically limited here. The flow shown in Fig. 3 is explained in detail below, and the image processing method may specifically include the following steps:
Step S110: the mobile terminal captures a target scene with at least one camera of the plurality of cameras to obtain at least one first image.

In some embodiments, the mobile terminal may capture the target scene with at least one camera of the plurality of cameras. As one approach, the mobile terminal may turn on a camera to capture the target scene when the user taps the icon of the photographing application and enters the photographing interface, or may turn on a camera after the user selects a preset photographing mode. Further, in this embodiment, the mobile terminal may turn on at least one of the plurality of cameras based on the user's selection when the camera application is opened, or may turn on at least one of the plurality of cameras through a system setting when the preset photographing mode is enabled. The target object is captured by the at least one camera to obtain at least one first image, where a first image may be a completed capture or a preview image, which is not limited here.
Step S120: determine a target object from the at least one first image, and acquire spatial information of the target object.

In this embodiment, based on the at least one acquired first image, the mobile terminal may use image recognition technology to identify the captured first images. Any one of the at least one first image may be identified, or each of them may be identified, so as to recognize one or more objects in the target scene. The target object may then be determined from the one or more recognized objects, either through a user operation, for example based on a tap in the operation interface, or by identifying the focus position in each of the first images and determining the object corresponding to the focus position as the target object.
In some embodiments, after the target object is determined from the at least one first image, the spatial information of the target object may also be acquired. The spatial information of the target object may include at least size information, depth information, and position information of the target object, which is not limited here. The spatial information may be acquired from a single first image, or from two or more first images selected from the at least one first image. Specifically, taking depth information as an example, one first image may be selected and depth prediction may be performed on it based on a deep learning algorithm to obtain the depth information of the target object in that image. As another approach, any two first images may be selected and the spatial information of the target object obtained from them: a stereo matching algorithm finds the pixels corresponding to the target object in the two first images, disparity information is then computed according to the principle of triangulation, and the depth information of the target object is obtained from the disparity combined with the camera parameters. Likewise, more than two first images may be selected to obtain more accurate depth information of the target object.
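The patent invokes the principle of triangulation without giving formulas. As an illustrative sketch only (function names and numbers are hypothetical, not from the patent), under the standard rectified-stereo pinhole model the depth of a matched point follows from its disparity, and a pixel with known depth can be back-projected into 3-D camera coordinates:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a point matched across two rectified views:
    Z = f * B / d under the pinhole stereo model."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def backproject(u, v, depth, focal_px, cx, cy):
    """Back-project pixel (u, v) with known depth into camera coordinates."""
    x = (u - cx) * depth / focal_px
    y = (v - cy) * depth / focal_px
    return (x, y, depth)

# e.g. a 25 px disparity seen by cameras with a 1000 px focal length,
# 5 cm apart, corresponds to a depth of about 2 m
```

Combined with the recognized object's extent in the image, such back-projected points give the size and position components of the spatial information used in the later steps.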
Step S130: adjust the relative positional relationship of the plurality of cameras based on the spatial information, and capture the target scene simultaneously with the plurality of cameras after the adjustment to obtain a plurality of second images.

In this embodiment, based on the acquired spatial information of the target object and the images of the target object captured by the plurality of cameras, it can be judged whether the images captured under the current relative positional relationship meet a certain image quality. For example, it may be judged whether the captured images include the complete contour of the target object, whether the target object is clear, or whether the edge region of the target object is blurred, which is not limited here.
Further, when the images of the target object captured by the plurality of cameras do not meet the required image quality, for example when they do not include the complete contour of the target object or the target object is blurred, the relative positional relationship of the plurality of cameras may be adjusted based on the spatial information of the target object obtained from the at least one first image, and the target scene may then be captured simultaneously with the adjusted cameras to obtain a plurality of second images, where the image quality of the second images is better than that of the first images. Adjusting the relative positional relationship according to the spatial information may mean adjusting the relative positions of the plurality of cameras: the position of one camera may be fixed while the others are moved, or every camera may be moved. As one implementation, the relative angles of the plurality of cameras may also be adjusted based on the spatial information of the target object, either for each camera or only for some of them. As another implementation, the relative positions and relative angles of the plurality of cameras may be adjusted at the same time based on the spatial information of the target object, and the target scene is then captured simultaneously with the adjusted cameras to obtain a plurality of second images with better image quality.
Step S140: perform image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, where the viewing angles corresponding to the plurality of target object images include the viewing angles corresponding to the plurality of second images as well as viewing angles beyond them.

In this embodiment, the mobile terminal captures the target scene simultaneously with the plurality of cameras after the adjustment of the relative positional relationship to obtain a plurality of second images. These second images cover only part of the viewing angles of the target object, namely the viewing angles corresponding to the plurality of cameras on the mobile terminal, and cannot cover all viewing angles. For example, capturing the target object from the front with the plurality of cameras yields images of the front and a portion of the top of the target object, while the appearance of its back remains unknown. Therefore, a preset algorithm may be used to simulate images of the target object at viewing angles other than those corresponding to the plurality of second images, i.e., to perform image content compensation on the target object at those additional viewing angles, thereby obtaining target object images both at the viewing angles corresponding to the plurality of second images and at viewing angles beyond them.
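The "preset algorithm" for image content compensation is left unspecified in the patent. Purely as a hedged illustration of one ingredient of view synthesis, the sketch below re-projects a captured image toward a nearby viewpoint with a planar homography (a simplifying assumption that the scene is planar; real compensation would need depth-aware warping and inpainting of content occluded in every second image):

```python
import numpy as np

def warp_homography(img, H, out_shape):
    """Warp a single-channel image by homography H (source -> target pixel
    coordinates) using inverse nearest-neighbour sampling; target pixels
    with no source pixel are filled with 0."""
    h_out, w_out = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = Hinv @ pts                      # map target coords back to source
    src = src[:2] / src[2]
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros(out_shape, dtype=img.dtype)
    out.reshape(-1)[valid] = img[sy[valid], sx[valid]]
    return out
```

For example, a pure-translation homography shifts the image by one pixel, leaving a zero-filled column where no source content exists; that hole is exactly the kind of region the patent's compensation step must fill from the other second images or by prediction.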
In the image processing method provided by one embodiment of the present application, the mobile terminal captures a target scene with at least one of a plurality of cameras to obtain at least one first image; determines a target object from the at least one first image and acquires its spatial information; adjusts the relative positional relationship of the plurality of cameras based on the spatial information and captures the target scene simultaneously with the adjusted cameras to obtain a plurality of second images; and performs image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images. By capturing the target scene simultaneously from different viewing angles with cameras adjusted according to the spatial information of the object, the application obtains images at additional viewing angles, thereby improving image quality and enhancing the display effect.
Referring to Fig. 4, Fig. 4 shows a schematic flowchart of an image processing method provided by another embodiment of the present application. The image processing method is applied to the above mobile terminal. The flow shown in Fig. 4 is explained in detail below, and the method may specifically include the following steps:

Step S210: the mobile terminal captures a target scene with at least one camera of the plurality of cameras to obtain at least one first image.

Step S220: determine a target object from the at least one first image, and acquire spatial information of the target object.

For the specific description of steps S210-S220, please refer to steps S110-S120; details are not repeated here.
Step S230: obtain the relative positional relationship of the plurality of cameras.

In this embodiment, the relative positional relationship of the plurality of cameras may be obtained based on the at least one first image captured by at least one of the cameras. Specifically, as one approach, any two first images may be selected from the at least one first image, and a corresponding group of feature points chosen from the two images; these may be feature points of the target object or of other content in the images. From the coordinates of this group of feature points, the relative translation distance and the relative rotation matrix between the two cameras corresponding to the two first images can be computed, and thus the relative positional relationship of the plurality of cameras obtained. As another approach, the internal parameters of each of the plurality of cameras may be acquired, and the relative positions between the cameras (i.e., the rotation matrix and translation distance of one camera relative to another) measured and obtained through camera calibration.
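As a minimal sketch of the calibration-based approach (assuming world-to-camera extrinsics (R, t) for each camera are already known from calibration; the function name is illustrative, not from the patent), the relative rotation matrix and translation between two cameras follow by composing their extrinsics:

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Given world-to-camera extrinsics X_cam = R @ X_world + t for two
    calibrated cameras, return the rotation and translation of camera 2
    relative to camera 1, i.e. X_c2 = R_rel @ X_c1 + t_rel."""
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel
```

For example, two parallel cameras 5 cm apart yield an identity relative rotation and a 5 cm relative translation, which is exactly the relative positional relationship that step S240 compares against the object's spatial information.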
Step S240: judge whether the spatial information matches the relative positional relationship.

In this embodiment, whether the relative positional relationship between the plurality of cameras needs to be adjusted can be determined by judging whether the spatial information of the target object matches that relative positional relationship.

Further, in this embodiment, the spatial information may include size information and depth information, and step S240 may include the following steps:

Step S241: based on the size information and the depth information, obtain the region of the target object captured when the plurality of cameras photograph the target scene under the relative positional relationship.

In this embodiment, from the acquired size information and depth information of the target object, the region of the target object captured by the plurality of cameras under the relative positional relationship can be computed by the principle of triangulation. It should be understood that when the size of the target object is small, the captured region may be relatively complete, while when the size of the target object is large, the captured region may not include the complete contour of the target object.
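A hedged sketch of the triangulation-style check described above (the pinhole model, the centred-object assumption, and all names are illustrative, not from the patent): compare the angular extent of the object at its measured depth against the camera's field of view to decide whether the full contour can be captured from the current positions.

```python
import math

def object_fits_in_view(object_size_m, depth_m, fov_deg):
    """True if an object of the given size, at the given depth and centred
    on the optical axis, fits within a camera's field of view."""
    angular_extent = 2.0 * math.degrees(math.atan(object_size_m / (2.0 * depth_m)))
    return angular_extent <= fov_deg
```

A 1 m object 2 m away subtends about 28 degrees and fits in a 60-degree field of view, whereas a 4 m object 1 m away does not; the latter case corresponds to the mismatch that triggers the adjustment in step S250.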
Step S242: judge whether the region is not larger than a preset region.

In this embodiment, whether the spatial information of the target object matches the relative positional relationship of the plurality of cameras can be determined by judging whether the captured region of the target object is not larger than a preset region. The preset region may be set in advance by the user, or set by the mobile terminal or a server based on the type of the target object; it may be a region that can contain the entire contour of the target object, which is not limited here.

Step S243: when the region is not larger than the preset region, determine that the spatial information does not match the relative positional relationship.

In this embodiment, when the region is not larger than the preset region, it can be understood that the complete contour of the target object cannot be obtained from the currently captured region, that is, it is determined that the spatial information does not match the relative positional relationship.
As one implementation, the number of feature points of the target object captured when the plurality of cameras photograph the target scene under the relative positional relationship may be obtained based on the size information and depth information of the target object, and it may then be judged whether this number is not greater than a preset number. The preset number may be set in advance by the user, or set by the mobile terminal or a server based on the type of the target object; it may be the number of feature points that can describe all features of the target object, or the number that can describe its essential features, which is not limited here. When the number of acquired feature points is not greater than the preset number, it can be understood that the currently acquired feature points are insufficient to describe the essential or complete features of the target object, and it is determined that the spatial information does not match the relative positional relationship.
Step S250: when the spatial information does not match the relative positional relationship, adjust the relative positional relationship of the plurality of cameras.

In this embodiment, the spatial information failing to match the relative positional relationship of the plurality of cameras may mean that the images of the target object captured under that relationship do not meet a certain image quality: for example, the captured images may not include the complete contour of the target object, or the edge region of the target object may be blurred. It may also mean that the captured region of the target object is not larger than the preset region, or that the number of captured feature points of the target object is not greater than the preset number, which is not limited here.
Further, when the spatial information of the target object does not match the relative positional relationship of the plurality of cameras, the relative positional relationship may be adjusted to obtain better image quality. Adjusting the relative positional relationship according to the spatial information may mean adjusting the relative positions of the plurality of cameras: the position of one camera may be fixed while the others are moved, or every camera may be moved. As one implementation, the relative angles of the plurality of cameras may also be adjusted based on the spatial information of the target object, either for each camera or only for some of them. As another implementation, the relative positions and relative angles may be adjusted at the same time to obtain better image quality.
Step S260: the target scene is clapped simultaneously by adjusting multiple cameras after relative positional relationship
It takes the photograph, obtains multiple second images.
Step S270: picture material benefit is carried out to the target object based on preset algorithm and the multiple second image
It repays, obtains multiple target object images, the corresponding visual angle of the multiple target object image includes the multiple second image
Visual angle other than corresponding visual angle and the corresponding visual angle of the multiple second image.
For the specific description of steps S260-S270, please refer to steps S130-S140; details are not repeated here.
In the image processing method provided by another embodiment of the application, the mobile terminal shoots a target scene through at least one of multiple cameras to obtain at least one first image, determines a target object from the at least one first image and obtains spatial information of the target object, obtains the relative positional relationship of the multiple cameras, and judges whether the spatial information matches the relative positional relationship. When the spatial information does not match the relative positional relationship, the relative positional relationship of the multiple cameras is adjusted and the target scene is shot simultaneously by the adjusted multiple cameras, obtaining multiple second images; image content compensation is then performed on the target object based on a preset algorithm and the multiple second images, obtaining multiple target object images whose corresponding viewing angles include the viewing angles corresponding to the multiple second images and viewing angles other than those. By judging whether the spatial information matches the relative positional relationship of the multiple cameras, this embodiment determines whether the relative positional relationship of the cameras needs to be adjusted, so that better image quality can be obtained.
Referring to Fig. 5, Fig. 5 is a flow diagram of the image processing method provided by a further embodiment of the application. The image processing method is applied to the above mobile terminal; the flow shown in Fig. 5 is explained in detail below, and the method may specifically include the following steps:
Step S310: the mobile terminal shoots a target scene through at least one camera of the multiple cameras, obtaining at least one first image.
Step S320: a target object is determined from the at least one first image, and spatial information of the target object is obtained.
Step S330: the relative positional relationship of the multiple cameras is adjusted based on the spatial information, and the target scene is shot simultaneously by the multiple cameras after the relative positional relationship is adjusted, obtaining multiple second images.
For the specific description of steps S310-S330, please refer to steps S110-S130; details are not repeated here.
Step S340: image content compensation is performed on the target object based on a preset algorithm and the multiple second images, obtaining multiple target object images, where the viewing angles corresponding to the multiple target object images include the viewing angles corresponding to the multiple second images and viewing angles other than those corresponding to the multiple second images.
Further, in the present embodiment, step S340 may include the following steps:
Step S341A: partial feature parameters of the target object are extracted from the multiple second images.
In the present embodiment, the target scene is shot simultaneously by the multiple cameras after the relative positional relationship is adjusted, and multiple second images are obtained; by processing the multiple second images, images at viewing angles other than those corresponding to the second images can be obtained. As one approach, the mobile terminal may extract partial feature parameters of the target object from the second images, where the feature parameters may include brightness, edges, texture, color, and the like. To extract the partial feature parameters of the target object from the multiple second images, the region where the target object is located may be positioned, the image of that region converted to grayscale and divided into small connected regions, the gradient or edge-direction histogram of the pixels in each connected region collected, and finally these histograms concatenated to obtain a feature vector. As another approach, a neural network may be used: the images corresponding to the region of the target object in the multiple second images are input into the neural network, which outputs the partial feature parameters of the target object extracted from the multiple second images. The partial feature parameters of the target object may also be extracted from the multiple second images in other ways, which are not limited here.
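The grayscale → small regions → gradient-orientation histograms → concatenated feature vector pipeline described above is essentially a HOG-style descriptor. A minimal NumPy sketch, where the cell size and bin count are arbitrary choices rather than values from the text:

```python
import numpy as np

def hog_like_features(gray, cell=8, bins=9):
    """Simplified HOG-style descriptor following the steps in the text:
    per-pixel gradients -> per-cell orientation histograms -> one vector."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = gray.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            # magnitude-weighted histogram of edge directions in this cell
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # L2-normalise
    return np.concatenate(feats)
```

A 16×16 patch with 8-pixel cells yields 4 cells × 9 bins = a 36-dimensional vector.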
Step S342A: the object type of the target object is identified according to the partial feature parameters, and the whole feature parameters of the target object are queried based on the object type.
In the present embodiment, the partial feature parameters of the target object can be extracted based on the multiple second images; to perform image content compensation on the target object, the whole feature parameters of the target object need to be obtained. Based on image recognition technology, the mobile terminal may compare the partial feature parameters against known feature parameters to identify the object type of the target object; further, based on the object type of the target object, a corresponding server or the network may be queried for the whole feature parameters of the target object. For example, when the target object is an automobile, partial feature parameters such as the color and contour of the automobile can be extracted from the multiple second images, and the type of the automobile identified based on the extracted partial feature parameters; if the automobile is identified as an "XX"-brand automobile, the official website of that brand can be queried to obtain the whole feature parameters of the automobile.
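The identify-then-query step can be caricatured as a lookup. Everything below is hypothetical — the catalogue stands in for the server or official-website query, and its keys and parameter values are invented for illustration:

```python
# Hypothetical catalogue standing in for the server / official-website query.
CATALOGUE = {
    "sedan": {"length_mm": 4800, "width_mm": 1850, "colour": "red"},
    "suv":   {"length_mm": 4600, "width_mm": 1900, "colour": "blue"},
}

def identify_and_query(partial_features):
    """Toy classifier: match the partially observed features against the
    catalogue to get an object type, then return its whole feature set.
    A real system would use image recognition here, not a dict lookup."""
    for obj_type, whole in CATALOGUE.items():
        if whole["colour"] == partial_features.get("colour"):
            return obj_type, whole
    return None, None
```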
Step S343A: image content compensation is performed on the target object based on the whole feature parameters, obtaining the multiple target object images.
In the present embodiment, based on the multiple second images and the partial feature parameters of the target object in the multiple second images, the correspondence between the partial feature parameters of the target object and the second images can be determined; after the whole feature parameters of the target object are obtained, the target object images can be determined based on that correspondence with the second images.
Further, in the present embodiment, step S340 may also include the following steps:
Step S341B: the parallax of the multiple second images is obtained.
In the present embodiment, two second images may be taken from the multiple second images; the two images are recognized to obtain the correspondence between them, and the parallax of the second images is calculated based on the triangulation principle. Based on the above method, the parallaxes of the multiple second images can then be obtained. Further, a stereo matching algorithm may also be used: the matching cost of the second images is calculated and cost aggregation performed; after the cost aggregation is completed, the point with the optimal aggregated matching cost within a certain range is chosen as the corresponding matching point, and the parallax corresponding to that matching point is the parallax of the second images. The specific parallax calculation method is not limited here.
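A minimal sum-of-absolute-differences block-matching sketch of the matching-cost idea above; the window size and disparity range are arbitrary choices. With a calibrated rig, depth would then follow from triangulation as focal_length × baseline / disparity:

```python
import numpy as np

def block_match_disparity(left, right, x, y, half=2, max_disp=16):
    """Find the disparity of pixel (x, y): slide a (2*half+1)^2 block from
    the left image along the same row of the right image and keep the
    offset with the lowest sum-of-absolute-differences matching cost."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        xr = x - d
        if xr - half < 0:
            break                      # candidate window would leave the image
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1].astype(np.float64)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Production stereo matchers (e.g. semi-global matching) add the cost-aggregation step the text mentions; this sketch keeps only the per-pixel cost search.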
Step S342B: angle information of the part of the target object on which image content compensation is to be performed is obtained based on the different parallaxes of the multiple second images.
In the present embodiment, based on the different parallaxes of the multiple second images, the mobile terminal can determine the parallax of images of the target object at viewing angles other than those corresponding to the multiple second images, and determine, based on that parallax, the angle information of the part of the target object on which image content compensation is to be performed. Specifically, the coordinate information of the feature points of the target object in any two of the multiple second images may be obtained, the corresponding angle information obtained based on the coordinate information, and the correspondence between that angle information and the parallax between the two second images obtained. Thus, through the parallaxes of the multiple second images and the coordinate information of the feature points of the target object in the multiple second images, multiple correspondences between angle information and parallax can be obtained, and a mapping table between angle information and parallax built from these correspondences. By using the different parallaxes of the multiple second images to simulate the parallax of the part on which image content compensation is to be performed, the angle information of that part can be obtained based on the mapping table obtained above.
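The mapping table between angle information and parallax can be sketched as a sorted list of measured (parallax, angle) samples with linear interpolation for simulated parallaxes; the sample values below are invented for illustration:

```python
def build_angle_parallax_table(samples):
    """samples: (parallax_px, angle_deg) pairs measured from feature points
    in pairs of second images; returns the table sorted by parallax."""
    return sorted(samples)

def angle_for_parallax(table, parallax):
    """Linearly interpolate in the table to estimate the viewing angle for
    a simulated parallax (the part to be compensated); clamps outside the
    sampled range."""
    lo = table[0]
    for hi in table[1:]:
        if lo[0] <= parallax <= hi[0]:
            t = (parallax - lo[0]) / (hi[0] - lo[0])
            return lo[1] + t * (hi[1] - lo[1])
        lo = hi
    return lo[1]
```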
Step S343B: the corresponding image content is matched based on the angle information, and image content compensation is performed on the target object based on the image content, obtaining the multiple target object images.
In the present embodiment, based on the parallax of the second images and the angle information of the part of the target object on which image content compensation is to be performed, matching points can be searched on the virtual view corresponding to the angle information; the pixel content corresponding to each matching point is obtained and moved to the matching point, so that the virtual view, i.e. the target object image, can be obtained. As another approach, the coordinate information of the feature points of the target object in the multiple second images can be obtained; based on the obtained angle information, the coordinate information of the feature points of the target object on which image content compensation is to be performed can be calculated, so that the image of the target object in the region to be compensated is completed according to the coordinate information of the feature points, obtaining the multiple target object images.
Step S350: the multiple target object images and the multiple second images are synthesized, generating multiple third images.
In the present embodiment, multiple target object images can be obtained by performing image content compensation on the multiple second images based on the preset algorithm, where the viewing angles corresponding to the target object images include the viewing angles corresponding to the multiple second images and viewing angles other than those. In some embodiments, the target object images may be images containing only the target object at different angles; the multiple target object images may also be synthesized with the multiple second images to generate multiple third images. Specifically, any one of the multiple second images may be chosen, the region where the target object is located in that second image determined, and the image of that region replaced with the target object image, thereby generating a third image. Further, the target object image may also be overlaid on the region of the second image where the target object is located to generate the third image. The above synthesis methods are merely illustrative and are not limiting.
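The region-replacement synthesis just described reduces to pasting the target-object image over the located region of the chosen second image; the coordinates below are hypothetical:

```python
import numpy as np

def replace_region(second_image, target_object_image, top_left):
    """Paste the per-view target-object image over the region of the second
    image where the target object sits, producing a third image; the input
    second image is left untouched."""
    out = second_image.copy()
    y, x = top_left
    h, w = target_object_image.shape[:2]
    out[y:y + h, x:x + w] = target_object_image
    return out
```

The overlay variant mentioned in the text would additionally blend edges rather than hard-replace pixels.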
In some embodiments, when any one of the multiple target object images is synthesized with any one of the multiple second images, the target object image and the second image can first be processed. Specifically, color, edge, and texture feature values of the edge region of the target object image and of the image at the corresponding position in the second image may be obtained, the similarity between the features calculated, and whether the similarity is greater than a similarity threshold judged. When the similarity is greater than the similarity threshold, features such as the color of the edge region of the target object image and of the image at the corresponding position in the second image are the same or similar, and the target object image can be synthesized with the second image. Further, when the similarity is less than the threshold, feature parameters such as hue, saturation, contrast, and brightness of the edge region of the target object image and of the image at the corresponding position in the second image may be adjusted to increase the similarity between them, so that the display of the target object image in the second image is more natural and consistent in style.
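A crude stand-in for the edge-region similarity check: compare mean-colour vectors by cosine similarity against a threshold. The 0.9 threshold and the mean-colour statistic are illustrative choices, not values from the patent:

```python
import numpy as np

def blend_ok(edge_region, neighbour_region, threshold=0.9):
    """Compare simple colour statistics of the object's edge region and the
    corresponding second-image region; True means they are similar enough
    to composite directly (otherwise hue/saturation/contrast/brightness
    would be adjusted first, per the text)."""
    a = edge_region.reshape(-1, edge_region.shape[-1]).mean(axis=0)
    b = neighbour_region.reshape(-1, neighbour_region.shape[-1]).mean(axis=0)
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return sim > threshold
```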
Step S360: the multiple third images are displayed.
In the present embodiment, the mobile terminal may also display the generated multiple third images, where the multiple third images may be displayed in the same display interface, or displayed one after another. Further, any one of the multiple third images may be displayed, and other third images among the multiple third images displayed in turn based on a switching operation. The switching operation may be rotating the mobile phone as shown in Fig. 6, so that the corresponding other images among the multiple third images are displayed according to the detected rotation angle of the mobile phone; it may also be sliding on the operation interface of the mobile terminal as shown in Fig. 7, so as to switch the third image and display other images among the multiple third images.
In the image processing method provided by a further embodiment of the application, the mobile terminal shoots a target scene through at least one of multiple cameras to obtain at least one first image, determines a target object from the at least one first image and obtains spatial information of the target object, adjusts the relative positional relationship of the multiple cameras based on the spatial information, shoots the target scene simultaneously through the adjusted multiple cameras to obtain multiple second images, and performs image content compensation on the target object based on a preset algorithm and the multiple second images to obtain multiple target object images, whose corresponding viewing angles include the viewing angles corresponding to the multiple second images and viewing angles other than those; the multiple target object images and the multiple second images are then synthesized to generate multiple third images, and the multiple third images are displayed. Compared with the embodiment shown in Fig. 3, this embodiment can also synthesize the target object images with the second images, generating and displaying third images, thereby improving the display effect.
Referring to Fig. 8, Fig. 8 shows a module block diagram of the image processing apparatus 200 provided by an embodiment of the application. The image processing apparatus 200 is applied to the above mobile terminal. The block diagram of Fig. 8 is explained below; the image processing apparatus 200 includes: a shooting module 210, a determining module 220, an adjustment module 230, and an image content compensation module 240, in which:
The shooting module 210 is used for the mobile terminal to shoot a target scene through at least one camera of the multiple cameras, obtaining at least one first image.
The determining module 220 is used for determining a target object from the at least one first image, and obtaining spatial information of the target object.
The adjustment module 230 is used for adjusting the relative positional relationship of the multiple cameras based on the spatial information, and shooting the target scene simultaneously through the multiple cameras after the relative positional relationship is adjusted, obtaining multiple second images.
Further, the adjustment module 230 further includes: a relative position acquisition submodule, a matching judgment submodule, and a first adjustment submodule, in which:
The relative position acquisition submodule is used for obtaining the relative positional relationship of the multiple cameras.
The matching judgment submodule is used for judging whether the spatial information matches the relative positional relationship.
Further, the matching judgment submodule further includes: a region acquisition unit, a region judgment unit, and a determination unit, in which:
The region acquisition unit is used for obtaining, based on the dimension information and the depth information, the region of the target object obtained when the multiple cameras shoot the target scene under the relative positional relationship.
Further, the region acquisition unit further includes: a quantity acquisition subunit, in which:
The quantity acquisition subunit is used for obtaining, based on the dimension information and the depth information, the number of feature points of the target object obtained when the multiple cameras shoot the target scene under the relative positional relationship.
The region judgment unit is used for judging whether the region is not larger than a preset region.
Further, the region judgment unit further includes: a quantity judgment subunit, in which:
The quantity judgment subunit is used for judging whether the number of the feature points is not larger than a preset number.
The determination unit is used for determining, when the region is not larger than the preset region, that the spatial information does not match the relative positional relationship.
Further, the determination unit further includes: a determination subunit, in which:
The determination subunit is used for determining, when the number of the feature points is not larger than the preset number, that the spatial information does not match the relative positional relationship.
The first adjustment submodule is used for adjusting the relative positional relationship of the multiple cameras when the spatial information does not match the relative positional relationship.
Further, the adjustment module 230 further includes: a second adjustment submodule, in which:
The second adjustment submodule is used for adjusting the relative distance of the multiple cameras, and/or adjusting the relative angle of the multiple cameras.
The image content compensation module 240 is used for performing image content compensation on the target object based on a preset algorithm and the multiple second images, obtaining multiple target object images, where the viewing angles corresponding to the multiple target object images include the viewing angles corresponding to the multiple second images and viewing angles other than those corresponding to the multiple second images.
Further, the image content compensation module 240 includes: an extraction submodule, a query submodule, and a first compensation submodule, in which:
The extraction submodule is used for extracting partial feature parameters of the target object from the multiple second images.
The query submodule is used for identifying the object type of the target object according to the partial feature parameters, and querying the whole feature parameters of the target object based on the object type.
The first compensation submodule is used for performing image content compensation on the target object based on the whole feature parameters, obtaining the multiple target object images.
Further, the image content compensation module 240 further includes: a parallax acquisition submodule, an angle information acquisition submodule, and a second compensation submodule, in which:
The parallax acquisition submodule is used for obtaining the parallax of the multiple second images.
The angle information acquisition submodule is used for obtaining, based on the different parallaxes of the multiple second images, the angle information of the part of the target object on which image content compensation is to be performed.
The second compensation submodule is used for matching the corresponding image content based on the angle information, and performing image content compensation on the target object based on the image content, obtaining the multiple target object images.
Further, the image processing apparatus 200 may also include: a synthesis module, a first display module, and a second display module, in which:
The synthesis module is used for synthesizing the multiple target object images and the multiple second images, generating multiple third images.
The first display module is used for displaying the multiple third images.
The second display module is used for displaying any one of the multiple third images, and displaying other third images among the multiple third images in turn based on a switching operation.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in the application, the coupling between modules may be electrical, mechanical, or in other forms.
In addition, the functional modules in the embodiments of the application may be integrated in one processing module, or each module may exist physically alone, or two or more modules may be integrated in one module. The integrated module may be implemented in the form of hardware, or in the form of a software function module.
Referring to Fig. 9, it illustrates a structural block diagram of a mobile terminal 100 provided by an embodiment of the application. The mobile terminal 100 may be a smart phone, tablet computer, e-book reader, or other mobile terminal capable of running application programs. The mobile terminal 100 in the application may include one or more of the following components: a processor 110, a memory 120, a screen 130, a camera 140, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to carry out the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects the various parts of the entire mobile terminal 100 using various interfaces and lines, and executes the various functions of the mobile terminal 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and calling the data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110, and may instead be implemented by a separate communication chip.
The memory 120 may include Random Access Memory (RAM) or Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), and instructions for implementing the following method embodiments. The data storage area may also store data created by the terminal 100 in use (such as a phone book, audio and video data, and chat record data), etc.
Further, the screen 130 may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, etc. The screen 130 is used for displaying information input by the user, information provided to the user, and the various graphical user interfaces of the mobile terminal; these graphical user interfaces may be composed of graphics, text, icons, numbers, video, and any combination thereof.
The camera 140 may be fixedly arranged on the mobile terminal 100, slidably arranged on the mobile terminal 100, or rotatably arranged on the mobile terminal 100, which is not limited here.
Referring to Fig. 10, it illustrates a structural block diagram of a computer-readable storage medium provided by an embodiment of the application. Program code is stored in the computer-readable storage medium 300, and the program code can be called by a processor to execute the methods described in the foregoing method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium 300 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 300 has storage space for program code 310 for executing any of the method steps in the above methods. The program code can be read from or written to one or more computer program products. The program code 310 may, for example, be compressed in a suitable form.
In conclusion image processing method provided by the embodiments of the present application, device, mobile terminal and storage medium, are moved
Dynamic terminal shoots target scene by least one camera in multiple cameras, obtains at least one first figure
Picture;Target object is determined from least one first image, and obtains the spatial information of target object;It is adjusted based on spatial information
The relative positional relationship of multiple cameras, and by adjusting multiple cameras after relative positional relationship simultaneously to target scene into
Row shooting, obtains multiple second images;Picture material benefit is carried out to target object based on preset algorithm and multiple second images
It repays, obtains multiple target object images.The application by the multiple cameras adjusted of the spatial information according to object simultaneously with
Different perspectives shoots target scene, to obtain the image at other visual angles, to improve picture quality, promotes display effect
Fruit.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the application, not to limit them. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features; and these modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the application.
Claims (11)
1. An image processing method, characterized in that it is applied to a mobile terminal, the mobile terminal including a terminal body and multiple cameras, the multiple cameras being movably arranged on the terminal body, and the viewing angles of the multiple cameras being different, the method including:
the mobile terminal shooting a target scene through at least one camera of the multiple cameras, obtaining at least one first image;
determining a target object from the at least one first image, and obtaining spatial information of the target object;
adjusting the relative positional relationship of the multiple cameras based on the spatial information, and shooting the target scene simultaneously through the multiple cameras after the relative positional relationship is adjusted, obtaining multiple second images;
performing image content compensation on the target object based on a preset algorithm and the multiple second images, obtaining multiple target object images, where the viewing angles corresponding to the multiple target object images include the viewing angles corresponding to the multiple second images and viewing angles other than the viewing angles corresponding to the multiple second images.
2. The method according to claim 1, characterized in that adjusting the relative positional relationship of the multiple cameras based on the spatial information includes:
obtaining the relative positional relationship of the multiple cameras;
judging whether the spatial information matches the relative positional relationship;
when the spatial information does not match the relative positional relationship, adjusting the relative positional relationship of the multiple cameras.
3. The method according to claim 2, characterized in that the spatial information includes dimension information and depth information, and judging whether the spatial information matches the relative positional relationship includes:
based on the dimension information and the depth information, obtaining the region of the target object obtained when the multiple cameras shoot the target scene under the relative positional relationship;
judging whether the region is not larger than a preset region;
when the region is not larger than the preset region, determining that the spatial information does not match the relative positional relationship.
4. The method according to claim 3, characterized in that obtaining, based on the dimension information and the depth information, the region of the target object obtained when the multiple cameras shoot the target scene under the relative positional relationship includes:
based on the dimension information and the depth information, obtaining the number of feature points of the target object obtained when the multiple cameras shoot the target scene under the relative positional relationship;
and judging whether the region is not larger than the preset region includes:
judging whether the number of the feature points is not larger than a preset number;
when the number of the feature points is not larger than the preset number, determining that the spatial information does not match the relative positional relationship.
5. The method according to any one of claims 1-4, characterized in that adjusting the relative positional relationship of the multiple cameras includes:
adjusting the relative distance of the multiple cameras, and/or adjusting the relative angle of the multiple cameras.
6. The method according to any one of claims 1-4, characterized in that, after performing image content compensation on the target object based on the preset algorithm and the multiple second images and obtaining multiple target object images, the method further includes:
synthesizing the multiple target object images and the multiple second images, generating multiple third images;
displaying the multiple third images; or
displaying any one of the multiple third images, and displaying other third images among the multiple third images in turn based on a switching operation.
7. The method according to any one of claims 1-4, wherein performing image content compensation on the target object based on a preset algorithm and the multiple second images to obtain multiple target object images comprises:
extracting partial feature parameters of the target object from the multiple second images;
identifying the object type of the target object according to the partial feature parameters, and querying the whole feature parameters of the target object based on the object type; and
performing image content compensation on the target object based on the whole feature parameters to obtain the multiple target object images.
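The pipeline in claim 7 (partial features → object type → whole-feature lookup → compensation) can be sketched with toy data. Everything here is a placeholder assumption: the `WHOLE_FEATURES` table, the trivial `classify` rule, and the dictionary-merge "compensation" stand in for whatever trained classifier and feature database a real system would use.

```python
# Hypothetical lookup table mapping an object type to its whole-object
# feature parameters; a real system would query a model or database.
WHOLE_FEATURES = {
    "mug":  {"outline": "cylinder", "handle": True},
    "ball": {"outline": "sphere", "handle": False},
}

def classify(partial_features: dict) -> str:
    """Stand-in classifier: claim 7 only requires that the object type be
    identified from the partially visible feature parameters."""
    return "mug" if partial_features.get("has_handle_fragment") else "ball"

def compensate(partial_features: dict) -> dict:
    """Query the whole-object features for the identified type, then fill
    in everything the cameras could not see; observed parameters win."""
    obj_type = classify(partial_features)
    whole = WHOLE_FEATURES[obj_type]
    return {**whole, **partial_features}
```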
8. The method according to any one of claims 1-4, wherein performing image content compensation on the target object based on a preset algorithm and the multiple second images to obtain multiple target object images comprises:
obtaining the parallax of the multiple second images;
obtaining, based on the different parallaxes of the multiple second images, angle information of the portion of the target object on which image content compensation is to be performed; and
matching corresponding image content based on the angle information, and performing image content compensation on the target object based on the image content to obtain the multiple target object images.
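The parallax-to-angle step in claim 8 rests on standard two-view geometry (depth = focal length × baseline / disparity). The sketch below is an assumed simplification: the function names, the nearest-angle `match_content` rule, and the content bank are illustrative, not the patent's algorithm.

```python
import math

def angle_from_parallax(disparity_px: float, focal_px: float,
                        baseline_mm: float) -> float:
    """Recover a viewing angle (degrees) from the disparity between two
    second images via the stereo relation depth = focal * baseline /
    disparity; the angle indicates which side of the target object is
    hidden and needs image content compensation."""
    depth_mm = focal_px * baseline_mm / disparity_px
    return math.degrees(math.atan2(baseline_mm, depth_mm))

def match_content(angle_deg: float, content_bank: dict) -> str:
    """Pick the stored content whose labelled angle is closest to the
    angle of the portion to be compensated."""
    return min(content_bank, key=lambda k: abs(content_bank[k] - angle_deg))
```

For example, a 10 px disparity with a hypothetical 1000 px focal length and 50 mm baseline gives a depth of 5 m and an angle under one degree, so near-frontal content would be matched.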
9. An image processing apparatus, applied to a mobile terminal, the mobile terminal comprising a terminal body and multiple cameras, the multiple cameras being movably arranged on the terminal body and having different viewing angles, the apparatus comprising:
a shooting module, configured to shoot a target scene through at least one of the multiple cameras to obtain at least one first image;
a determining module, configured to determine a target object from the at least one first image and obtain spatial information of the target object;
an adjusting module, configured to adjust the relative positional relationship of the multiple cameras based on the spatial information, and to shoot the target scene simultaneously with the multiple adjusted cameras to obtain multiple second images; and
an image content compensation module, configured to perform image content compensation on the target object based on a preset algorithm and the multiple second images to obtain multiple target object images, the viewing angles corresponding to the multiple target object images comprising the viewing angles corresponding to the multiple second images and viewing angles other than the viewing angles corresponding to the multiple second images.
10. A mobile terminal, comprising a memory and a processor, the memory being coupled to the processor and storing instructions which, when executed by the processor, cause the processor to perform the method according to any one of claims 1-8.
11. A computer-readable storage medium, wherein program code is stored in the computer-readable storage medium, and the program code can be called by a processor to perform the method according to claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910579241.9A CN110300264B (en) | 2019-06-28 | 2019-06-28 | Image processing method, image processing device, mobile terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110300264A true CN110300264A (en) | 2019-10-01 |
CN110300264B CN110300264B (en) | 2021-03-12 |
Family
ID=68029536
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910579241.9A Active CN110300264B (en) | 2019-06-28 | 2019-06-28 | Image processing method, image processing device, mobile terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110300264B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1701214A (en) * | 2003-08-13 | 2005-11-23 | 株式会社拓普康 | 3-dimensional measurement device and electronic storage medium |
CN101556697A (en) * | 2008-04-10 | 2009-10-14 | 上海宝康电子控制工程有限公司 | Method and system for motion target tracking based on rapid characteristic points |
CN101651841A (en) * | 2008-08-13 | 2010-02-17 | 华为技术有限公司 | Method, system and equipment for realizing stereo video communication |
CN101840146A (en) * | 2010-04-20 | 2010-09-22 | 夏佳梁 | Method and device for shooting stereo images by automatically correcting parallax error |
CN104568003A (en) * | 2014-12-29 | 2015-04-29 | 国家电网公司 | Remote monitoring system and method for ice coating process of power transmission lines |
US9786064B2 (en) * | 2015-01-29 | 2017-10-10 | Electronics And Telecommunications Research Institute | Multi-camera control apparatus and method to maintain location and size of object in continuous viewpoint switching service |
CN107527336A (en) * | 2016-06-22 | 2017-12-29 | 北京疯景科技有限公司 | Relative position of lens scaling method and device |
CN106228608A (en) * | 2016-08-24 | 2016-12-14 | 集美大学 | A kind of hand-held device for object quick reconfiguration based on binocular and method |
US10230900B2 (en) * | 2016-10-20 | 2019-03-12 | Hyundai Motor Company | Apparatus for controlling exposure of multi-view camera, system including the same, and method for controlling exposure of multi-view camera |
CN106507087A (en) * | 2016-12-05 | 2017-03-15 | 宇龙计算机通信科技(深圳)有限公司 | A kind of terminal imaging method and system |
CN108694729A (en) * | 2017-04-07 | 2018-10-23 | 深圳市腾盛工业设备有限公司 | Localization method, unit based on image detection and storage medium |
CN107507239A (en) * | 2017-08-23 | 2017-12-22 | 维沃移动通信有限公司 | A kind of image partition method and mobile terminal |
CN109816702A (en) * | 2019-01-18 | 2019-05-28 | 苏州矽典微智能科技有限公司 | A kind of multiple target tracking device and method |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110740264A (en) * | 2019-10-31 | 2020-01-31 | 重庆工商职业学院 | intelligent camera data rapid acquisition system and acquisition method |
CN110740264B (en) * | 2019-10-31 | 2021-06-04 | 重庆工商职业学院 | Intelligent camera data rapid acquisition system and acquisition method |
CN110855891A (en) * | 2019-11-26 | 2020-02-28 | 爱菲力斯(深圳)科技有限公司 | Method and device for adjusting camera shooting angle based on human body posture and robot |
CN111083377A (en) * | 2019-12-30 | 2020-04-28 | 维沃移动通信有限公司 | Shooting method and electronic equipment |
CN111314610B (en) * | 2020-02-26 | 2022-03-11 | 维沃移动通信有限公司 | Control method and electronic equipment |
CN111314610A (en) * | 2020-02-26 | 2020-06-19 | 维沃移动通信有限公司 | Control method and electronic equipment |
CN112714255A (en) * | 2020-12-30 | 2021-04-27 | 维沃移动通信(杭州)有限公司 | Shooting method, shooting device, electronic equipment and readable storage medium |
WO2022156703A1 (en) * | 2021-01-22 | 2022-07-28 | 维沃移动通信(杭州)有限公司 | Image display method and apparatus, and electronic device |
CN112784081A (en) * | 2021-01-22 | 2021-05-11 | 维沃移动通信(杭州)有限公司 | Image display method and device and electronic equipment |
CN113015094A (en) * | 2021-02-10 | 2021-06-22 | 维沃移动通信有限公司 | Equipment positioning method and device |
CN113015094B (en) * | 2021-02-10 | 2022-11-25 | 维沃移动通信有限公司 | Equipment positioning method and device |
CN113114975A (en) * | 2021-04-07 | 2021-07-13 | 科大讯飞股份有限公司 | Image splicing method and device, electronic equipment and storage medium |
CN113114975B (en) * | 2021-04-07 | 2023-04-18 | 科大讯飞股份有限公司 | Image splicing method and device, electronic equipment and storage medium |
CN113141465A (en) * | 2021-04-20 | 2021-07-20 | 云南云能科技有限公司 | Unmanned aerial vehicle multilayer multi-view-angle crossed live-action image acquisition method and device |
CN113220251A (en) * | 2021-05-18 | 2021-08-06 | 北京达佳互联信息技术有限公司 | Object display method, device, electronic equipment and storage medium |
CN113220251B (en) * | 2021-05-18 | 2024-04-09 | 北京达佳互联信息技术有限公司 | Object display method, device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110300264A (en) | Image processing method, device, mobile terminal and storage medium | |
CN111667520B (en) | Registration method and device for infrared image and visible light image and readable storage medium | |
US20220012495A1 (en) | Visual feature tagging in multi-view interactive digital media representations | |
US11361459B2 (en) | Method, device and non-transitory computer storage medium for processing image | |
WO2015180659A1 (en) | Image processing method and image processing device | |
US8269722B2 (en) | Gesture recognition system and method thereof | |
JP6685827B2 (en) | Image processing apparatus, image processing method and program | |
US20190080457A1 (en) | Electronic device and method for automatic human segmentation in image | |
US10824910B2 (en) | Image processing method, non-transitory computer readable storage medium and image processing system | |
CN108200334B (en) | Image shooting method and device, storage medium and electronic equipment | |
CN107818305A (en) | Image processing method, device, electronic equipment and computer-readable recording medium | |
US20110273369A1 (en) | Adjustment of imaging property in view-dependent rendering | |
US11783443B2 (en) | Extraction of standardized images from a single view or multi-view capture | |
US20200258309A1 (en) | Live in-camera overlays | |
US20180357819A1 (en) | Method for generating a set of annotated images | |
CN110288534B (en) | Image processing method, device, electronic equipment and storage medium | |
CN106165387A (en) | Light field processing method | |
US20140009503A1 (en) | Systems and Methods for Tracking User Postures to Control Display of Panoramas | |
CN108810406A (en) | Portrait light efficiency processing method, device, terminal and computer readable storage medium | |
WO2014008320A1 (en) | Systems and methods for capture and display of flex-focus panoramas | |
CN109525786A (en) | Method for processing video frequency, device, terminal device and storage medium | |
CN110505398A (en) | A kind of image processing method, device, electronic equipment and storage medium | |
EP3035242B1 (en) | Method and electronic device for object tracking in a light-field capture | |
CN110266955A (en) | Image processing method, device, electronic equipment and storage medium | |
CN107479715A (en) | The method and apparatus that virtual reality interaction is realized using gesture control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||