CN110266926B - Image processing method, image processing device, mobile terminal and storage medium - Google Patents

Image processing method, image processing device, mobile terminal and storage medium

Info

Publication number
CN110266926B
CN110266926B CN201910579229.8A
Authority
CN
China
Prior art keywords
images
target object
target
image
mobile terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910579229.8A
Other languages
Chinese (zh)
Other versions
CN110266926A (en)
Inventor
杜鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910579229.8A
Publication of CN110266926A
Application granted
Publication of CN110266926B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces

Abstract

The application discloses an image processing method and device, a mobile terminal, and a storage medium, and relates to the technical field of mobile terminals. The method is applied to a mobile terminal and includes: shooting a target scene from different viewing angles through a plurality of cameras simultaneously to obtain a plurality of first images; acquiring one or more objects in the target scene based on the plurality of first images, determining a target object from the one or more objects, and acquiring a first object image corresponding to the target object; acquiring a parameter configuration instruction, updating parameter information of the target object based on the parameter configuration instruction, and acquiring a second object image corresponding to the target object; and updating the first object image of any one of the plurality of first images to the second object image to obtain a plurality of second images. By shooting the target scene from different viewing angles with a plurality of cameras, images at other viewing angles can be obtained, which improves the display effect and the user experience.

Description

Image processing method, image processing device, mobile terminal and storage medium
Technical Field
The present application relates to the field of mobile terminal technologies, and in particular, to an image processing method and apparatus, a mobile terminal, and a storage medium.
Background
With the development of science and technology, mobile terminals have become one of the most common electronic products in daily life, and users often take pictures with them. However, current shooting can only acquire images at fixed angles; a user who wants images of an object from every angle has to shoot each angle separately, which is tedious to operate.
Disclosure of Invention
In view of the above problems, the present application proposes an image processing method, apparatus, mobile terminal, and storage medium to solve the above problems.
In a first aspect, an embodiment of the present application provides an image processing method, which is applied to a mobile terminal, and the method includes: the mobile terminal shoots a target scene at different visual angles through the plurality of cameras at the same time to obtain a plurality of first images; acquiring one or more objects in the target scene based on the plurality of first images, determining a target object from the one or more objects, and acquiring a first object image corresponding to the target object; acquiring a parameter configuration instruction, updating parameter information of the target object based on the parameter configuration instruction, and acquiring a second object image corresponding to the target object, wherein the first object image and the second object image have different corresponding visual angles; and updating a first object image of any one of the plurality of first images to the second object image to obtain a plurality of second images.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which is applied to a mobile terminal, and the apparatus includes: the shooting module is used for shooting a target scene at different visual angles through the plurality of cameras by the mobile terminal to obtain a plurality of first images; an object obtaining module, configured to obtain one or more objects in the target scene based on the plurality of first images, determine a target object from the one or more objects, and obtain a first object image corresponding to the target object; the instruction acquisition module is used for acquiring a parameter configuration instruction, updating parameter information of the target object based on the parameter configuration instruction, and acquiring a second object image corresponding to the target object, wherein the first object image and the second object image have different corresponding visual angles; and the updating module is used for updating the first object image of any one of the plurality of first images into the second object image to obtain a plurality of second images.
In a third aspect, an embodiment of the present application provides a mobile terminal, including a memory and a processor, where the memory is coupled to the processor and stores instructions that, when executed by the processor, cause the processor to perform the above method.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, and the program code can be called by a processor to execute the above method.
The embodiment of the application provides an image processing method and device, a mobile terminal, and a storage medium. The method is applied to a mobile terminal and includes: shooting a target scene from different viewing angles through a plurality of cameras simultaneously to obtain a plurality of first images; acquiring one or more objects in the target scene based on the plurality of first images, determining a target object from the one or more objects, and acquiring a first object image corresponding to the target object; acquiring a parameter configuration instruction, updating parameter information of the target object based on the parameter configuration instruction, and acquiring a second object image corresponding to the target object; and updating the first object image of any one of the plurality of first images to the second object image to obtain a plurality of second images. By shooting the target scene from different viewing angles with a plurality of cameras, images at other viewing angles can be obtained, which improves the display effect and the user experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
Fig. 2 shows a flow chart of an image processing method according to an embodiment of the present application;
Fig. 3 shows a schematic interface diagram of a mobile terminal according to an embodiment of the present application;
Fig. 4 shows a flow chart of another image processing method according to an embodiment of the present application;
Fig. 5 shows a schematic diagram of an operation of a mobile terminal according to an embodiment of the present application;
Fig. 6 shows a schematic diagram of another operation of a mobile terminal according to an embodiment of the present application;
Fig. 7 shows a flow chart of still another image processing method according to an embodiment of the present application;
Fig. 8 shows a block diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 9 shows a block diagram of a mobile terminal for executing an image processing method according to an embodiment of the present application;
Fig. 10 shows a storage unit for storing or carrying program code implementing an image processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
At present, the photographing function has become standard on most mobile terminals: users carry their terminals with them and record the moments around them through them. With the rapid intelligent development of mobile terminals, many users want new and distinctive pictures to share every day, so requirements on picture quality and shooting-mode functions keep rising. However, when a user currently wants pictures of a target object from every angle, the user has to shoot each angle of the object and keep to a certain trajectory while shooting, which is cumbersome and involves a certain operational difficulty.
In view of the above problems, through long-term research the inventor has proposed the image processing method and apparatus, mobile terminal, and storage medium provided in the embodiments of the present application: by shooting a target scene from different viewing angles through a plurality of cameras simultaneously, images at additional viewing angles can be obtained, which improves the display effect and the user experience. The specific image processing method is described in detail in the following embodiments.
As shown in fig. 1, the mobile terminal 100 of this embodiment may include a plurality of cameras 140 with different viewing angles. The plurality of cameras 140 may all be front cameras of the mobile terminal, that is, cameras located on the same side as the operation interface; or they may all be rear cameras, that is, cameras located on the back opposite the operation interface (as shown in fig. 1); the mobile terminal may further include cameras at other positions, which is not limited herein. Fig. 1 is only a schematic diagram and does not limit the mobile terminal of the present application.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an image processing method according to an embodiment of the present application. The image processing method shoots the target scene from different viewing angles through a plurality of cameras simultaneously, so that images at additional viewing angles can be obtained, improving the display effect and the user experience. In a specific embodiment, the image processing method is applied to the image processing apparatus 200 shown in fig. 8 and the mobile terminal 100 (fig. 9) equipped with the image processing apparatus 200. The following describes the specific flow of this embodiment by taking a mobile terminal as an example; it can be understood that the mobile terminal of this embodiment may be an electronic device including cameras, such as a smartphone, a tablet computer, or a wearable device, which is not specifically limited herein. As described in detail with respect to the flow shown in fig. 2, the image processing method may specifically include the following steps:
step S110: the mobile terminal shoots a target scene at different visual angles through the plurality of cameras simultaneously to obtain a plurality of first images.
The mobile terminal can shoot a target scene from different viewing angles through a plurality of cameras simultaneously. As one mode, the mobile terminal may start the plurality of cameras to shoot the target scene when the user clicks the icon of the shooting software to enter the shooting interface, or may start the plurality of cameras and shoot after the user selects a preset shooting mode. In this way the target scene is shot at the same moment through the plurality of cameras, and a plurality of first images of the target scene at different viewing angles are obtained, where the viewing angles of the target scene corresponding to different first images are different.
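The patent does not specify a capture API; the following is a minimal, hypothetical sketch of "one frame per camera, triggered together". The `Camera` class, its `grab()` method, and the angle values are invented purely for illustration.

```python
import numpy as np

class Camera:
    """Illustrative stand-in for one physical camera at a fixed viewing angle."""
    def __init__(self, angle_deg, height=480, width=640):
        self.angle_deg = angle_deg
        self.height, self.width = height, width

    def grab(self):
        # A real camera would return a sensor frame; here we return a dummy image.
        return np.zeros((self.height, self.width, 3), dtype=np.uint8)

def capture_first_images(cameras):
    """Trigger every camera at (approximately) the same moment, one frame each."""
    return [cam.grab() for cam in cameras]

cameras = [Camera(a) for a in (-30, 0, 30)]   # three cameras, three viewing angles
first_images = capture_first_images(cameras)
print(len(first_images))                      # one first image per camera
```

Each element of `first_images` plays the role of one "first image" at its camera's viewing angle.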
Step S120: one or more objects in the target scene are acquired based on the plurality of first images, a target object is determined from the one or more objects, and a first object image corresponding to the target object is acquired.
In this embodiment, the mobile terminal may capture the target scene from different viewing angles through a plurality of cameras simultaneously to obtain a plurality of first images. The mobile terminal can recognize the plurality of captured first images based on an image recognition technology and identify one or more objects of the target scene from them. The target object may then be determined from the one or more recognized objects; it may be determined by a user operation, for example based on a click operation on the operation interface, or the focus position in each of the plurality of first images may be recognized and the object corresponding to the focus position determined as the target object. Further, based on the determined target object, a plurality of first object images corresponding to the target object may be acquired from the plurality of first images respectively, where a first object image may be an image that contains only the target object and no other areas.
As an embodiment, any one of the first images may be selected for determining the target object. For example, the plurality of first images may be displayed in sequence, the user selects one of them, image recognition is performed on the selected first image to obtain the one or more objects of the target scene contained in it, and the target object is determined from among them.
Step S130: acquiring a parameter configuration instruction, updating the parameter information of the target object based on the parameter configuration instruction, and acquiring a second object image corresponding to the target object, wherein the first object image and the second object image have different corresponding visual angles.
In this embodiment, having acquired the first object image, the mobile terminal may further acquire an image of the target object at a viewing angle different from that of the first object image, that is, a second object image. The mobile terminal may obtain a parameter configuration instruction for configuring parameters of the target object, extract the relevant parameter information and parameter adjustment information from the instruction, and update the parameter information of the target object, so as to obtain the second object image corresponding to the target object. The second object image is an image of the target object after the parameter information is updated.
In some embodiments, the parameter configuration instruction may configure the angle of the target object: a target rotation angle is obtained based on the parameter configuration instruction, the target object is rotated based on the target rotation angle, and a plurality of second object images corresponding to the target object during the rotation are obtained, the viewing angles corresponding to the plurality of second object images being different. For example, when the target object is an automobile, the automobile is rotated based on the target rotation angle; as shown in fig. 3, the automobile shown in a of fig. 3 may be rotated, so that the second object image 420 shown in b of fig. 3 can be acquired, whose viewing angle differs from that of the first object image 410 shown in a of fig. 3.
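The patent leaves open how a requested rotation angle is mapped to a view. One simple assumption, sketched below with illustrative angles and names, is to select the second object image whose capture angle is nearest to the requested target rotation angle.

```python
def nearest_view(view_angles, target_angle):
    """Pick the captured view whose camera angle is closest to the requested rotation."""
    return min(range(len(view_angles)), key=lambda i: abs(view_angles[i] - target_angle))

view_angles = [-30, 0, 30]           # angles at which the object images were captured
idx = nearest_view(view_angles, 25)  # user asks to rotate the object to ~25 degrees
print(idx)                           # -> 2 (the 30-degree view is nearest)
```

A real implementation could instead interpolate between neighboring views, but nearest-view selection is the simplest consistent reading of "rotate and show another viewing angle".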
Step S140: and updating a first object image of any one of the plurality of first images to the second object image to obtain a plurality of second images.
In this embodiment, after a plurality of second object images corresponding to the target object are acquired, the mobile terminal may update the first object image of any one of the plurality of first images to a second object image, thereby obtaining a plurality of second images. As one way, updating the first object image to the second object image may mean replacing the first object image with the second object image, or overlaying the second object image on the first object image. For example, referring to fig. 3 again, c in fig. 3 is a first image; the mobile terminal may replace the first object image 410 with the second object image 420 to obtain a second image, i.e. image d in fig. 3.
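The "replace or overlay" update can be sketched as a plain NumPy region copy; the bounding-box coordinates and the function name below are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def update_object_image(first_image, second_object_image, top, left):
    """Replace the region of `first_image` at (top, left) with the second object image."""
    h, w = second_object_image.shape[:2]
    second = first_image.copy()                    # keep the original first image intact
    second[top:top + h, left:left + w] = second_object_image
    return second

first = np.zeros((6, 8, 3), dtype=np.uint8)        # a first image
obj = np.full((2, 3, 3), 255, dtype=np.uint8)      # the second object image
second = update_object_image(first, obj, top=1, left=2)
print(second[1, 2], second[0, 0])                  # pasted pixel vs untouched pixel
```

Overlaying with transparency would add an alpha blend at the same location instead of a hard copy.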
In the image processing method provided by this embodiment of the application, the mobile terminal shoots a target scene from different viewing angles through a plurality of cameras simultaneously to obtain a plurality of first images; acquires one or more objects in the target scene based on the plurality of first images, determines a target object from the one or more objects, and acquires a first object image corresponding to the target object; acquires a parameter configuration instruction, updates parameter information of the target object based on the parameter configuration instruction, and obtains a second object image corresponding to the target object; and updates the first object image of any one of the plurality of first images to the second object image to obtain a plurality of second images. In this way, by shooting the target scene from different viewing angles through the plurality of cameras, images at other viewing angles can be obtained, the display effect is improved, and the user experience is improved.
Referring to fig. 4, fig. 4 is a flowchart illustrating an image processing method according to another embodiment of the present application. The image processing method is applied to the mobile terminal, and will be described in detail with respect to the flow shown in fig. 4, and the method may specifically include the following steps:
step S210: the mobile terminal shoots a target scene at different visual angles through the plurality of cameras simultaneously to obtain a plurality of first images.
Step S220: acquiring one or more objects in the target scene from the plurality of first images based on a preset object condition.
In this embodiment, the mobile terminal shoots the target scene from different viewing angles through a plurality of cameras simultaneously to obtain a plurality of first images. Based on a preset object condition, one or more objects in the target scene may be acquired from the plurality of first images. The preset object condition may be set by the system or preset by the user. It may be the type of an object: for example, if the preset object condition is "person", the mobile terminal identifies and acquires all persons in the target scene from the plurality of first images. It may also be the color of an object: for example, if the preset object condition is "green", the mobile terminal identifies and acquires all green objects in the target scene from the plurality of first images.
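A minimal sketch of a color-based preset object condition ("green"), assuming a simple channel-dominance test; the margin and the RGB layout are illustrative choices, not a method the patent specifies.

```python
import numpy as np

def green_object_mask(image, margin=30):
    """Rough 'green object' test: G channel exceeds both R and B by a margin."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    return (g > r + margin) & (g > b + margin)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (10, 200, 10)    # a green pixel
img[1, 1] = (200, 10, 10)    # a red pixel
mask = green_object_mask(img)
print(mask)                  # True only where the pixel is green-dominant
```

Pixels passing the mask would then be grouped into connected regions to form candidate objects.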
Step S230: highlighting the one or more objects.
In this embodiment, when one or more objects are acquired based on the preset object condition, the one or more objects may be highlighted, so that the user can easily confirm which objects in the target scene meet the preset object condition. The one or more objects may be highlighted by framing them with selection boxes, or by deepening the color of the areas where the one or more objects are located.
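Highlighting by framing can be sketched as drawing a one-pixel box around an object's bounding box; this is an illustrative stand-in for whatever UI drawing the terminal actually performs, and the coordinates are hypothetical.

```python
import numpy as np

def highlight(image, top, left, bottom, right, color=(255, 0, 0)):
    """Draw a one-pixel rectangle outline around an object to highlight it."""
    out = image.copy()
    out[top, left:right] = color           # top edge
    out[bottom - 1, left:right] = color    # bottom edge
    out[top:bottom, left] = color          # left edge
    out[top:bottom, right - 1] = color     # right edge
    return out

img = np.zeros((5, 5, 3), dtype=np.uint8)
boxed = highlight(img, 1, 1, 4, 4)
print(boxed[1, 1], boxed[2, 2])   # border pixel is colored, interior untouched
```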
Step S240: receiving a selection operation, determining the target object from the one or more objects based on the selection operation, and acquiring a first object image corresponding to the target object.
In this embodiment, the mobile terminal may identify and acquire one or more objects from the plurality of first images and highlight them on the display interface. The user may then determine the target object from the one or more objects by a selection operation; for example, a touch operation may be performed on the display interface to select the target object. Further, the object located in the middle of the display interface may be determined as the target object, or the focus position may be recognized and the object corresponding to it determined as the target object. After receiving the selection operation, the mobile terminal may determine the target object from the one or more objects based on the selection operation and locate the display area of the target object in the first image, thereby obtaining the first object image corresponding to the target object, where the first object image may be an image that contains only the target object and no other areas.
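Mapping a touch point to a highlighted object reduces to a hit test against the objects' bounding boxes. The sketch below assumes axis-aligned boxes in `(left, top, right, bottom)` order; both the representation and the numbers are illustrative.

```python
def object_at(tap, boxes):
    """Return the index of the first object whose bounding box contains the tap point."""
    x, y = tap
    for i, (left, top, right, bottom) in enumerate(boxes):
        if left <= x < right and top <= y < bottom:
            return i
    return None          # tap landed on no highlighted object

boxes = [(0, 0, 50, 50), (60, 10, 120, 90)]   # one box per highlighted object
print(object_at((70, 20), boxes))             # -> 1
print(object_at((200, 200), boxes))           # -> None
```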
Step S250: acquiring a parameter configuration instruction, updating the parameter information of the target object based on the parameter configuration instruction, and acquiring a second object image corresponding to the target object, wherein the first object image and the second object image have different corresponding visual angles.
For detailed description of step S250, please refer to step S130, which is not described herein.
Step S260: and updating a first object image of any one of the plurality of first images to the second object image to obtain a plurality of second images.
In this embodiment, a plurality of second object images corresponding to the target object are acquired, and the mobile terminal may update a first object image of any one of the plurality of first images to the second object image, so as to obtain a plurality of second images.
In some embodiments, the mobile terminal may further display the obtained plurality of second images: it may display them on the same display interface, or display them in sequence. Further, it may display any one of the plurality of second images and switch to the others in turn based on a switching operation. The switching operation may be rotating the handset as shown in fig. 5, so that other images among the plurality of second images are displayed; it may also be sliding on the operation interface of the mobile terminal as shown in fig. 6, thereby switching the displayed second image to another of the plurality of second images.
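Either switching operation (rotate or slide) amounts to stepping an index through the list of second images. A wrap-around sketch, with the direction convention (+1 for one swipe direction, -1 for the other) assumed rather than specified by the patent:

```python
def switch_image(current_index, num_images, direction):
    """Step to the next/previous second image, wrapping around at either end."""
    return (current_index + direction) % num_images

idx = 0
idx = switch_image(idx, 4, +1)   # swipe one way  -> image 1
idx = switch_image(idx, 4, -1)   # swipe back     -> image 0
idx = switch_image(idx, 4, -1)   # wraps around   -> last image, 3
print(idx)
```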
In the image processing method provided by another embodiment of the present application, the mobile terminal shoots a target scene from different viewing angles through a plurality of cameras simultaneously to obtain a plurality of first images; acquires one or more objects in the target scene from the plurality of first images based on a preset object condition; highlights the one or more objects; receives a selection operation, determines the target object from the one or more objects based on the selection operation, and acquires a first object image corresponding to the target object; acquires a parameter configuration instruction, updates parameter information of the target object based on the parameter configuration instruction, and acquires a second object image corresponding to the target object, the viewing angles of the first object image and the second object image being different; and updates the first object image of any one of the plurality of first images to the second object image to obtain a plurality of second images. Compared with the image processing method shown in fig. 2, this embodiment further acquires one or more objects in the target scene from the first images based on a preset condition and highlights them, so that the user can select the target object from the one or more objects more clearly.
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating an image processing method according to still another embodiment of the present application. The image processing method is applied to the mobile terminal, and will be described in detail with respect to the flow shown in fig. 7, and the method may specifically include the following steps:
step S310: the mobile terminal shoots a target scene at different visual angles through the plurality of cameras simultaneously to obtain a plurality of first images.
Step S320: one or more objects in the target scene are acquired based on the plurality of first images, a target object is determined from the one or more objects, and a first object image corresponding to the target object is acquired.
For the detailed description of steps S310 to S320, please refer to steps S110 to S120, which are not described herein again.
Step S330: and performing image content compensation on the target object based on a preset algorithm and the plurality of first images to obtain a target object image, wherein the view angles corresponding to the target object image comprise view angles corresponding to the plurality of first images and view angles other than the view angles corresponding to the plurality of first images.
In this embodiment, the mobile terminal simultaneously captures a plurality of first images through a plurality of cameras, acquires one or more objects in the target scene based on the plurality of first images, determines the target object, and acquires the first object image corresponding to the target object. The first object image only covers part of the viewing angles of the target object, namely the viewing angles corresponding to the plurality of cameras, and cannot cover all viewing angles. For example, when the mobile terminal shoots the front of the target object through the plurality of cameras, images of the front and part of the sides of the target object can be obtained, while the image of the back of the target object is unknown. Therefore, the image of the target object at viewing angles other than those corresponding to the plurality of first images can be simulated through a preset algorithm; that is, image content compensation can be performed for those viewing angles through the preset algorithm, so as to obtain a target object image covering both the viewing angles corresponding to the plurality of first images and viewing angles beyond them.
Further, in this embodiment, step S330 may include the following steps:
step S331A: and extracting partial characteristic parameters of the target object from the plurality of first images.
In this embodiment, after the target object is determined from the plurality of first images, partial feature parameters of the target object may be extracted from the first images, where the feature parameters may include brightness, edges, textures, colors, and the like. To extract the partial feature parameters, the region where the target object is located may be located and its image converted to grayscale, the image divided into small connected regions, direction histograms of the gradients or edges of all pixels in each connected region collected, and the histograms finally combined into a feature vector. As one mode, a neural network may also be used: the images of the regions where the target object is located in the plurality of first images are input into the neural network, which outputs the partial feature parameters of the target object. The partial feature parameters may also be extracted in other ways based on the plurality of first images, which is not limited herein.
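The "direction histograms of gradients" step resembles a HOG-style descriptor. Below is a minimal NumPy sketch over one grayscale patch; the bin count and magnitude weighting are choices made here for illustration, not specified by the patent.

```python
import numpy as np

def orientation_histogram(gray, bins=9):
    """Histogram of gradient directions over a grayscale region, HOG-style."""
    gy, gx = np.gradient(gray.astype(float))           # row (y) and column (x) gradients
    angles = np.rad2deg(np.arctan2(gy, gx)) % 180      # unsigned orientation in [0, 180)
    magnitude = np.hypot(gx, gy)                       # weight votes by gradient strength
    hist, _ = np.histogram(angles, bins=bins, range=(0, 180), weights=magnitude)
    return hist

# A vertical edge: gradients point horizontally, so the 0-degree bin dominates.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
hist = orientation_histogram(patch)
print(hist.argmax())   # -> 0
```

Per-region histograms like this one would then be concatenated into the feature vector described above.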
Step S332A: and identifying the object type of the target object according to the partial characteristic parameters, and inquiring the overall characteristic parameters of the target object based on the object type.
In this embodiment, partial feature parameters of the target object may be extracted based on the plurality of first images; if image content compensation is to be performed on the target object, its overall feature parameters need to be acquired. The mobile terminal can identify the object type of the target object by comparing the feature parameters based on an image recognition technology, and can then query a corresponding server or the network for the overall feature parameters of the target object based on the object type. For example, when the target object is an automobile, partial feature parameters of the automobile, such as its color and outline, may be extracted from the plurality of first images, and the type of the automobile identified based on them; if the automobile is recognized as an "XX"-brand automobile, the overall feature parameters of the automobile may be obtained by querying the brand's official website.
Step S333A: and performing image content compensation on the target object based on the overall characteristic parameters to obtain the target object image.
In this embodiment, the correspondence between the partial feature parameters of the target object and the first images is determined based on the plurality of first images and the partial feature parameters of the target object in the plurality of first images, and after the overall feature parameters of the target object are obtained, the target object image may be determined based on the correspondence with the first images.
Further, in this embodiment, step S330 may further include the following steps:
step S331B: acquiring the parallax of the plurality of first images.
In this embodiment, two of the plurality of first images may be acquired and matched against each other to establish the correspondence between them, and the parallax (disparity) of the first images may be calculated based on the principle of triangulation. Further, the matching cost of the first images may be calculated with a stereo matching algorithm and then aggregated; after cost aggregation is completed, the point with the optimal aggregated matching cost within a certain range is selected as the corresponding matching point, and the disparity corresponding to that matching point is taken as the disparity of the first images. The specific disparity calculation method is not limited herein.
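As an illustration of the cost-based matching described above, the following is a minimal sketch of scanline block matching with a sum-of-absolute-differences (SAD) cost and winner-takes-all selection, together with the triangulation relation Z = f·B/d. The window size, disparity range, and function names are illustrative assumptions, not the patent's specific algorithm:

```python
import numpy as np

def disparity_row(left_row, right_row, max_disp=16, win=2):
    """Per-pixel disparity along one rectified scanline:
    SAD matching cost + winner-takes-all over the disparity range."""
    n = len(left_row)
    disp = np.zeros(n, dtype=int)
    for x in range(win, n - win):
        patch = left_row[x - win:x + win + 1]
        best, best_cost = 0, np.inf
        for d in range(0, min(max_disp, x - win) + 1):
            cand = right_row[x - d - win:x - d + win + 1]
            cost = np.abs(patch - cand).sum()   # SAD matching cost
            if cost < best_cost:                # keep the optimal cost
                best, best_cost = d, cost
        disp[x] = best
    return disp

def depth_from_disparity(d, focal_px, baseline_m):
    """Triangulation principle: depth Z = f * B / d for disparity d > 0."""
    return focal_px * baseline_m / d
```

Real stereo pipelines add cost aggregation over a neighborhood (as the embodiment notes) before the winner-takes-all step, which suppresses ambiguous matches.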
Step S332B: and acquiring angle information of a part to be subjected to image content compensation in the target object based on different parallaxes of the plurality of first images.
In this embodiment, the mobile terminal may determine, based on the different parallaxes of the plurality of first images, the parallax of the image corresponding to the target object at viewing angles other than those corresponding to the plurality of first images, and may determine, based on that parallax, the angle information of the portion of the target object to be subjected to image content compensation.
Step S333B: and matching corresponding image content based on the angle information and compensating the target object based on the image content to obtain the target object image.
In this embodiment, based on the disparities of the first images and the angle information of the portion of the target object to be subjected to image content compensation, matching points are searched for on the virtual view corresponding to the angle information, the pixel content corresponding to each matching point is obtained, and that content is moved to the matching point, thereby obtaining the virtual view, that is, the target object image.
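The pixel-shifting synthesis described above can be sketched for one scanline as a simple forward warp: each source pixel is moved by a fraction of its disparity toward the virtual viewpoint. The first-writer-wins occlusion handling and the parameter names here are simplifying assumptions (real view synthesis resolves occlusions by depth order and fills holes from the other view):

```python
import numpy as np

def render_virtual_row(src_row, disp_row, alpha):
    """Forward-warp one scanline to a virtual viewpoint at fractional
    baseline position alpha (0 = source view, 1 = second view):
    each source pixel shifts by alpha * disparity; unfilled holes keep 0."""
    n = len(src_row)
    out = np.zeros_like(src_row)
    filled = np.zeros(n, dtype=bool)
    for x in range(n):
        xv = x - int(round(alpha * disp_row[x]))  # shifted target position
        if 0 <= xv < n and not filled[xv]:        # first writer wins
            out[xv] = src_row[x]
            filled[xv] = True
    return out
```

With a constant disparity of 2 and alpha = 0.5, every pixel shifts left by one position, leaving a one-pixel hole at the right edge.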
In the image processing method provided by another embodiment of the present application, the mobile terminal shoots a target scene at different viewing angles through a plurality of cameras at the same time to obtain a plurality of first images, obtains one or more objects in the target scene based on the plurality of first images, determines a target object from the one or more objects, obtains a first object image corresponding to the target object, and performs image content compensation on the target object based on a preset algorithm and the plurality of first images to obtain a target object image, where the viewing angles corresponding to the target object image include the viewing angles corresponding to the plurality of first images and viewing angles other than those corresponding to the plurality of first images. In this embodiment, image content compensation is performed on the target object through the preset algorithm to obtain the target object image, so that images of the target object can be obtained at angles other than those at which the first images were captured.
Referring to fig. 8, fig. 8 is a block diagram illustrating an image processing apparatus 200 according to an embodiment of the present disclosure. The image processing apparatus 200 is applied to the mobile terminal described above. As shown in the block diagram of fig. 8, the image processing apparatus 200 includes: a shooting module 210, an object obtaining module 220, an instruction obtaining module 230, and an updating module 240, wherein:
the shooting module 210 is configured to shoot a target scene at different viewing angles through the multiple cameras by the mobile terminal, so as to obtain multiple first images.
An object obtaining module 220, configured to obtain one or more objects in the target scene based on the plurality of first images, determine a target object from the one or more objects, and obtain a first object image corresponding to the target object.
Further, the object obtaining module 220 includes: the device comprises an acquisition submodule, a display submodule and a receiving submodule, wherein:
an obtaining sub-module, configured to obtain one or more objects in the target scene from the plurality of first images based on a preset object condition.
A display sub-module for highlighting the one or more objects.
The receiving submodule is used for receiving a selection operation, determining the target object from the one or more objects based on the selection operation, and acquiring a first object image corresponding to the target object.
The instruction obtaining module 230 is configured to obtain a parameter configuration instruction, update parameter information of the target object based on the parameter configuration instruction, and obtain a second object image corresponding to the target object, where viewing angles corresponding to the first object image and the second object image are different.
Further, the instruction obtaining module 230 further includes: the system comprises an instruction acquisition submodule and an image acquisition submodule, wherein:
and the instruction acquisition submodule is used for acquiring a parameter configuration instruction and acquiring a target rotation angle based on the parameter configuration instruction.
And the image obtaining submodule is used for obtaining a plurality of second object images corresponding to the target object in the rotating process based on the target rotating angle, and the corresponding visual angles of the plurality of second object images are different.
An updating module 240, configured to update a first object image of any one of the plurality of first images to the second object image, so as to obtain a plurality of second images.
Further, the image processing apparatus 200 may further include: first display module and second display module, wherein:
and the first display module is used for displaying the plurality of second images.
And the second display module is used for displaying any one of the second images and rotationally displaying other second images in the second images based on switching operation.
Further, the image processing apparatus 200 may further include: an image content compensation module, wherein:
and the image content compensation module is used for performing image content compensation on the target object based on a preset algorithm and the plurality of first images to obtain a target object image, wherein the visual angles corresponding to the target object image comprise the visual angles corresponding to the plurality of first images and the visual angles other than the visual angles corresponding to the plurality of first images.
Further, the image content compensation module may further include: an extraction submodule, an identification submodule and a first compensation submodule, wherein:
and the extraction sub-module is used for extracting partial characteristic parameters of the target object from the plurality of first images.
And the identification submodule is used for identifying the object type of the target object according to the partial characteristic parameters and inquiring the overall characteristic parameters of the target object based on the object type.
And the first compensation submodule is used for carrying out image content compensation on the target object based on the overall characteristic parameters to obtain the target object image.
Further, the image content compensation module may further include: the parallax error acquisition submodule, the angle information acquisition submodule and the second compensation submodule, wherein:
and the parallax acquisition sub-module is used for acquiring the parallaxes of the plurality of first images.
And the angle information acquisition submodule is used for acquiring the angle information of the part to be subjected to image content compensation in the target object based on different parallaxes of the plurality of first images.
And the second compensation submodule is used for matching corresponding image content based on the angle information and compensating the target object based on the image content to obtain the target object image.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 9, a block diagram of a mobile terminal 100 according to an embodiment of the present disclosure is shown. The mobile terminal 100 may be a smart phone, a tablet computer, an electronic book, or another mobile terminal capable of running an application. The mobile terminal 100 in the present application may include one or more of the following components: a processor 110, a memory 120, a screen 130, a camera 140, and one or more application programs, wherein the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects various components throughout the mobile terminal 100 using various interfaces and lines, and performs various functions of the mobile terminal 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used for rendering and drawing display content; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 110, but may instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The stored data area may also store data created by the mobile terminal 100 in use, such as a phonebook, audio-video data, chat log data, and the like.
Further, the screen 130 may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The screen 130 is used to display information input by a user, information provided to the user, and various graphic user interfaces of the mobile terminal, which may be composed of graphics, text, icons, numbers, video, and any combination thereof.
The camera 140 may be fixedly disposed on the mobile terminal 100, may be slidably disposed on the mobile terminal 100, or may be rotatably disposed on the mobile terminal 100, which is not limited herein.
Referring to fig. 10, a block diagram of a computer-readable storage medium according to an embodiment of the present disclosure is shown. The computer-readable storage medium 300 has stored therein program code that can be called by a processor to execute the methods described in the foregoing method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable and programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 300 includes a non-volatile computer-readable storage medium. The computer readable storage medium 300 has storage space for program code 310 for performing any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 310 may be compressed, for example, in a suitable form.
To sum up, according to the image processing method, the image processing apparatus, the mobile terminal, and the storage medium provided in the embodiments of the present application, the mobile terminal captures a target scene through the plurality of cameras at different viewing angles at the same time to obtain a plurality of first images, obtains one or more objects in the target scene based on the plurality of first images, determines a target object from the one or more objects, obtains a first object image corresponding to the target object, obtains a parameter configuration instruction, updates parameter information of the target object based on the parameter configuration instruction, obtains a second object image corresponding to the target object, and updates the first object image of any one of the plurality of first images to the second object image, obtaining a plurality of second images. According to this scheme, the target scene is shot by the plurality of cameras at different viewing angles, so that images of the target object at different angles can be obtained, thereby reducing user operations, providing images at more viewing angles, improving the display effect, and improving user experience.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (7)

1. An image processing method is applied to a mobile terminal, wherein the mobile terminal comprises a plurality of cameras, and the visual angles of the cameras are different, and the method comprises the following steps:
the mobile terminal shoots a target scene at different visual angles through the plurality of cameras at the same time to obtain a plurality of first images;
acquiring one or more objects in the target scene based on the plurality of first images, determining a target object from the one or more objects, and acquiring a first object image corresponding to the target object;
extracting partial characteristic parameters of the target object from the plurality of first images;
identifying the object type of the target object according to the partial characteristic parameters, acquiring a brand corresponding to the target object based on the object type, and inquiring the overall characteristic parameters of the target object in an official website of the brand according to the brand;
performing image content compensation on the target object based on the overall characteristic parameters to obtain a target object image, wherein the view angles corresponding to the target object image comprise view angles corresponding to the plurality of first images and view angles other than the view angles corresponding to the plurality of first images;
acquiring a parameter configuration instruction, updating parameter information of the target object based on the parameter configuration instruction, and acquiring a second object image corresponding to the target object, wherein the first object image and the second object image have different corresponding visual angles, and the parameter configuration instruction indicates that the angle of the target object is configured;
and updating a first object image of any one of the plurality of first images to the second object image to obtain a plurality of second images.
2. The method according to claim 1, wherein the obtaining the parameter configuration instruction, updating the parameter information of the target object based on the parameter configuration instruction, and obtaining a second object image corresponding to the target object comprises:
acquiring a parameter configuration instruction, and acquiring a target rotation angle based on the parameter configuration instruction;
and obtaining a plurality of second object images corresponding to the target object in the rotating process based on the target rotating angle, wherein the corresponding visual angles of the second object images are different.
3. The method according to claim 1, wherein after the updating the first object image of any one of the plurality of first images to the second object image and obtaining a plurality of second images, further comprising:
displaying the plurality of second images; or
Displaying any one of the plurality of second images, and rotationally displaying the other of the plurality of second images based on a switching operation.
4. The method of claim 1, wherein the acquiring one or more objects in the target scene based on the plurality of first images, determining a target object from the one or more objects, and acquiring a first object image corresponding to the target object comprises:
acquiring one or more objects in the target scene from the plurality of first images based on a preset object condition;
highlighting the one or more objects;
receiving a selection operation, determining the target object from the one or more objects based on the selection operation, and acquiring a first object image corresponding to the target object.
5. An image processing apparatus applied to a mobile terminal including a plurality of cameras having different viewing angles, the apparatus comprising:
the shooting module is used for shooting a target scene at different visual angles through the plurality of cameras by the mobile terminal to obtain a plurality of first images;
an object obtaining module, configured to obtain one or more objects in the target scene based on the plurality of first images, determine a target object from the one or more objects, and obtain a first object image corresponding to the target object;
an extraction sub-module, configured to extract partial feature parameters of the target object from the plurality of first images;
the identification submodule is used for identifying the object type of the target object according to the partial characteristic parameters, acquiring a brand corresponding to the target object based on the object type, and inquiring the integral characteristic parameters of the target object in an official website of the brand according to the brand;
a first compensation submodule, configured to perform image content compensation on the target object based on the global characteristic parameter to obtain a target object image, where a viewing angle corresponding to the target object image includes viewing angles corresponding to the plurality of first images and viewing angles other than the viewing angles corresponding to the plurality of first images
The instruction acquisition module is used for acquiring a parameter configuration instruction, updating parameter information of the target object based on the parameter configuration instruction, and acquiring a second object image corresponding to the target object, wherein the first object image and the second object image have different corresponding visual angles, and the parameter configuration instruction indicates that the angle of the target object is configured;
and the updating module is used for updating the first object image of any one of the plurality of first images into the second object image to obtain a plurality of second images.
6. A mobile terminal comprising a memory and a processor, the memory coupled to the processor, the memory storing instructions that, when executed by the processor, the processor performs the method of any of claims 1-4.
7. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 4.
CN201910579229.8A 2019-06-28 2019-06-28 Image processing method, image processing device, mobile terminal and storage medium Active CN110266926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910579229.8A CN110266926B (en) 2019-06-28 2019-06-28 Image processing method, image processing device, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110266926A CN110266926A (en) 2019-09-20
CN110266926B true CN110266926B (en) 2021-08-17

Family

ID=67923236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910579229.8A Active CN110266926B (en) 2019-06-28 2019-06-28 Image processing method, image processing device, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110266926B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111010599B (en) * 2019-12-18 2022-04-12 浙江大华技术股份有限公司 Method and device for processing multi-scene video stream and computer equipment
SG10201913955VA (en) 2019-12-31 2021-07-29 Sensetime Int Pte Ltd Image recognition method and apparatus, and computer-readable storage medium
CN112633300B (en) * 2020-12-30 2023-06-20 中国人民解放军国防科技大学 Multi-dimensional interactive image feature parameter extraction and matching method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080057151A (en) * 2006-12-19 2008-06-24 이문기 Rotating 3d display
CN101321299A (en) * 2007-06-04 2008-12-10 华为技术有限公司 Parallax generation method, generation cell and three-dimensional video generation method and device
JP2008312058A (en) * 2007-06-15 2008-12-25 Fujifilm Corp Imaging apparatus, imaging method, and program
CN101651841A (en) * 2008-08-13 2010-02-17 华为技术有限公司 Method, system and equipment for realizing stereo video communication
CN102222357A (en) * 2010-04-15 2011-10-19 温州大学 Foot-shaped three-dimensional surface reconstruction method based on image segmentation and grid subdivision
CN102780873A (en) * 2011-05-13 2012-11-14 索尼公司 Image processing apparatus and method
CN103345736A (en) * 2013-05-28 2013-10-09 天津大学 Virtual viewpoint rendering method
CN103763543A (en) * 2014-02-13 2014-04-30 北京大学 Collecting method of resultant hologram
CN106257909A (en) * 2015-06-16 2016-12-28 Lg电子株式会社 Mobile terminal and control method thereof
CN106464847A (en) * 2014-06-20 2017-02-22 歌乐株式会社 Image synthesis system, image synthesis device therefor, and image synthesis method
CN106998430A (en) * 2017-04-28 2017-08-01 北京瑞盖科技股份有限公司 360 degree of video playback methods based on polyphaser

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10778905B2 (en) * 2011-06-01 2020-09-15 ORB Reality LLC Surround video recording

Also Published As

Publication number Publication date
CN110266926A (en) 2019-09-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant