CN117336458A - Image processing method, device, equipment and medium - Google Patents



Publication number
CN117336458A
Authority
CN
China
Prior art keywords
image
input
images
shooting
shooting parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311258808.5A
Other languages
Chinese (zh)
Inventor
张弘
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority claimed from CN202311258808.5A
Publication of CN117336458A


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 — Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/344 — Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H04N 13/156 — Mixing image signals
    • H04N 13/204 — Image signal generators using stereoscopic image cameras
    • H04N 13/296 — Synchronisation thereof; Control thereof
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/62 — Control of parameters via user interfaces
    • H04N 23/67 — Focus control based on electronic image sensor signals
    • H04N 23/951 — Computational photography by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 23/959 — Computational photography for extended depth of field imaging by adjusting depth of field during image capture

Abstract

The application discloses an image processing method, apparatus, device, and medium, belonging to the technical field of electronic devices. The image processing method, applied to a head-mounted display device, comprises the following steps: receiving a first input; adjusting a shooting parameter in response to the first input, wherein the shooting parameter comprises a focal length or a focus point; receiving a second input; and, in response to the second input, displaying the image captured with the shooting parameter at the display position corresponding to that shooting parameter.

Description

Image processing method, device, equipment and medium
Technical Field
The application belongs to the technical field of electronics, and particularly relates to an image processing method, device, equipment and medium.
Background
A head-mounted display device is a display device worn on a user's head, for example a Virtual Reality (VR) device, an Augmented Reality (AR) device, or a Mixed Reality (MR) device. A VR device blocks the user's sight and hearing of the outside world and guides the user to feel immersed in a virtual environment. An AR device fuses virtual information with the real world, applying simulated virtual information to the real world to achieve an "enhancement" of it. An MR device introduces real-scene information into the virtual environment and sets up an interactive feedback loop between the virtual world, the real world, and the user to strengthen the realism of the user experience.
In the related art, when displaying an image captured by a camera, a head-mounted display device generally displays the image directly and cannot display it stereoscopically.
Disclosure of Invention
An embodiment of the application aims to provide an image processing method, apparatus, device, and medium, which can solve the problem that a captured image cannot be displayed stereoscopically.
In a first aspect, an embodiment of the present application provides an image processing method, applied to a head-mounted display device, where the method includes:
receiving a first input;
adjusting a shooting parameter in response to the first input, wherein the shooting parameter comprises a focal length or a focusing point;
receiving a second input;
and, in response to the second input, displaying the image captured with the shooting parameter at the display position corresponding to the shooting parameter.
In a second aspect, an embodiment of the present application provides an image processing apparatus, applied to a head-mounted display device, including:
a first receiving module for receiving a first input;
an adjusting module, configured to adjust a shooting parameter in response to the first input, wherein the shooting parameter comprises a focal length or a focus point;
a second receiving module for receiving a second input;
and a first display module, configured to display, in response to the second input, the image captured with the shooting parameter at the display position corresponding to the shooting parameter.
In a third aspect, embodiments of the present application provide a head mounted display device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the image processing method as provided in the first aspect of embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method as provided in the first aspect of embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the image processing method as provided in the first aspect of the embodiments of the present application.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the steps of the image processing method as provided in the first aspect of embodiments of the present application.
In an embodiment of the application, a head-mounted display device receives a first input; adjusts a shooting parameter in response to the first input, wherein the shooting parameter comprises a focal length or a focus point; receives a second input; and, in response to the second input, displays the image captured with the shooting parameter at the display position corresponding to the shooting parameter. In this way, a captured image can be displayed at the display position corresponding to the shooting parameter used when the head-mounted display device captured it, so that captured images can be displayed stereoscopically.
Drawings
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of adjusting shooting parameters according to an embodiment of the present application;
fig. 3 is a schematic diagram of triggered photographing and image display according to an embodiment of the present application;
FIG. 4 is a first schematic view of a display image provided in an embodiment of the present application;
FIG. 5 is a second schematic view of a display image provided in an embodiment of the present application;
FIG. 6 is a schematic illustration of moving and scaling images provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of image switching and selected images provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of image fusion provided by an embodiment of the present application;
FIG. 9 is a third schematic illustration of a display image provided in an embodiment of the present application;
fig. 10 is a schematic structural view of an image processing apparatus provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of a head-mounted display device provided in an embodiment of the present application;
fig. 12 is a schematic diagram of a hardware structure of a head-mounted display device implementing an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects, not necessarily to describe a particular sequence or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method, device, equipment and medium provided by the embodiment of the application are described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
It should be noted that the image processing method and apparatus provided in the embodiments of the present application are applied to a head-mounted display device, where the head-mounted display device in the embodiments of the present application includes, but is not limited to, a VR device (e.g., VR glasses), an AR device (e.g., AR glasses), an MR device (e.g., MR glasses), and so on.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application. The image processing method may include:
step 101: receiving a first input;
In some possible implementations of embodiments of the present application, the first input is used to adjust a shooting parameter of the head-mounted display device. The first input includes, but is not limited to: a touch input made on the head-mounted display device with a finger, a stylus, or another touch device; a voice instruction input; or a specific gesture input. The touch input includes, but is not limited to, a click input or a sliding input, where the click input may be a single click, a double click, or any number of clicks, as well as a long press or a short press. The specific gesture input may be any one of a single-tap gesture, a swipe gesture, a drag gesture, a long-press gesture, an area-change gesture, a double-press gesture, or a double-tap gesture. The first input may be set and adaptively modified according to actual requirements.
In some possible implementations of embodiments of the present application, a component (e.g., knob, button, slide, etc.) for adjusting a shooting parameter of the head-mounted display device may be mounted on a frame of the head-mounted display device, through which the head-mounted display device may receive a first input for adjusting the shooting parameter (e.g., a user rotating a knob for adjusting the shooting parameter of the head-mounted display device, a user pressing a button for adjusting the shooting parameter of the head-mounted display device, etc.), based on which the shooting parameter is adjusted.
Step 102: adjusting a shooting parameter of the head-mounted display device in response to the first input, wherein the shooting parameter includes a focal length or a focus point;
Illustratively, the user presses a thumb against the frame of the VR glasses and slides it back and forth; during the sliding, the focal length increases from small to large, and the angle of view, and with it the visible range, narrows from large to small. As shown in fig. 2, fig. 2 is a schematic diagram of adjusting shooting parameters according to an embodiment of the present application.
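The application does not specify the transfer function between the slide gesture and the focal length; a minimal sketch, assuming a normalised slide position and a linear mapping (the function name and parameter ranges are hypothetical):

```python
def focal_from_slide(slide_pos, f_min=10.0, f_max=100.0):
    """Map a normalised slide position (0.0 = back, 1.0 = front) on the
    glasses frame to a focal length in millimetres. Sliding forward
    increases the focal length, which narrows the field of view.
    f_min/f_max are illustrative lens limits, not from the application."""
    slide_pos = min(max(slide_pos, 0.0), 1.0)  # clamp to the frame
    return f_min + slide_pos * (f_max - f_min)

print(focal_from_slide(0.0))  # shortest focal length, widest view
print(focal_from_slide(1.0))  # longest focal length, narrowest view
```

A real device would feed the touch sensor's reported position into such a mapping on every slide event.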
Step 103: receiving a second input;
In some possible implementations of embodiments of the present application, the second input is used to instruct the head-mounted display device to take a photograph and display the captured image. The second input includes, but is not limited to: a touch input made on the head-mounted display device with a finger, a stylus, or another touch device; a voice instruction input; or a specific gesture input. The touch input includes, but is not limited to, a click input or a sliding input, where the click input may be a single click, a double click, or any number of clicks, as well as a long press or a short press. The specific gesture input may be any one of a single-tap gesture, a swipe gesture, a drag gesture, a long-press gesture, an area-change gesture, a double-press gesture, or a double-tap gesture. The second input may be set and adaptively modified according to actual requirements.
In some possible implementations of embodiments of the present application, a component (e.g., a button) for photographing may be mounted on a frame of the head-mounted display device, and when the button is pressed by a user, the head-mounted display device receives a second input, photographs based on the second input, and displays the photographed image.
Illustratively, the user presses a thumb against the frame of the VR glasses and slides it from back to front, and the shooting parameter changes during the sliding. When scan photographing is needed, the user taps the glasses frame with the thumb to trigger photographing and image display, and the captured picture can be stored in association with the corresponding shooting parameter. As shown in fig. 3, fig. 3 is a schematic diagram of triggering photographing and image display according to an embodiment of the present application.
Step 104: and displaying the image shot by the shooting parameters at the display positions corresponding to the shooting parameters in response to the second input.
It should be noted that, in the embodiment of the present application, the display position refers to a distance from the projection screen to the head-mounted display device, where the projection screen may be a virtual area displayed by an image, or a virtual interface displayed by the head-mounted display device, and the image is displayed on the virtual interface.
In an embodiment of the application, a head mounted display device receives a first input; adjusting a shooting parameter in response to the first input, wherein the shooting parameter comprises a focal length or a focusing point; receiving a second input; and displaying the image shot by the shooting parameters at the display positions corresponding to the shooting parameters in response to the second input. In this way, the captured image can be displayed at the display position corresponding to the shooting parameter used when the head-mounted display device shoots, and the captured image can be displayed in a stereoscopic manner.
In some possible implementations of the embodiments of the present application, by adjusting the shooting parameter multiple times, images corresponding to different shooting parameters can be captured, and on that basis the images corresponding to the different shooting parameters can be displayed simultaneously. The image processing method provided by the embodiments of the present application may further include: when the shooting parameter has been adjusted multiple times, displaying a first image captured with a first shooting parameter at a first display position corresponding to the first shooting parameter, where the first shooting parameter is any one of the plurality of shooting parameters respectively corresponding to the multiple adjustments.
In some possible implementations of the embodiments of the present application, when a user takes a plurality of photographs of different focal lengths, the photographs of different focal lengths may be presented at different display positions according to the depth of the focal length.
In some possible implementations of the embodiments of the present application, a display range corresponding to the display positions may be preset. When a plurality of images captured with a plurality of shooting parameters is displayed at the display positions corresponding to those parameters, the nearest display position in the range may be assigned to the smallest shooting parameter and the farthest display position to the largest shooting parameter, with the remaining shooting parameters mapped proportionally to display positions within the range.
Taking the focal length as an example shooting parameter: the user adjusts the focal length to 10 millimeters (mm), 20 mm, and 40 mm, respectively, and the display range set by the user is 1 meter (m) to 4 m. Then the image taken at 10 mm is displayed 1 meter in front of the head-mounted display device, the image taken at 20 mm is displayed 2 meters in front, and the image taken at 40 mm is displayed 4 meters in front.
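The proportional mapping in this example can be sketched as a linear interpolation from the focal-length range onto the configured display range (the function name and default values are illustrative, not from the application):

```python
def display_distance(focal_mm, f_min=10.0, f_max=40.0, d_min=1.0, d_max=4.0):
    """Linearly map a focal length (mm) onto the display range (m):
    the smallest focal length lands at the nearest display position,
    the largest at the farthest, and the rest in proportion."""
    t = (focal_mm - f_min) / (f_max - f_min)
    return d_min + t * (d_max - d_min)

# Reproduces the example: 10 mm -> 1 m, 20 mm -> 2 m, 40 mm -> 4 m
for f in (10, 20, 40):
    print(f, "mm ->", display_distance(f), "m")
```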
In some possible implementations of the embodiments of the present application, when displaying a plurality of images captured at a plurality of focal lengths, the angles of view differ: the larger the focal length, the smaller the angle of view. The plurality of images may therefore be image-aligned and their sizes adjusted before display.
Illustratively, as shown in fig. 4, fig. 4 is a first schematic diagram of a display image provided in an embodiment of the present application. In fig. 4, 5 images taken at 5 focal lengths are displayed.
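One way to realise the alignment and resizing described above, assuming a simple pinhole model in which the field of view scales inversely with the focal length (the function and its pixel-space interface are an illustrative assumption, not the claimed method):

```python
def crop_to_match(width, height, f_wide, f_narrow):
    """Return the centred crop (x0, y0, w, h), in pixels, of an image
    taken at focal length f_wide that covers the same field of view as
    an image taken at f_narrow (f_narrow >= f_wide). After cropping,
    the region can be upscaled to the narrow image's resolution."""
    scale = f_wide / f_narrow          # fraction of the wide frame kept
    w, h = int(width * scale), int(height * scale)
    x0, y0 = (width - w) // 2, (height - h) // 2
    return x0, y0, w, h

# A 4000x3000 frame at 10 mm cropped to match a 20 mm frame:
print(crop_to_match(4000, 3000, 10, 20))  # -> (1000, 750, 2000, 1500)
```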
In the embodiment of the application, images obtained by shooting with different shooting parameters can be presented at different positions according to the shooting parameters.
In some possible implementations of the embodiments of the present application, displaying, at a first display position corresponding to a first shooting parameter, a first image obtained by shooting with the first shooting parameter may include: and displaying the first image shot by the first shooting parameters at a first display position corresponding to the first shooting parameters with a first transparency.
In some possible implementations of embodiments of the present application, the first transparency is less than 100% and may be set and adaptively modified according to actual requirements, for example, the first transparency is set to 50%.
In the embodiment of the application, a user can browse a plurality of images shot by different shooting parameters at one time.
In some possible implementations of the embodiments of the present application, the image processing method provided by the embodiments of the present application may further include: receiving a third input; in response to the third input, displaying a second image corresponding to the third input in a first display mode, and displaying a third image in a second display mode, wherein the second image is any one of a plurality of images obtained through shooting with a plurality of shooting parameters, the third image is an image except the second image in the plurality of images, and the first display mode and the second display mode are different.
In some possible implementations of the embodiments of the present application, the third input is used to trigger displaying, in the first display mode, the second image corresponding to the third input. The third input includes, but is not limited to: a touch input made on the head-mounted display device with a finger, a stylus, or another touch device; a voice instruction input; or a specific gesture input. The touch input includes, but is not limited to, a click input or a sliding input, where the click input may be a single click, a double click, or any number of clicks, as well as a long press or a short press. The specific gesture input may be any one of a single-tap gesture, a swipe gesture, a drag gesture, a long-press gesture, an area-change gesture, a double-press gesture, or a double-tap gesture. The third input may be set and adaptively modified according to actual requirements.
In some possible implementations of the embodiments of the present application, when displaying a plurality of images captured with a plurality of capture parameters, a user may select an image that needs to be displayed in the first display mode through a knob or button or slide bar, etc. on the head-mounted display device.
In some possible implementations of embodiments of the present application, when multiple images captured with multiple shooting parameters are displayed, the user may press the left thumb under a support frame of the head-mounted display device (e.g., the frame of VR, MR, or AR glasses) and place the index finger above it, gripping the frame with two fingers. The image to be displayed in the first display mode is then selected by sliding the thumb back and forth along the support frame.
In some possible implementations of the embodiments of the present application, the first display mode includes, but is not limited to: a transparency of 100%, thickened image border lines, or an added border color. The second display mode includes, but is not limited to: no image border lines, no added color, and the like. The first and second display modes may be set and adaptively modified according to actual requirements.
In the embodiment of the present application, the third image is displayed in the second display mode while the second image is displayed in the first, different display mode, so that the second image can be distinguished from the other images.
In some possible implementations of embodiments of the present application, in order to make the second image more prominent, the transparency of the third image may also be reduced when the transparency of the second image is increased. Illustratively, as shown in fig. 5, fig. 5 is a second schematic diagram of a display image provided in an embodiment of the present application.
In some possible implementations of embodiments of the present application, when the user selects an image to be displayed in the first display mode, the selected image may also be moved or zoomed by sliding the index finger left and right on the support frame of the head-mounted display device. Illustratively, as shown in fig. 6, fig. 6 is a schematic diagram of moving and scaling images provided by embodiments of the present application.
In some possible implementations of the embodiments of the present application, the image processing method provided by the embodiments of the present application may further include: receiving a fourth input; and, in response to the fourth input, fusing M images corresponding to the fourth input to obtain a fourth image, where the M images are among the plurality of images captured with the plurality of shooting parameters and M is a positive integer greater than or equal to 2.
In some possible implementations of the embodiments of the present application, the fourth input is used to trigger fusing the M images corresponding to the fourth input. The fourth input includes, but is not limited to: a touch input made on the head-mounted display device with a finger, a stylus, or another touch device; a voice instruction input; or a specific gesture input. The touch input includes, but is not limited to, a click input or a sliding input, where the click input may be a single click, a double click, or any number of clicks, as well as a long press or a short press. The specific gesture input may be any one of a single-tap gesture, a swipe gesture, a drag gesture, a long-press gesture, an area-change gesture, a double-press gesture, or a double-tap gesture. The fourth input may be set and adaptively modified according to actual requirements.
For example, assume that 5 images are taken at 5 focal lengths, the 5 images being image a, image B, image C, image D, and image E, respectively. The user selects image a, image C, and image E through a knob or button or slide on the frame of the head mounted display device, and then fuses image a, image C, and image E.
In some possible implementations of embodiments of the present application, the user may press the left thumb under a support frame of the head-mounted display device (e.g., the frame of VR, MR, or AR glasses) and place the index finger above it, gripping the frame with two fingers. The images are switched by sliding the thumb back and forth along the support frame, and the support frame is then tapped with the index finger to select the images to be fused. Illustratively, as shown in fig. 7, fig. 7 is a schematic diagram of switching and selecting images provided in an embodiment of the present application.
In some possible implementations of the embodiments of the present application, fusing the M images corresponding to the fourth input to obtain the fourth image includes: performing image alignment on the M images; adjusting the sizes of the M aligned images so that they are the same; dividing each of the M resized images to obtain N partial images per image, where N is a positive integer greater than or equal to 2; determining, for a first partial image (any one of the N partial images), a first target partial image from among the M first partial images, the first target partial image being the one of the M first partial images that is in focus; and fusing the N first target partial images to obtain the fourth image.
In some possible implementations of embodiments of the present application, image alignment, also known as image registration, is a process of matching and overlaying two or more images acquired at different times, with different sensors (imaging devices) or under different conditions (weather, illuminance, camera position and angle, etc.). The embodiment of the present application is not limited to the manner in which the M images are aligned, and any available manner may be applied to the embodiment of the present application.
For example, assume that 4 images are taken at 4 focal lengths: image A, image B, image C, and image D. The four images are image-aligned and adjusted to the same size, and each is then divided into 4 partial images: the 1st, 2nd, 3rd, and 4th partial images. For the 1st partial image, the corresponding target partial image determined from the four 1st partial images is that of image A; for the 2nd partial image, that of image B; for the 3rd, that of image C; and for the 4th, that of image D. The 1st partial image of image A, the 2nd partial image of image B, the 3rd partial image of image C, and the 4th partial image of image D are then fused.
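The divide-and-select fusion described above can be sketched as follows, using per-tile variance as a stand-in for the sharpness comparison. Images are modelled as 2-D lists of grayscale values, and the tiling interface is an assumption for illustration:

```python
def tile_variance(img, x0, y0, w, h):
    """Variance of the pixel values in one tile -- a simple sharpness proxy:
    an in-focus tile has more contrast, hence higher variance."""
    vals = [img[y][x] for y in range(y0, y0 + h) for x in range(x0, x0 + w)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def fuse(images, tiles_x, tiles_y):
    """Fuse M aligned, equally sized grayscale images: split each into
    tiles_x * tiles_y tiles and, per tile, keep the tile from whichever
    image scores highest on the sharpness proxy."""
    h, w = len(images[0]), len(images[0][0])
    th, tw = h // tiles_y, w // tiles_x
    out = [row[:] for row in images[0]]
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            x0, y0 = tx * tw, ty * th
            best = max(images, key=lambda im: tile_variance(im, x0, y0, tw, th))
            for y in range(y0, y0 + th):
                out[y][x0:x0 + tw] = best[y][x0:x0 + tw]
    return out
```

For two 4x4 test images, one sharp on the left half and one sharp on the right, the fused result takes each half from the sharper source.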
The embodiment of the present application is not limited to the manner in which the image fusion is performed, and any available manner may be applied to the embodiment of the present application.
In some possible implementations of the embodiments of the present application, when determining a first target partial image corresponding to a first partial image from M first partial images, the sharpness of the M first partial images may be compared by a sharpness comparison algorithm, and the image with the greatest sharpness among the M first partial images is used as the first target partial image corresponding to the first partial image.
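One common concrete choice for such a sharpness comparison, offered here only as an illustrative stand-in since the text does not name an algorithm, is the variance of the Laplacian: an in-focus region has strong second derivatives, so its Laplacian response has high variance. The function names below are assumptions for illustration:

```python
import numpy as np

def laplacian_variance(img):
    """Variance-of-Laplacian focus measure (higher = sharper).
    Uses the standard 4-neighbor Laplacian kernel via circular shifts."""
    img = img.astype(float)
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(lap.var())

def sharpest(regions):
    """Return the index of the sharpest candidate region."""
    return max(range(len(regions)), key=lambda i: laplacian_variance(regions[i]))
```

Applied to the M co-located partial images, `sharpest` picks the one whose region was in focus, i.e. the first target partial image.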
Illustratively, as shown in fig. 8, fig. 8 is a schematic diagram of image fusion provided in an embodiment of the present application.
In the embodiment of the application, an image that is clear in every part can be obtained through image fusion.
In some possible implementations of embodiments of the present application, the shooting parameters include a focus point; step 104 may include: and displaying the object corresponding to the focusing point in the first image at a first display position corresponding to the first shooting parameter.
Illustratively, the user adjusts the focus point by a knob, button, or slider on the frame of the head-mounted display device. Assume that image A is taken with object a as the focus point, image B is taken with object b as the focus point, and image C is taken with object c as the focus point. Object a in image A is displayed at the display position corresponding to object a as the focus point, object b in image B is displayed at the display position corresponding to object b as the focus point, and object c in image C is displayed at the display position corresponding to object c as the focus point.
Illustratively, as shown in fig. 9, fig. 9 is a third schematic diagram of a display image provided in an embodiment of the present application.
In the embodiment of the application, when a plurality of images shot with a plurality of focus points are displayed, the photos taken at different focal lengths are split according to the focused scenes, and the focused scenes are displayed at different positions according to their depth.
In some possible implementations of embodiments of the present application, multiple images may be displayed with some transparency (e.g., 50%), at which point the multiple images appear transparent, enabling the user to see the entire scene at a glance.
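Displaying several layered images with a shared transparency, as described above, can be modeled as repeated "over" compositing. The sketch below is a hypothetical grayscale illustration; the function name and the back-to-front layer order are assumptions, not part of the patent:

```python
import numpy as np

def composite_over(layers, alpha=0.5, background=None):
    """Back-to-front 'over' compositing of same-sized grayscale layers,
    each drawn with the same opacity alpha (alpha=0.5 means 50%
    transparency). layers[0] is the back-most layer."""
    if background is None:
        out = np.zeros_like(layers[0], dtype=float)  # black backdrop
    else:
        out = background.astype(float)
    for layer in layers:
        # standard 'over' blend: new layer at opacity alpha over what is behind
        out = alpha * layer + (1 - alpha) * out
    return out
```

With alpha = 0.5, each successive layer halves the contribution of everything behind it, so every layer remains partially visible and the user can take in the whole scene at once.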
The image processing method provided in the embodiments of the present application may be performed by an image processing apparatus. In the embodiments of the present application, the image processing apparatus provided in the embodiments of the present application is described by taking, as an example, the case in which the image processing apparatus performs the image processing method.
Fig. 10 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application. The image processing apparatus 1000 may include:
a first receiving module 1001 for receiving a first input;
an adjustment module 1002 for adjusting a shooting parameter in response to a first input, wherein the shooting parameter comprises a focal length or a focal point;
A second receiving module 1003 for receiving a second input;
the first display module 1004 is configured to display, in response to the second input, an image captured with the capturing parameter at a display position corresponding to the capturing parameter.
In an embodiment of the application, a head mounted display device receives a first input; adjusting a shooting parameter in response to the first input, wherein the shooting parameter comprises a focal length or a focusing point; receiving a second input; and displaying the image shot by the shooting parameters at the display positions corresponding to the shooting parameters in response to the second input. In this way, the captured image can be displayed at the display position corresponding to the shooting parameter used when the head-mounted display device shoots, and the captured image can be displayed in a stereoscopic manner.
In some possible implementations of the embodiments of the present application, the image processing apparatus 1000 provided in the embodiments of the present application may further include:
the second display module is used for displaying, in the case where the shooting parameters are adjusted a plurality of times, a first image shot with a first shooting parameter at a first display position corresponding to the first shooting parameter, wherein the first shooting parameter is any one of a plurality of shooting parameters, the plurality of shooting parameters being the shooting parameters respectively corresponding to the plurality of adjustments.
In the embodiment of the application, images obtained by shooting with different shooting parameters can be presented at different positions according to the shooting parameters.
In some possible implementations of the embodiments of the present application, the second display module is specifically configured to:
and displaying the first image shot by the first shooting parameters at a first display position corresponding to the first shooting parameters with a first transparency.
In some possible implementations of the embodiments of the present application, the image processing apparatus 1000 provided in the embodiments of the present application may further include:
a third receiving module for receiving a third input;
and the third display module is used for responding to the third input, displaying a second image corresponding to the third input in a first display mode and displaying the third image in a second display mode, wherein the second image is any one of a plurality of images obtained through shooting with a plurality of shooting parameters, the third image is an image except the second image in the plurality of images, and the first display mode and the second display mode are different.
In some possible implementations of the embodiments of the present application, the image processing apparatus 1000 provided in the embodiments of the present application may further include:
a fourth receiving module for receiving a fourth input;
And the fusion module is used for responding to the fourth input, fusing M images corresponding to the fourth input to obtain a fourth image, wherein the M images are M images in a plurality of images obtained by shooting with a plurality of shooting parameters, and M is a positive integer greater than or equal to 2.
In some possible implementations of embodiments of the present application, the fusion module may include:
an alignment sub-module for performing image alignment on the M images;
the adjusting sub-module is used for adjusting the sizes of the M images after the image alignment so that the sizes of the M images after the image alignment are the same;
the segmentation submodule is used for segmenting the M images with the adjusted sizes to obtain N partial images corresponding to each image in the M images with the adjusted sizes, wherein N is a positive integer greater than or equal to 2;
the determining submodule is used for determining a first target partial image corresponding to the first partial image from M first partial images, wherein the first partial image is any one of N partial images, and the first target partial image is a first partial image corresponding to a focus in the M first partial images;
and the fusion sub-module is used for fusing the N first target partial images to obtain a fourth image.
In the embodiment of the application, an image that is clear in every part can be obtained through image fusion.
In some possible implementations of embodiments of the present application, the shooting parameters include a focus point; the second display module is specifically configured to:
and displaying the object corresponding to the focusing point in the first image at the first display position.
The image processing apparatus in the embodiment of the present application may be a component in a head-mounted display device, such as an integrated circuit or a chip.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The image processing apparatus provided in this embodiment of the present application can implement each process implemented by the embodiments of the image processing methods in fig. 1 to 9, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 11, the embodiment of the present application further provides a head-mounted display device 1100, including a processor 1101 and a memory 1102, where the memory 1102 stores a program or an instruction that can be executed on the processor 1101, and the program or the instruction implements each step of the embodiment of the image processing method provided in the embodiment of the present application when executed by the processor 1101, and can achieve the same technical effect, so that repetition is avoided and redundant description is omitted herein.
Fig. 12 is a schematic diagram of a hardware structure of a head-mounted display device implementing an embodiment of the present application.
The head mounted display device 1200 includes, but is not limited to: radio frequency unit 1201, network module 1202, audio output unit 1203, input unit 1204, sensor 1205, display unit 1206, user input unit 1207, interface unit 1208, memory 1209, and processor 1210.
Those skilled in the art will appreciate that head mounted display device 1200 may also include a power source (e.g., a battery) for powering the various components. The power source may be logically connected to processor 1210 via a power management system, so that functions such as charging, discharging, and power-consumption management are performed via the power management system. The head-mounted display device structure shown in fig. 12 does not constitute a limitation of the head-mounted display device; the head-mounted display device may include more or fewer components than illustrated, combine certain components, or arrange the components differently, which is not described in detail herein.
Wherein the user input unit 1207 is configured to: receiving a first input;
the processor 1210 is configured to: adjusting a shooting parameter in response to the first input, wherein the shooting parameter comprises a focal length or a focusing point;
The user input unit 1207 is also for: receiving a second input;
the display unit 1206 is configured to: and displaying the image shot by the shooting parameters at the display positions corresponding to the shooting parameters in response to the second input.
In an embodiment of the application, a head mounted display device receives a first input; adjusting a shooting parameter in response to the first input, wherein the shooting parameter comprises a focal length or a focusing point; receiving a second input; and displaying the image shot by the shooting parameters at the display positions corresponding to the shooting parameters in response to the second input. In this way, the captured image can be displayed at the display position corresponding to the shooting parameter used when the head-mounted display device shoots, and the captured image can be displayed in a stereoscopic manner.
In some possible implementations of embodiments of the present application, the display unit 1206 may also be configured to: when the shooting parameters are adjusted a plurality of times, display a first image shot with a first shooting parameter at a first display position corresponding to the first shooting parameter, wherein the first shooting parameter is any one of a plurality of shooting parameters, the plurality of shooting parameters being the shooting parameters respectively corresponding to the plurality of adjustments.
In some possible implementations of embodiments of the present application, the display unit 1206 may be specifically configured to:
and displaying the first image shot by the first shooting parameters at a first display position corresponding to the first shooting parameters with a first transparency.
In the embodiment of the application, a user can browse a plurality of images shot by different shooting parameters at one time.
In some possible implementations of embodiments of the present application, the user input unit 1207 may also be used to: receiving a third input;
the display unit 1206 may also be used to: in response to the third input, displaying a second image corresponding to the third input in a first display mode, and displaying a third image in a second display mode, wherein the second image is any one of a plurality of images obtained through shooting with a plurality of shooting parameters, the third image is an image except the second image in the plurality of images, and the first display mode and the second display mode are different.
In some possible implementations of embodiments of the present application, the user input unit 1207 may also be used to: receiving a fourth input;
processor 1210 may also be configured to: and in response to the fourth input, fusing M images corresponding to the fourth input to obtain a fourth image, wherein the M images are M images in a plurality of images obtained by shooting with a plurality of shooting parameters, and M is a positive integer greater than or equal to 2.
In some possible implementations of embodiments of the present application, processor 1210 may be configured to:
image alignment is carried out on the M images; the sizes of the M images after image alignment are adjusted, so that the sizes of the M images after image alignment are the same; the M images with the adjusted sizes are divided to obtain N partial images corresponding to each image in the M images with the adjusted sizes, wherein N is a positive integer greater than or equal to 2; a first target partial image corresponding to a first partial image is determined from M first partial images, wherein the first partial image is any one of the N partial images, and the first target partial image is the first partial image corresponding to the focus among the M first partial images; and the N first target partial images are fused to obtain the fourth image.
In the embodiment of the application, an image that is clear in every part can be obtained through image fusion.
In some possible implementations of embodiments of the present application, the shooting parameters include a focus point; the display unit 1206 specifically functions to: and displaying the object corresponding to the focusing point in the first image at the first display position.
It should be understood that in the embodiment of the present application, the input unit 1204 may include a graphics processor (Graphics Processing Unit, GPU) 12041 and a microphone 12042, and the graphics processor 12041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1206 may include a display panel 12061, and the display panel 12061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1207 includes at least one of a touch panel 12071 and other input devices 12072. The touch panel 12071 is also called a touch screen. The touch panel 12071 may include two parts, a touch detection device and a touch controller. Other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
Memory 1209 may be used to store software programs as well as various data. The memory 1209 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, and application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. Further, the memory 1209 may include volatile memory or nonvolatile memory, or the memory 1209 may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synch Link DRAM (SLDRAM), or Direct Rambus RAM (DRRAM). Memory 1209 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1210 may include one or more processing units; optionally, processor 1210 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 1210.
The embodiment of the application further provides a readable storage medium, on which a program or an instruction is stored, where the program or the instruction implements each process of the image processing method embodiment provided in the embodiment of the application when being executed by a processor, and the same technical effects can be achieved, so that repetition is avoided, and no redundant description is provided herein.
Wherein the processor is a processor in the head-mounted display device described in the above embodiment. The readable storage medium includes a computer readable storage medium, and examples of the computer readable storage medium include a non-transitory computer readable medium such as ROM, RAM, magnetic or optical disk, and the like.
The embodiment of the application also provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the image processing method embodiment provided by the embodiment of the application, and the same technical effects can be achieved, so that repetition is avoided, and the repeated description is omitted.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application further provide a computer program product, which is stored in a storage medium, and the program product is executed by at least one processor to implement each process of the embodiments of the image processing method as provided in the embodiments of the present application, and achieve the same technical effects, so that repetition is avoided, and a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the image processing method described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Under the teaching of the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. An image processing method, applied to a head-mounted display device, comprising:
receiving a first input;
adjusting a shooting parameter in response to the first input, wherein the shooting parameter comprises a focal length or a focal point;
receiving a second input;
and responding to the second input, and displaying the image shot by the shooting parameters at the display positions corresponding to the shooting parameters.
2. The method according to claim 1, wherein the method further comprises:
and under the condition of adjusting the shooting parameters for a plurality of times, displaying a first image shot by the first shooting parameters at a first display position corresponding to the first shooting parameters, wherein the first shooting parameters are any one of a plurality of shooting parameters, and the plurality of shooting parameters are shooting parameters respectively corresponding to the plurality of times of adjustment.
3. The method according to claim 2, wherein displaying the first image captured by the first capturing parameter at the first display position corresponding to the first capturing parameter includes:
and displaying the first image at the first display position with a first transparency.
4. The method according to claim 2, wherein the method further comprises:
receiving a third input;
and in response to the third input, displaying a second image corresponding to the third input in a first display mode, and displaying a third image in a second display mode, wherein the second image is any one of a plurality of images obtained by shooting with the plurality of shooting parameters, the third image is an image except the second image in the plurality of images, and the first display mode and the second display mode are different.
5. The method according to claim 2, wherein the method further comprises:
receiving a fourth input;
and in response to the fourth input, fusing M images corresponding to the fourth input to obtain a fourth image, wherein the M images are M images in a plurality of images obtained by shooting with the shooting parameters, and M is a positive integer greater than or equal to 2.
6. The method of claim 5, wherein fusing the M images corresponding to the fourth input to obtain a fourth image comprises:
image alignment is carried out on the M images;
The sizes of M images after image alignment are adjusted, so that the sizes of the M images after image alignment are the same;
dividing M images with adjusted sizes to obtain N partial images corresponding to each image in the M images with adjusted sizes, wherein N is a positive integer greater than or equal to 2;
determining a first target partial image corresponding to a first partial image from M first partial images, wherein the first partial image is any one of the N partial images, and the first target partial image is a first partial image corresponding to a focus in the M first partial images;
and fusing the N first target partial images to obtain the fourth image.
7. The method of claim 2, wherein the photographing parameters include a focus point;
the displaying a first image obtained by shooting with the first shooting parameters at a first display position corresponding to the first shooting parameters comprises:
and displaying the object corresponding to the focusing point in the first image at the first display position.
8. An image processing apparatus, characterized by being applied to a head-mounted display device, comprising:
A first receiving module for receiving a first input;
an adjustment module for adjusting a shooting parameter in response to the first input, wherein the shooting parameter includes a focal length or a focal point;
a second receiving module for receiving a second input;
and the first display module is used for responding to the second input and displaying the image shot by the shooting parameters at the display positions corresponding to the shooting parameters.
9. A head mounted display device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the image processing method of any of claims 1-7.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any of claims 1-7.
Application CN202311258808.5A, filed 2023-09-26 (priority date 2023-09-26): Image processing method, device, equipment and medium. Status: Pending. Published as CN117336458A.

Publication: CN117336458A, 2024-01-02.



Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination