WO2024131669A1 - Shooting processing method and electronic device - Google Patents

Shooting processing method and electronic device

Info

Publication number
WO2024131669A1
Authority
WO
WIPO (PCT)
Prior art keywords
filter
image
display interface
virtual display
target
Prior art date
Application number
PCT/CN2023/139163
Other languages
English (en)
French (fr)
Inventor
区杰俊
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2024131669A1


Classifications

    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems

Definitions

  • the present application belongs to the field of image shooting technology, and specifically relates to a shooting processing method and electronic equipment.
  • the purpose of the embodiments of the present application is to provide a shooting processing method and an electronic device, which are used to solve the problem that users cannot accurately perceive the changes in the filtered image compared with the original image, and the filtered image taken with the filter is difficult to meet the user's expectations.
  • an embodiment of the present application provides a shooting processing method, which is applied to a first device, the first device can display a virtual display interface, and the first device is communicatively connected with a second device, the method comprising:
  • the original image and the filtered image are displayed on the virtual display interface, wherein the filtered image is an image obtained by filtering the original image.
  • an embodiment of the present application provides a shooting processing method, which is applied to a second device, the second device is communicatively connected to a first device, and the first device can display a virtual display interface, the method comprising:
  • the original image is sent to the first device to display the original image and the filter image on the virtual display interface of the first device, wherein the filter image is an image obtained by filtering the original image.
  • an embodiment of the present application provides a shooting processing device, which is applied to a first device, the first device can display a virtual display interface, and the first device is communicatively connected with a second device, the device comprising:
  • An image acquisition module used to acquire the original image captured by the second device
  • an embodiment of the present application provides a shooting processing device, which is applied to a second device, the second device is communicatively connected to a first device, and the first device can display a virtual display interface, the device comprising:
  • An image acquisition module used for acquiring original images
  • An image sending module sends the original image to the first device to display the original image and the filter image on the virtual display interface of the first device, wherein the filter image is an image of the original image after being processed by a filter.
  • an embodiment of the present application provides an electronic device, comprising a processor and a memory, wherein the memory stores programs or instructions that can be run on the processor, and when the program or instructions are executed by the processor, the steps of the method described in the first aspect are implemented, or the steps of the method described in the second aspect are implemented.
  • an embodiment of the present application provides a readable storage medium, on which a program or instruction is stored.
  • the program or instruction is executed by the processor, the steps of the method described in the first aspect are implemented, or the steps of the method described in the second aspect are implemented.
  • an embodiment of the present application provides a chip, which includes a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the steps of the method described in the first aspect, or to implement the steps of the method described in the second aspect.
  • an embodiment of the present application provides a computer program product, which is stored in a storage medium and is executed by at least one processor to implement the steps of the method described in the first aspect, or to implement the steps of the method described in the second aspect.
  • the virtual display interface of the first device is used to simultaneously display the original image captured by the second device and the filtered image after the original image is processed by the filter, so that the user can accurately perceive the image difference between the original image before and after the filter processing, and then determine whether the filter effect of the currently selected filter style meets expectations, which can enhance the user experience during the filter shooting process.
  • FIG1 is a schematic diagram of an AR technology display principle provided by an embodiment of the present application.
  • FIG3 is a schematic diagram of a virtual display interface provided in an embodiment of the present application.
  • FIG4 is a schematic diagram of an interaction between an AR device and a terminal device provided in an embodiment of the present application.
  • FIG6 is a flow chart of a shooting processing method applied to a second device provided in an embodiment of the present application.
  • FIG7 is a schematic diagram of the structure of a shooting processing device applied to a first device provided in an embodiment of the present application
  • FIG8 is a schematic diagram of the structure of a shooting processing device applied to a second device provided in an embodiment of the present application;
  • FIG9 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
  • The terms "first", "second", etc. in the specification and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that the data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in an order other than those illustrated or described here; the objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited.
  • the first object can be one or more.
  • "and/or" in the specification and claims represents at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
  • Augmented Reality (AR) device refers to a device that uses AR technology to display images.
  • AR devices include at least a micro-projection system and an optical display.
  • the display scheme of AR technology is shown in Figure 1.
  • the micro-projection system projects virtual information such as text and images onto optical elements, and then sends the virtual information to the human eye through reflection and/or total reflection.
  • the real scene in the real world can directly enter the human eye through the optical element, which allows users to see the "overlap" of virtual and reality, thereby realizing augmented reality.
  • the AR device can be AR glasses.
  • FIG 2 is a flow chart of a shooting processing method provided in an embodiment of the present application.
  • The shooting processing method shown in Figure 2 is applied to a first device, the first device can display a virtual display interface, and the first device is communicatively connected with a second device.
  • the first device may be an AR device
  • the second device may be a terminal device.
  • the shooting processing method includes the following steps:
  • Step 101 Acquire an original image captured by the second device.
  • the original image captured by the second device can be understood as: an image that is not processed by a filter and is captured in real time after the second device turns on the shooting function, or an image that is not processed by a filter and is captured in the current period after the second device turns on the shooting function.
  • the current period is an image capture period including the current moment, and the second device captures an image in each image capture period.
  • the duration of a single image capture period can be 0.01 seconds, 0.5 seconds, etc.
  • the second device sends the captured original image to the first device, and at this time, the first device can obtain the original image captured by the second device.
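The periodic capture-and-forward behaviour described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the callable names (`capture_frame`, `send_to_first_device`) and the default period value are assumptions made for the example.

```python
import time

def run_capture_loop(capture_frame, send_to_first_device, num_periods, period_s=0.5):
    """Capture one unfiltered original image per image capture period and
    forward it to the first device over the communication connection.

    period_s: duration of a single image capture period (the text gives
    0.01 s and 0.5 s as example values)."""
    for _ in range(num_periods):
        original_image = capture_frame()       # raw frame, no filter applied
        send_to_first_device(original_image)   # first device can now display it
        time.sleep(period_s)
```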
  • Step 102 Display the original image and the filtered image on the virtual display interface.
  • the filtered image is the image obtained by filtering the original image.
  • the virtual display interface of the first device is used to simultaneously display the original image captured by the second device and the filtered image after the original image is processed by the filter, so that the user can accurately perceive the image difference between the original image before and after the filter processing, and then determine whether the filter effect of the currently selected filter style meets the expectations, which can enhance the user experience during the filter shooting process.
  • Compared with the actual display interface of the second device, the imaging area of the virtual display interface of the first device in the human eye is larger. Therefore, in addition to using the virtual display interface to display the original image and the filtered image, the virtual display interface can also be used to display the shooting information included in the actual display interface (for example, the filter style), so as to avoid the user's line of sight switching back and forth between the actual display interface and the virtual display interface, so that the user can get a better filter shooting experience.
  • the original image and the filtered image are arranged adjacent to each other on the interface.
  • SLAM: Simultaneous Localization and Mapping
  • the filter style used by the filter image may be a default filter style predefined by the second device or the first device, or the filter style used by the filter image may be a filter style selected by a user controlling the first device or the second device.
  • displaying the original image and the filtered image on the virtual display interface includes:
  • At least two of the filter images and the original image are displayed on the virtual display interface, wherein the filter styles adopted by the filter images are different.
  • the second device can switch between multiple supported filter styles in sequence, and use the currently switched filter style to filter the original image to generate a filtered image corresponding to the currently switched filter style.
  • the second device can transmit multiple filter images corresponding to multiple filter styles supported by the second device to the first device by one-to-one transmission; the second device can also transmit multiple filter images corresponding to multiple filter styles supported by the second device to the first device by package transmission; the second device can also transmit multiple filter images corresponding to multiple filter styles supported by the second device to the first device by segmented transmission.
  • Assuming the second device supports 4 filter styles: the aforementioned one-by-one transmission method needs to perform 4 transmission operations, the aforementioned package transmission method needs to perform 1 transmission operation, and the aforementioned segmented transmission method needs to perform 2 transmission operations (assuming that the number of filter images that can be transmitted in a single segment is 2).
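The operation counts for the three transmission methods can be expressed as a small helper. This is an illustrative sketch; the mode names and the `segment_size` parameter are assumptions introduced for the example.

```python
import math

def transmission_operations(num_filter_images, mode, segment_size=2):
    """Number of transmission operations needed to deliver all filter
    images from the second device to the first device."""
    if mode == "one_by_one":   # one filter image per transmission
        return num_filter_images
    if mode == "packaged":     # all filter images in a single package
        return 1
    if mode == "segmented":    # segment_size filter images per transmission
        return math.ceil(num_filter_images / segment_size)
    raise ValueError(f"unknown mode: {mode}")
```

With 4 filter images this reproduces the counts given above: 4 one-by-one operations, 1 packaged operation, and 2 segmented operations at a segment size of 2.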
  • The number of the at least two filter images mentioned above can be set to be less than the total number of filter styles supported by the second device, to reduce the storage space occupied by the filter images on the first device.
  • the second device transmits the filter images to the first device in a one-by-one or segmented manner.
  • the number of the at least two filter images mentioned above can be set to be equal to the total number of filter styles supported by the second device to reduce the transmission frequency of the filter images.
  • In this case, the second device transmits the filter images to the first device (the AR device) in a packaged transmission manner.
  • When the area of the virtual display interface is sufficient, the number of filter images displayed on the virtual display interface is equal to N (the number of filter images received); and when the area of the virtual display interface is limited, the number of filter images displayed on the virtual display interface is less than N.
  • the original image is preferably set in the middle position of at least two of the filter images to facilitate the user to compare the image differences between the original image and at least two of the filter images.
  • the image numbered 1 in Figure 3 is the original image
  • the images numbered 2-9 in Figure 3 are at least two of the filter images.
  • displaying at least two of the filter images on the virtual display interface can improve the user's comparison efficiency of different filter styles, simplify the user's filter style selection process, improve the user's filter style selection effect, and further optimize the imaging effect of the filter image after shooting using the filter.
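The layout described above (original image in the middle position, filter images around it, as in Figure 3) can be sketched as a row-major grid builder. The function name and the odd-square-grid restriction are illustrative assumptions, not requirements stated in the patent.

```python
import math

def layout_grid(original, filter_images):
    """Place the original image in the middle cell of a square grid and
    the filter images in the remaining cells (as in Figure 3: original
    numbered 1 in the center, filter images numbered 2-9 around it)."""
    n = len(filter_images) + 1
    side = math.isqrt(n)
    if side * side != n or side % 2 == 0:
        raise ValueError("this sketch needs an odd square number of cells")
    cells = list(filter_images)
    cells.insert(n // 2, original)   # middle index of the row-major grid
    return [cells[r * side:(r + 1) * side] for r in range(side)]
```

For eight filter images this yields a 3x3 grid with the original image in the center cell, which makes the side-by-side comparison described above straightforward.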
  • the shooting processing method further includes:
  • the first target filter pattern is sent to the second device, so that the second device performs filter processing on the original image according to the first target filter pattern.
  • Displaying two filter styles can facilitate users to observe and compare at least two filter styles, which can further enhance the user's shooting experience.
  • displaying at least two filter styles on the virtual display interface includes:
  • the receiving a first input from a user comprises:
  • the determining a first target filter pattern among the at least two filter patterns comprises:
  • the first filter pattern is determined as the first target filter pattern.
  • the first filter style can be any one of at least two filter styles.
  • the first filter style is determined as the first target filter style; that is, when the residence time of the target operation object in a certain display area is greater than or equal to the first threshold, the filter style corresponding to the display area is determined as the first target filter style.
  • the selection of filter style is completed through interactive operations with a virtual display interface with a larger display range, which facilitates the user's filter style selection operation, avoids the user's line of sight from frequently switching between the virtual display interface and the actual display interface, and improves the user's filter shooting experience.
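The dwell-time selection rule described above can be sketched as follows. The function and parameter names are assumed for illustration; the 1-second default is only an example value, since the patent leaves the first threshold open.

```python
FIRST_THRESHOLD_S = 1.0  # example value; the patent does not fix the threshold

def select_filter_style(dwell_area, dwell_time_s, area_to_style,
                        threshold_s=FIRST_THRESHOLD_S):
    """Return the first target filter style when the target operation
    object (e.g. the user's finger) has stayed in one display area for
    at least the first threshold; otherwise return None."""
    if dwell_time_s >= threshold_s:
        return area_to_style.get(dwell_area)
    return None
```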
  • the target operation object is an object perceived by the first device.
  • the first device can determine the user's control intention.
  • the target operation object can be the user's finger.
  • The gesture control setting does not require middleware for adapting to the first device sensor (for example, a control pen associated with the first device sensor), so the user's filter shooting operation can be more flexible. Alternatively, the target operation object can be the part of the middleware that can be recognized by the first device sensor.
  • the setting of the above middleware can improve the first device's perception accuracy of the target operation object, thereby reducing the probability of the first device misidentifying the user's operation intention.
  • 401 shown in Figure 4 can be understood as the first device
  • 402 shown in Figure 4 can be understood as the second device
  • 403 shown in Figure 4 can be understood as the wireless connection between the first device and the second device
  • 407 shown in Figure 4 can be understood as a virtual display interface, which includes a filter style option interface 404, an unfiltered original image 405, and a filtered image 406 after the original image is filtered.
  • the target operation object in FIG4 is the user's finger.
  • the filter style option interface 404 in FIG4 includes 6 filter styles, each filter style corresponds to a display area. As shown in FIG4, the user's finger currently stays in the display area corresponding to filter 5. As described above, if the user's finger stays in the display area corresponding to filter 5 for a time greater than or equal to the first threshold, the first device will determine the filter style indicated by filter 5 as the first target filter style. After the second device generates a filter image corresponding to filter 5 based on the first target filter style, the first device will receive the filter image corresponding to filter 5 transmitted by the second device, and then the first device will display the filter image corresponding to filter 5 on the virtual display interface.
  • the aforementioned first threshold can be adaptively selected according to actual needs, for example, 1 second. This application does not limit the specific value of the first threshold.
  • the shooting processing method further includes:
  • the second target filter pattern is sent to the second device, so that the second device performs an image capturing operation according to the second target filter pattern.
  • a second threshold with a longer duration is set to complete the perception of the user's intention to shoot with a filter.
  • This not only enables the user to complete the control of the second device in a touchless manner, avoiding the jitter interference of the user's touch-screen operation on the second device and improving the imaging quality of the original image currently captured by the second device (and thus the imaging quality of that image after it is filtered based on the second target filter style); it also ensures that the user can perceive the filter processing effect of the selected second target filter style before the filter shooting operation is executed. This reduces misoperation problems (such as the first device mistakenly perceiving the stay position of the target operation object, or external interference causing the stay position of the target operation object to be misaligned with the position expected by the user) and avoids the situation where the second target filter style selected by the user does not match the expected filter effect, which can further enhance the user's shooting experience.
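The two-threshold dwell logic (a shorter first threshold that selects a filter style for preview, and a longer second threshold that triggers the shooting operation) can be sketched as below. The threshold values and the action labels are illustrative assumptions.

```python
def interpret_dwell(dwell_time_s, first_threshold_s=1.0, second_threshold_s=2.0):
    """Map the dwell time in one filter-style display area to an action.

    The second threshold must be longer than the first, so the user always
    previews the filter effect (first target filter style) before the
    shooting operation (second target filter style) is triggered."""
    assert second_threshold_s > first_threshold_s
    if dwell_time_s >= second_threshold_s:
        return "shoot"      # perform the image capturing operation
    if dwell_time_s >= first_threshold_s:
        return "preview"    # filter the original image for display
    return "none"
```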
  • the shooting processing method further includes:
  • the third target filter pattern adopted by the target filter image is sent to the second device, so that the second device performs an image capturing operation according to the third target filter pattern.
  • At least two of the filter images are displayed on the virtual display interface so that the user does not need to perform a filter switching operation.
  • The first device determines a target filter image from the at least two filter images in response to the second input of the user, and sends the third target filter style adopted by the target filter image to the second device, so that the second device can complete the user's filter shooting, which further improves the efficiency of the user controlling the second device to perform filter shooting.
  • displaying at least two of the filter images on the virtual display interface includes:
  • the receiving a second input from the user comprises:
  • the determining a target filter image from the at least two filter images comprises:
  • the first filtered image is determined as the target filtered image.
  • the virtual display interface into multiple display areas, and setting each display area to uniquely correspond to a filter image, it is possible to adapt to the perceived operation of the user's control intention, wherein the first filter image can be any one of at least two of the filter images.
  • the first filter image is determined as the target filter image; that is, when the residence time of the target operation object in a certain display area is greater than or equal to the third threshold, the filter image corresponding to the display area is determined as the target filter image.
  • the third target filter style used by the target filter image is sent to the second device, allowing the user to complete the control of the second device in a touchless manner, avoiding the jitter interference of the user's touch screen operation on the second device, and improving the imaging quality of the original image currently captured by the second device, thereby improving the imaging quality of the image after the current captured original image is filtered based on the third target filter style.
  • the image numbered 1 in FIG5 is the original image
  • the images numbered 2-9 in FIG5 are at least two of the filter images
  • The target operation object in FIG5 is the user's finger. The current stay position of the target operation object in Figure 5 is located in the display area corresponding to the filter image numbered 2. If the stay time of the target operation object is greater than or equal to the third threshold, the first device will determine the filter image numbered 2 as the target filter image.
  • the third threshold value can be adaptively selected according to actual needs, for example, 1 second, 2 seconds, etc.
  • the fourth target filter pattern is sent to the second device, so that the second device performs an image capturing operation and/or an image filtering operation based on the fourth target filter pattern.
  • the user's filter shooting intention or filter selection intention is perceived by voice recognition, thereby enhancing the flexibility of user operation, so as to further improve the applicability of the method described in this application in complex scenarios (such as scenarios where the user holds the second device with both hands), and enhance the user's shooting experience.
  • the aforementioned user voice information may be a voice keyword predefined by the first device.
  • The user voice information may be "apply filter 1 to shoot".
  • The voice keywords are "shoot" and "filter 1".
  • the second device will perform an image capture operation based on filter style 1.
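The keyword-based interpretation of the user voice information can be sketched as follows. The regular expression and the action labels are assumptions made for illustration; the patent only gives the example phrase "apply filter 1 to shoot" with the keywords "shoot" and "filter 1".

```python
import re

def parse_voice_command(text):
    """Extract the fourth target filter style and the requested action
    from the recognized user voice information."""
    style_match = re.search(r"filter\s*(\d+)", text, re.IGNORECASE)
    style = f"filter {style_match.group(1)}" if style_match else None
    # "shoot" keyword -> image capturing operation; otherwise only filtering
    action = "capture" if "shoot" in text.lower() else "filter_only"
    return style, action
```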
  • The first device and the second device establish a connection via a wireless network (such as Wi-Fi or Bluetooth).
  • the user turns on the camera function of the second device.
  • The real scene currently captured by the camera of the second device (i.e., the original image currently captured) is transmitted to the first device via the wireless network and stored in the memory of the first device.
  • After the first device applies the SLAM algorithm to process the real scene currently captured by the camera of the second device and the information of multiple filter style options, the virtual display interface can be displayed as shown in FIG. 4, including a filter style option interface 404 and an unfiltered original image 405.
  • the first device determines that the user selects the filter style corresponding to the style display area. At this time, the first device will determine the filter style corresponding to the style display area as the first target filter style, and send the first target filter style to the second device, so that the second device filters the currently captured original image based on the first target filter style, and thereby obtains a filtered image corresponding to the first target filter style.
  • the second device synchronizes the filtered image to the first device, and the first device displays the filtered image on the virtual display interface through SLAM technology.
  • the display content of the virtual display interface can be shown in Figure 4, including a filter style option interface 404, an unfiltered original image 405, and a filtered image 406 after the original image is processed by filtering.
  • the user can determine the filter effect of the filter style corresponding to the currently displayed filter image by comparing the image differences between the original image and the filtered image. After the user determines the filter style, he can move his finger to the style display area corresponding to the determined filter style and stay there for 2 seconds.
  • the first device will control the second device through the wireless network to perform the image shooting operation based on the second target filter style determined by the user.
  • the first device will first determine filter style 1 as the first target filter style, and then determine filter style 1 as the second target filter style, that is, before the second device performs the image capture operation according to filter style 1, the second device will first perform the filter processing operation according to filter style 1.
  • the second device sequentially switches different filter styles based on the currently acquired original image, and transmits different filter images obtained by processing different filter styles to the first device via a wireless network.
  • The differently filtered images are stored in a memory of the first device.
  • the first device displays the aforementioned different filter images and original images on a virtual display interface based on the SLAM algorithm.
  • the display content of the virtual display interface can be shown in Figure 5, including images numbered 1-9, among which the image numbered 1 can be understood as the original image, and the images numbered 2-9 can be understood as images with different filters.
  • the user can determine the filter effects of the filter styles corresponding to the currently displayed filter images by comparing the image differences between the original image and the filtered image, as well as the image differences between the different filter images. After the user selects the filter style, the user can move his finger to the image display area corresponding to the selected filter style and stay there for 1 second.
  • the first device will control the second device through the wireless network to perform the image capture operation based on the filter style selected by the user.
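The end-to-end interaction described in this walkthrough, where the first device (AR device) relays filter selections to the second device (terminal device) and displays the images it receives back, can be condensed into a minimal sketch. The class and method names are hypothetical; real devices would communicate over the wireless connection rather than through direct method calls.

```python
class SecondDevice:
    """Terminal device: captures original images and applies filter styles."""
    def __init__(self):
        self.shot = None
    def capture_original(self):
        return "original"                      # stand-in for a camera frame
    def apply_filter(self, image, style):
        return f"{image}+{style}"              # stand-in for filter processing
    def shoot(self, style):
        self.shot = self.apply_filter(self.capture_original(), style)

class FirstDevice:
    """AR device: shows the virtual display interface and relays selections."""
    def __init__(self, second_device):
        self.second = second_device
        self.display = []                      # contents of the virtual display interface
    def on_style_selected(self, style):
        # dwell time >= first threshold: request and display a preview
        original = self.second.capture_original()
        self.display += [original, self.second.apply_filter(original, style)]
    def on_shoot_confirmed(self, style):
        # dwell time >= second threshold (or voice command): trigger shooting
        self.second.shoot(style)
```

A typical session first calls `on_style_selected` so the user can compare the original and filtered images side by side, then `on_shoot_confirmed` once the effect meets expectations.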
  • FIG. 6 is a flow chart of another shooting processing method provided in an embodiment of the present application.
  • the shooting processing method shown in FIG. 6 is applied to a second device, the second device is communicatively connected with a first device, and the first device can display a virtual display interface.
  • the shooting processing method includes the following steps:
  • The imaging area of the virtual display interface in the human eye is larger. Therefore, the original image, the filtered image, and the shooting information of the associated second device (such as the filter style) are displayed simultaneously through the first device, which allows users to have a better shooting experience.
  • the shooting processing method further includes:
  • the original image is filtered according to the first target filter pattern to obtain the filtered image.
  • the shooting processing method further includes:
  • An image capturing operation is performed according to the second target filter pattern.
  • the shooting and processing method further includes:
  • the at least two filtered images are sent to the first device, so that the first device displays the original image and the at least two filtered images on the virtual display interface.
  • the shooting processing method further includes:
  • the third target filter style is a filter style adopted by a target filter image determined by the first device from at least two filter images based on a second input of a user;
  • An image capturing operation is performed according to the third target filter pattern.
  • An image capturing operation and/or an image filtering operation is performed according to the fourth target filter pattern.
  • a shooting processing device 700 is applied to a first device, the first device can display a virtual display interface, the first device is communicatively connected with a second device, and the shooting processing device 700 includes:
  • An image acquisition module 701 is used to acquire an original image captured by the second device
  • the image display module 702 is used to display the original image and the filtered image on the virtual display interface, wherein the filtered image is the image after the original image is processed by filtering.
  • the device 700 further includes:
  • a style display module used to display at least two filter styles on the virtual display interface
  • a first receiving module used to receive a first input from a user
  • a first pattern determination module configured to determine a first target filter pattern from among the at least two filter patterns in response to the first input
  • the first sending module is used to send the first target filter pattern to the second device, so that the second device performs filter processing on the original image according to the first target filter pattern.
  • the image display module 702 includes:
  • a first display unit configured to display a first filter pattern in a first display area of the virtual display interface, and to display a second filter pattern in a second display area of the virtual display interface;
  • the first receiving module is specifically used for:
  • the first style determination module is specifically used to:
  • the first filter pattern is determined as the first target filter pattern.
  • Optionally, the device 700 further includes:
  • a second style determination module, configured to determine the first filter style as a second target filter style when the dwell time of the first position in the first display region is greater than or equal to a preset second threshold, where the duration of the second threshold is greater than the duration of the first threshold; and
  • a second sending module, configured to send the second target filter style to the second device, so that the second device performs an image shooting operation according to the second target filter style.
  • Optionally, the image display module 702 includes:
  • a second display unit, configured to display at least two filter images and the original image on the virtual display interface, wherein each filter image adopts a different filter style.
  • Optionally, the device 700 further includes:
  • a second receiving module, configured to receive a second input from the user;
  • a filter image determination module, configured to determine, in response to the second input, a target filter image from among the at least two filter images; and
  • a third sending module, configured to send a third target filter style adopted by the target filter image to the second device, so that the second device performs an image shooting operation according to the third target filter style.
  • Optionally, the second display unit is specifically configured to:
  • display a first filter image in a third display region of the virtual display interface and display a second filter image in a fourth display region of the virtual display interface.
  • The second receiving module is specifically configured to:
  • receive a second input performed by the user through the target operation object.
  • The filter image determination module is specifically configured to:
  • detect a second position of the target operation object on the virtual display interface; and
  • when the dwell time of the second position in the third display region is greater than or equal to a preset third threshold, determine the first filter image as the target filter image.
  • Optionally, the device 700 further includes:
  • a voice acquisition module, configured to acquire user voice information;
  • a voice recognition module, configured to recognize the user voice information to obtain a fourth target filter style; and
  • a fourth sending module, configured to send the fourth target filter style to the second device, so that the second device performs an image shooting operation and/or an image filter operation based on the fourth target filter style.
  • In the embodiments of the present application, the virtual display interface of the first device simultaneously displays the original image captured by the second device and the filter image obtained after the original image is processed with a filter, so that the user can determine the filter effect by comparing the image difference between the original image before and after the filter processing, thereby optimizing the imaging effect of the filter image captured with the filter.
  • In addition, compared with the first imaging area of the actual display interface of the terminal device in the human eye, the second imaging area of the virtual display interface in the human eye is larger. Therefore, by simultaneously displaying the original image and the filter image through the AR device, the imaging area in the human eye of the shooting information associated with the terminal device (such as the original image, the filter image, and the filter styles) can be expanded, allowing the user to obtain a better shooting experience.
  • The shooting processing device applied to the first device in the embodiments of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. Exemplarily, the electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); it may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.
  • The shooting processing device applied to the first device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
  • The shooting processing device applied to the first device provided in the embodiments of the present application can implement each process implemented by the method embodiments of FIG. 2 to FIG. 5; to avoid repetition, details are not repeated here.
  • The shooting processing method applied to the second device provided in the embodiments of the present application may be executed by a shooting processing device applied to the second device. In the embodiments of the present application, the shooting processing device applied to the second device is described by taking, as an example, the shooting processing device applied to the second device executing the shooting processing method applied to the second device.
  • A shooting processing device 800 is applied to a second device, where the second device is communicatively connected with a first device and the first device can display a virtual display interface. The shooting processing device 800 includes:
  • an image capture module 801, configured to capture an original image; and
  • an image sending module 802, configured to send the original image to the first device, so as to display the original image and a filter image on the virtual display interface of the first device, wherein the filter image is an image obtained after the original image is processed with a filter.
  • Optionally, the device 800 further includes:
  • a style sending module, configured to send at least two filter styles to the first device, so that the first device displays the at least two filter styles on the virtual display interface;
  • a first receiving module, configured to receive a first target filter style determined by the first device from among the at least two filter styles; and
  • a first filter processing module, configured to perform filter processing on the original image according to the first target filter style to obtain the filter image.
  • Optionally, the device 800 further includes:
  • a second receiving module, configured to receive a second target filter style determined by the first device from among the at least two filter styles; and
  • a first shooting module, configured to perform an image shooting operation according to the second target filter style.
  • Optionally, the device 800 further includes:
  • a second filter processing module, configured to perform filter processing on the original image to obtain at least two filter images, wherein each filter image adopts a different filter style; and
  • an image transmission module, configured to send the at least two filter images to the first device, so that the first device displays the original image and the at least two filter images on the virtual display interface.
  • Optionally, the device 800 further includes:
  • a third receiving module, configured to receive a third target filter style sent by the first device, wherein the third target filter style is the filter style adopted by a target filter image determined by the first device from among the at least two filter images based on a second input of the user; and
  • a second shooting module, configured to perform an image shooting operation according to the third target filter style.
  • Optionally, the device 800 further includes:
  • a fourth receiving module, configured to receive a fourth target filter style obtained by the first device by recognizing user voice information; and
  • an image processing module, configured to perform an image shooting operation and/or an image filter operation according to the fourth target filter style.
  • In the embodiments of the present application, the virtual display interface of the first device simultaneously displays the original image captured by the second device and the filter image obtained after the original image is processed with a filter, so that the user can determine the filter effect by comparing the image difference between the original image before and after the filter processing, thereby optimizing the imaging effect of the filter image captured with the filter.
  • In addition, compared with the first imaging area of the actual display interface of the terminal device in the human eye, the second imaging area of the virtual display interface in the human eye is larger. Therefore, by simultaneously displaying the original image and the filter image through the AR device, the imaging area in the human eye of the shooting information associated with the terminal device (such as the original image, the filter image, and the filter styles) can be expanded, allowing the user to obtain a better shooting experience.
  • The shooting processing device applied to the second device in the embodiments of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. Exemplarily, the electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); it may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.
  • The shooting processing device applied to the second device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
  • The shooting processing device 800 applied to the second device provided in the embodiments of the present application can implement each process implemented by the method embodiment of FIG. 6; to avoid repetition, details are not repeated here.
  • An embodiment of the present application also provides an electronic device 900, including a processor 901 and a memory 902, where the memory 902 stores a program or instruction that can be executed on the processor 901. When the program or instruction is executed by the processor 901, the various steps of the above-mentioned shooting processing method embodiment applied to the first device, or the various steps of the above-mentioned shooting processing method embodiment applied to the second device, are implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and non-mobile electronic devices mentioned above.
  • FIG. 10 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
  • The electronic device 100 includes, but is not limited to, components such as a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
  • The electronic device 100 may also include a power source (such as a battery) for supplying power to each component, and the power source may be logically connected to the processor 110 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system. The electronic device structure shown in FIG. 10 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which will not be described in detail here.
  • The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042, and the graphics processing unit 1041 processes image data of a static picture or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like.
  • The user input unit 107 includes a touch panel 1071 and at least one of other input devices 1072.
  • The touch panel 1071, also called a touch screen, may include two parts: a touch detection device and a touch controller.
  • The other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse, and a joystick, which will not be repeated here.
  • The memory 109 can be used to store software programs and various data.
  • The memory 109 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, and an application program or instructions required for at least one function (such as a sound playback function or an image playback function).
  • In addition, the memory 109 may include a volatile memory or a non-volatile memory, or the memory 109 may include both volatile and non-volatile memories.
  • The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • The volatile memory may be a random access memory (RAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synch-link dynamic random access memory (SLDRAM), or a direct Rambus random access memory (DRRAM).
  • The memory 109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
  • the processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor and a modem processor, wherein the application processor mainly processes operations related to an operating system, a user interface, and application programs, and the modem processor mainly processes wireless communication signals, such as a baseband processor. It is understandable that the modem processor may not be integrated into the processor 110.
  • An embodiment of the present application also provides a readable storage medium, on which a program or instruction is stored. When the program or instruction is executed by a processor, the various processes of the above-mentioned method embodiment applied to the first device, or the various processes of the above-mentioned method embodiment applied to the second device, are implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • The processor is the processor in the electronic device described in the above embodiment.
  • The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • An embodiment of the present application further provides a chip, which includes a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the various processes of the above-mentioned method embodiment applied to the first device, or to implement the various processes of the above-mentioned method embodiment applied to the second device, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • The chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
  • An embodiment of the present application provides a computer program product, which is stored in a storage medium, and the program product is executed by at least one processor to implement the various processes of the above-mentioned method embodiment applied to the first device, or the various processes of the above-mentioned method embodiment applied to the second device, and can achieve the same technical effect; to avoid repetition, details are not repeated here.
  • The technical solution of the present application can be embodied in the form of a computer software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), and includes a number of instructions that cause a terminal (which can be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in each embodiment of the present application.


Abstract

This application discloses a shooting processing method and an electronic device, belonging to the technical field of shooting. The method is applied to a first device, the first device can display a virtual display interface, and the first device is communicatively connected with a second device. The method includes: acquiring an original image captured by the second device; and displaying the original image and a filter image on the virtual display interface, wherein the filter image is an image obtained after the original image is processed with a filter.

Description

Shooting Processing Method and Electronic Device

Cross-Reference to Related Applications

This application claims priority to Chinese Patent Application No. 202211650154.6, entitled "Shooting Processing Method and Electronic Device" and filed with the China Patent Office on December 21, 2022, the entire contents of which are incorporated herein by reference.

Technical Field

The present application belongs to the technical field of image shooting, and specifically relates to a shooting processing method and an electronic device.

Background

As the photographing functions of smart terminals become increasingly sophisticated, users take photos with smart terminals more and more frequently. In particular, since the introduction of filter functions, image rendering in terms of color and light sensitivity has further improved the user experience.

At present, due to the limited size of the terminal display screen, only one preview image can be displayed during shooting. Consequently, when a smart terminal shoots with a filter, the terminal displays only the filter image corresponding to the currently selected filter style, and the user cannot accurately perceive how the filter image differs from the original image, so that the filter image captured with the filter hardly meets the user's expectations.

Summary

The purpose of the embodiments of the present application is to provide a shooting processing method and an electronic device, so as to solve the problem that the user cannot accurately perceive how the filter image differs from the original image, and the filter image captured with the filter hardly meets the user's expectations.

In a first aspect, an embodiment of the present application provides a shooting processing method, applied to a first device, where the first device can display a virtual display interface and the first device is communicatively connected with a second device. The method includes:

acquiring an original image captured by the second device; and

displaying the original image and a filter image on the virtual display interface, wherein the filter image is an image obtained after the original image is processed with a filter.

In a second aspect, an embodiment of the present application provides a shooting processing method, applied to a second device, where the second device is communicatively connected with a first device and the first device can display a virtual display interface. The method includes:

capturing an original image; and

sending the original image to the first device, so as to display the original image and a filter image on the virtual display interface of the first device, wherein the filter image is an image obtained after the original image is processed with a filter.

In a third aspect, an embodiment of the present application provides a shooting processing device, applied to a first device, where the first device can display a virtual display interface and the first device is communicatively connected with a second device. The device includes:

an image acquisition module, configured to acquire an original image captured by the second device; and

an image display module, configured to display the original image and a filter image on the virtual display interface, wherein the filter image is an image obtained after the original image is processed with a filter.

In a fourth aspect, an embodiment of the present application provides a shooting processing device, applied to a second device, where the second device is communicatively connected with a first device and the first device can display a virtual display interface. The device includes:

an image capture module, configured to capture an original image; and

an image sending module, configured to send the original image to the first device, so as to display the original image and a filter image on the virtual display interface of the first device, wherein the filter image is an image obtained after the original image is processed with a filter.

In a fifth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores a program or instruction that can be run on the processor, and when the program or instruction is executed by the processor, the steps of the method according to the first aspect, or the steps of the method according to the second aspect, are implemented.

In a sixth aspect, an embodiment of the present application provides a readable storage medium storing a program or instruction, where when the program or instruction is executed by a processor, the steps of the method according to the first aspect, or the steps of the method according to the second aspect, are implemented.

In a seventh aspect, an embodiment of the present application provides a chip, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the steps of the method according to the first aspect, or the steps of the method according to the second aspect.

In an eighth aspect, an embodiment of the present application provides a computer program product, stored in a storage medium, where the program product is executed by at least one processor to implement the steps of the method according to the first aspect, or the steps of the method according to the second aspect.

In the embodiments of the present application, the virtual display interface of the first device simultaneously displays the original image captured by the second device and the filter image obtained after the original image is processed with a filter, so that the user can accurately perceive the image difference of the original image before and after the filter processing and thus determine whether the filter effect of the currently selected filter style meets expectations, which improves the user experience during filter shooting.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of an AR display principle provided by an embodiment of the present application;

FIG. 2 is a flowchart of a shooting processing method applied to a first device provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of a virtual display interface provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of interaction between an AR device and a terminal device provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of another virtual display interface provided by an embodiment of the present application;

FIG. 6 is a flowchart of a shooting processing method applied to a second device provided by an embodiment of the present application;

FIG. 7 is a schematic structural diagram of a shooting processing device applied to a first device provided by an embodiment of the present application;

FIG. 8 is a schematic structural diagram of a shooting processing device applied to a second device provided by an embodiment of the present application;

FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;

FIG. 10 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly described below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.

The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here; moreover, the objects distinguished by "first", "second", and the like are usually of one class and do not limit the number of objects; for example, there may be one or more first objects. In addition, "and/or" in the specification and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.

For ease of understanding, some terms involved in the embodiments of the present application are explained here:

Augmented reality (AR) device: a device that displays images using AR technology. An AR device includes at least a micro-projection system and an optical display. The display scheme of AR technology is shown in FIG. 1: the micro-projection system projects virtual information such as text and images onto an optical element, and the virtual information is then delivered to the human eye by reflection and/or total reflection, while the real scene in the real world can enter the human eye directly through the optical element. This allows the user to see an "overlap" of the virtual and the real, thereby achieving augmented reality. Exemplarily, the AR device may be AR glasses.

Terminal device: a device that has a shooting function and is communicatively connected with the AR device (for example, via a Bluetooth or WIFI connection), for example, a smartphone.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a shooting processing method provided by an embodiment of the present application. The shooting processing method shown in FIG. 2 is applied to a first device, where the first device can display a virtual display interface and the first device is communicatively connected with a second device.

Exemplarily, the first device may be an AR device, and the second device may be a terminal device.

As shown in FIG. 2, the shooting processing method includes the following steps:

Step 101: acquire an original image captured by the second device.

The original image captured by the second device can be understood as: an image that has not undergone filter processing and is captured in real time after the second device enables its shooting function, or an image that has not undergone filter processing and is captured in the current period after the second device enables its shooting function. The current period is the image capture period that includes the current moment, and the second device captures one image in each image capture period; exemplarily, the duration of a single image capture period may be 0.01 seconds, 0.5 seconds, or the like. The second device sends the captured original image to the first device, at which point the first device can acquire the original image captured by the second device.

Step 102: display the original image and a filter image on the virtual display interface.

The filter image is an image obtained after the original image is processed with a filter.

Using the virtual display interface of the first device to simultaneously display the original image captured by the second device and the filter image obtained after the original image is processed with a filter enables the user to accurately perceive the image difference of the original image before and after the filter processing, and thus determine whether the filter effect of the currently selected filter style meets expectations, which improves the user experience during filter shooting.

Compared with the first imaging area of the actual display interface of the second device in the human eye, the second imaging area of the virtual display interface of the first device in the human eye is larger. Therefore, in addition to displaying the original image and the filter image, the virtual display interface can also display the shooting information (for example, filter styles) included in the actual display interface, which avoids the user's gaze switching back and forth between the actual display interface and the virtual display interface and gives the user a better filter shooting experience.

To make it easier for the user to observe the image difference of the original image before and after the filter processing, the original image and the filter image can be placed adjacent to each other on the virtual display interface.

Further, a simultaneous localization and mapping (SLAM) algorithm can be used to implement the display of the original image and the filter image on the virtual display interface, so as to prevent the AR image display region from overlapping the physical image display region and give the user a better filter viewing experience. The AR image display region is the region corresponding to the original image and the filter image on the virtual display interface, and the physical image display region is the region corresponding to the display screen of the second device on the virtual display interface.
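The overlap-avoidance placement described above can be illustrated with a minimal sketch over axis-aligned rectangles. This is not the SLAM pipeline itself, only the final "place the AR panel beside the physical screen region" check; all function names, the candidate order, and the 20-unit margin are illustrative assumptions.

```python
def overlaps(a, b):
    """Axis-aligned rectangles as (x, y, w, h); True if they intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_beside(screen_rect, panel_size, view_size, margin=20):
    """Try candidate positions (right, left, above, below the physical
    screen region) and return the first one that fits inside the virtual
    display interface without overlapping the screen region."""
    sx, sy, sw, sh = screen_rect
    pw, ph = panel_size
    vw, vh = view_size
    candidates = [
        (sx + sw + margin, sy),        # right of the screen region
        (sx - margin - pw, sy),        # left of the screen region
        (sx, sy - margin - ph),        # above it
        (sx, sy + sh + margin),        # below it
    ]
    for px, py in candidates:
        rect = (px, py, pw, ph)
        inside = px >= 0 and py >= 0 and px + pw <= vw and py + ph <= vh
        if inside and not overlaps(rect, screen_rect):
            return rect
    return None  # no non-overlapping placement found
```

In a real system the screen region would come from the SLAM tracker each frame; the geometry check itself stays the same.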
In some implementations, the filter style adopted by the filter image may be a default filter style predefined by the second device or the first device, or a filter style selected by the user by operating the first device or the second device.

Optionally, displaying the original image and the filter image on the virtual display interface includes:

displaying at least two filter images and the original image on the virtual display interface, wherein each filter image adopts a different filter style.

After the user enables the filter shooting mode of the second device, the second device can switch through the multiple filter styles it supports in sequence, and apply the currently switched filter style to the original image to generate a filter image corresponding to that filter style.

The second device can transmit the multiple filter images corresponding one-to-one to its multiple supported filter styles to the first device one by one; it can also transmit them in a single package; or it can transmit them in segments.

Exemplarily, if the number of filter images corresponding one-to-one to the multiple filter styles supported by the second device is 4, then to complete the transmission of the 4 filter images, the one-by-one transmission mode requires 4 transmission operations, the packaged transmission mode requires 1 transmission operation, and the segmented transmission mode requires 2 transmission operations (assuming that the number of filter images that can be transmitted per segment is 2).
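The operation counts of the three transmission modes in the example above can be sketched as follows. The mode names and helper functions are illustrative assumptions; the actual wireless transfer over Bluetooth/WIFI is not modeled.

```python
import math

def transfer_ops(num_images, mode, per_segment=1):
    """Number of transmission operations each mode needs."""
    if mode == "one_by_one":
        return num_images                         # one operation per filter image
    if mode == "packed":
        return 1                                  # all images in a single package
    if mode == "segmented":
        return math.ceil(num_images / per_segment)
    raise ValueError(f"unknown mode: {mode}")

def segments(images, per_segment):
    """Split the filter images into the chunks a segmented transfer would send."""
    return [images[i:i + per_segment] for i in range(0, len(images), per_segment)]
```

With 4 filter images and 2 per segment this reproduces the 4/1/2 operation counts from the text.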
When the storage space of the first device is limited, the number of the aforementioned at least two filter images can be set to be smaller than the total number of filter styles supported by the second device, so as to reduce the storage space that the filter images occupy on the first device; in this case, the second device transmits the filter images to the first device one by one or in segments.

When the storage space of the first device is sufficient, the number of the aforementioned at least two filter images can be set to be equal to the total number of filter styles supported by the second device, so as to reduce the transmission frequency of the filter images; in this case, the second device transmits the filter images to the first device in a single package.

When the area of the virtual display interface is ample, the number of filter images displayed on the virtual display interface equals N; when the area of the virtual display interface is limited, the number of filter images displayed on the virtual display interface is less than N.

It should be noted that the original image is preferably placed in the middle of the at least two filter images, so that the user can conveniently compare the image differences between the original image and the at least two filter images. As shown in FIG. 3, the image numbered 1 in FIG. 3 is the original image, and the images numbered 2-9 in FIG. 3 are the at least two filter images.

As described above, displaying at least two filter images on the virtual display interface can improve the efficiency with which the user compares different filter styles, simplify the user's filter style selection process, improve the user's filter style selection results, and further optimize the imaging effect of the filter image captured with the filter.
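The FIG. 3 arrangement, with the original image in the middle of the filter images, can be sketched as a small layout helper. The function name and the near-square-grid heuristic are illustrative assumptions rather than the patented layout.

```python
import math

def layout_with_centered_original(original, filter_images):
    """Arrange images row-major on a near-square grid, then swap the
    original into the central cell (cf. FIG. 3, where image 1 sits in
    the middle of images 2-9)."""
    items = [original] + list(filter_images)
    n = len(items)
    cols = math.ceil(math.sqrt(n))
    rows = math.ceil(n / cols)
    center = (rows // 2) * cols + cols // 2   # row-major index of the middle cell
    items[0], items[center] = items[center], items[0]
    return [items[r * cols:(r + 1) * cols] for r in range(rows)]
```

With one original plus eight filter images this yields a 3x3 grid whose middle cell holds the original, matching the figure.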
Optionally, before displaying the original image and the filter image on the virtual display interface, the shooting processing method further includes:

displaying at least two filter styles on the virtual display interface;

receiving a first input from a user;

determining, in response to the first input, a first target filter style from among the at least two filter styles; and

sending the first target filter style to the second device, so that the second device performs filter processing on the original image according to the first target filter style.

Since the area of the second imaging area of the virtual display interface in the human eye is significantly larger than the first imaging area of the actual display interface of the second device in the human eye, displaying the at least two filter styles on the virtual display interface makes it easier for the user to observe and compare the at least two filter styles, which further improves the user's shooting experience.

Exemplarily, the user can perform the first input on the first device by means of a touch operation, a gesture operation, a voice command, or the like.

It should be noted that, in application, the number of the at least two filter styles displayed on the virtual display interface is less than or equal to the total number of filter styles supported by the second device, so as to fit the spatial layout requirements of the virtual display interface, which improves the applicability of the method of the present application in complex scenarios. For example, when the total number of filter styles supported by the second device is 10, the number of the at least two filter styles displayed on the virtual display interface may be 3, 5, 7, or the like.
Optionally, displaying at least two filter styles on the virtual display interface includes:

displaying a first filter style in a first display region of the virtual display interface, and displaying a second filter style in a second display region of the virtual display interface.

Receiving the first input from the user includes:

receiving a first input performed by the user through a target operation object.

Determining the first target filter style from among the at least two filter styles includes:

detecting a first position of the target operation object on the virtual display interface; and

when the dwell time of the first position in the first display region is greater than or equal to a preset first threshold, determining the first filter style as the first target filter style.

As described above, the virtual display interface is divided into multiple display regions, and each display region is set to uniquely correspond to one filter style, so as to support perception of the user's operation intent; the first filter style may be any one of the at least two filter styles.

The user's operation intent is perceived by detecting the dwell time of the target operation object in the first display region, and when the dwell time of the first position in the first display region is greater than or equal to the preset first threshold, the first filter style is determined as the first target filter style. That is, when the dwell time of the target operation object in a certain display region is greater than or equal to the first threshold, the filter style corresponding to that display region is determined as the first target filter style.

Completing the filter style selection through interaction with the virtual display interface, which has a larger display range, facilitates the user's filter style selection operation, avoids frequent switching of the user's gaze between the virtual display interface and the actual display interface, and improves the user's filter shooting experience.

The target operation object is an object perceived by the first device; by perceiving and recognizing the target operation object, the first device can determine the user's operation intent. For example, when the user controls the first device with gestures, the target operation object may be the user's finger; gesture control requires no intermediate piece adapted to the sensors of the first device, and therefore makes the user's filter shooting operations more flexible. When the user controls the first device with an intermediate piece (for example, a stylus associated with the sensors of the first device), the target operation object may be the part of the intermediate piece that can be recognized by the sensors of the first device; using such an intermediate piece can improve the accuracy with which the first device perceives the target operation object, thereby reducing the probability that the first device misrecognizes the user's operation intent.

As shown in FIG. 4, 401 in FIG. 4 can be understood as the first device, 402 as the second device, 403 as the wireless connection between the first device and the second device, and 407 as the virtual display interface, which includes a filter style option interface 404, an unfiltered original image 405, and a filter image 406 obtained by filtering the original image.

The target operation object in FIG. 4 is the user's finger, and the filter style option interface 404 in FIG. 4 includes 6 filter styles, each corresponding to one display region. As shown in FIG. 4, the user's finger currently stays in the display region corresponding to filter 5; as described above, if the dwell time of the user's finger in the display region corresponding to filter 5 is greater than or equal to the first threshold, the first device determines the filter style indicated by filter 5 as the first target filter style. After the second device generates the filter image corresponding to filter 5 based on the first target filter style, the first device receives the filter image corresponding to filter 5 transmitted by the second device, and then displays it on the virtual display interface.

The aforementioned first threshold can be chosen adaptively according to actual needs, for example, 1 second; the present application does not limit the specific value of the first threshold.
Optionally, after detecting the first position of the target operation object on the virtual display interface, the shooting processing method further includes:

when the dwell time of the first position in the first display region is greater than or equal to a preset second threshold, determining the first filter style as a second target filter style, where the duration of the second threshold is greater than the duration of the first threshold; and

sending the second target filter style to the second device, so that the second device performs an image shooting operation according to the second target filter style.

As described above, on the basis of perceiving the user's filter selection intent, a second threshold with a longer duration is set to perceive the user's filter shooting intent. This allows the user to control the second device without touching its screen, which avoids the shake interference caused by the user's touch operations on the second device, improves the imaging quality of the original image currently captured by the second device, and thus improves the imaging quality of the image obtained by filtering the currently captured original image with the second target filter style. It also ensures that, before performing the filter shooting operation, the user can first perceive the filter processing effect of the second target filter style selected for filter shooting, which reduces misoperation problems (for example, the first device incorrectly perceiving the dwell position of the target operation object, or external interference causing the dwell position of the target operation object to deviate from the position the user intended) and avoids situations where the second target filter style selected by the user does not match the expected filter effect, thereby further improving the user's shooting experience.
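The two-threshold dwell scheme, in which a shorter dwell selects a filter style for preview and a longer dwell in the same region triggers shooting, can be sketched as a small state tracker. The class and field names, and the idea of feeding it tracked fingertip positions with timestamps, are illustrative assumptions.

```python
class DwellSelector:
    """Tracks which display region the operation object (e.g. a fingertip)
    is hovering in, emitting 'select' after t_select seconds of dwell and
    'shoot' after the longer t_shoot seconds."""

    def __init__(self, regions, t_select=1.0, t_shoot=2.0):
        self.regions = regions          # {name: (x, y, w, h)}
        self.t_select, self.t_shoot = t_select, t_shoot
        self.current = None             # region currently hovered
        self.entered_at = None          # timestamp when it was entered
        self.selected = set()           # regions already reported as 'select'

    def _hit(self, x, y):
        for name, (rx, ry, rw, rh) in self.regions.items():
            if rx <= x < rx + rw and ry <= y < ry + rh:
                return name
        return None

    def update(self, x, y, now):
        """Feed one tracked position; returns ('select'|'shoot', region) or None.
        A real implementation would reset after 'shoot' fires."""
        region = self._hit(x, y)
        if region != self.current:      # entered a new region (or left all)
            self.current, self.entered_at = region, now
            return None
        if region is None:
            return None
        dwell = now - self.entered_at
        if dwell >= self.t_shoot:
            return ("shoot", region)
        if dwell >= self.t_select and region not in self.selected:
            self.selected.add(region)
            return ("select", region)
        return None
```

With the 1-second and 2-second thresholds from the examples, staying over "filter 5" first previews that style and, if the finger keeps dwelling, then triggers the shot.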
Optionally, after displaying the at least two filter images and the original image on the virtual display interface, the shooting processing method further includes:

receiving a second input from the user;

determining, in response to the second input, a target filter image from among the at least two filter images; and

sending a third target filter style adopted by the target filter image to the second device, so that the second device performs an image shooting operation according to the third target filter style.

Displaying at least two filter images on the virtual display interface means that the user does not need to perform filter switching operations. The first device, in response to the user's second input, determines the target filter image from among the at least two filter images and sends the third target filter style adopted by the target filter image to the second device, thereby completing the user's filter shooting on the second device, which further improves the efficiency with which the user controls the second device for filter shooting.

Optionally, displaying the at least two filter images on the virtual display interface includes:

displaying a first filter image in a third display region of the virtual display interface, and displaying a second filter image in a fourth display region of the virtual display interface.

Receiving the second input from the user includes:

receiving a second input performed by the user through a target operation object.

Determining the target filter image from among the at least two filter images includes:

detecting a second position of the target operation object on the virtual display interface; and

when the dwell time of the second position in the third display region is greater than or equal to a preset third threshold, determining the first filter image as the target filter image.

As described above, the virtual display interface is divided into multiple display regions, and each display region is set to uniquely correspond to one filter image, so as to support perception of the user's operation intent; the first filter image may be any one of the at least two filter images.

The user's operation intent is perceived by detecting the dwell time of the target operation object in the third display region, and when the dwell time of the second position in the third display region is greater than or equal to the preset third threshold, the first filter image is determined as the target filter image. That is, when the dwell time of the target operation object in a certain display region is greater than or equal to the third threshold, the filter image corresponding to that display region is determined as the target filter image.

After the target filter image is determined, the third target filter style adopted by the target filter image is sent to the second device, allowing the user to control the second device without touching its screen, which avoids the shake interference caused by the user's touch operations on the second device, improves the imaging quality of the original image currently captured by the second device, and thus improves the imaging quality of the image obtained by filtering the currently captured original image with the third target filter style.

Exemplarily, as shown in FIG. 5, the image numbered 1 in FIG. 5 is the original image, and the images numbered 2-9 in FIG. 5 are the at least two filter images. The target operation object in FIG. 5 is the user's finger, and its current dwell position lies in the display region corresponding to the filter image numbered 2; if the dwell time of the target operation object is greater than or equal to the third threshold, the first device determines the filter image numbered 2 as the target filter image.

The third threshold can be chosen adaptively according to actual needs, for example, 1 second, 2 seconds, or the like.
Optionally, the shooting processing method further includes:

acquiring user voice information;

recognizing the user voice information to obtain a fourth target filter style; and

sending the fourth target filter style to the second device, so that the second device performs an image shooting operation and/or an image filter operation based on the fourth target filter style.

As described above, by acquiring and recognizing user voice information, the user's filter shooting intent or filter selection intent is perceived through speech recognition, which enhances the flexibility of the user's operations, further improves the applicability of the method of the present application in complex scenarios (such as a scenario where the user holds the second device with both hands), and improves the user's shooting experience.

The aforementioned user voice information may contain voice keywords predefined by the first device. For example, the user voice information may be "shoot with filter 1"; in this voice information, the voice keywords are "shoot" and "filter 1", and based on this voice information the second device performs an image shooting operation based on filter style 1.
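The keyword-based interpretation of a recognized transcript such as "shoot with filter 1" can be sketched as follows; the keyword lists and the returned action names are illustrative assumptions, not the device's actual vocabulary, and real speech recognition is assumed to have already produced the transcript.

```python
def parse_voice_command(text, known_filters):
    """Split a recognized transcript into (target filter style, actions).
    Matches the first known filter-style name found in the transcript,
    then scans for predefined action keywords."""
    target = next((name for name in known_filters if name in text), None)
    actions = set()
    if any(k in text for k in ("拍摄", "shoot", "capture")):
        actions.add("shoot")
    if any(k in text for k in ("预览", "preview", "apply")):
        actions.add("apply_filter")
    return target, actions
```

The first device would forward the resulting filter style (the "fourth target filter style" above) and action to the second device.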
In one example:

The first device and the second device establish a connection over a wireless network (such as WIFI or Bluetooth). The user enables the photographing function of the second device, and the physical scene currently captured by the camera of the second device (that is, the currently captured original image) is transmitted to the first device over the wireless network and stored in the memory of the first device.

After the user enables the filter mode on the second device, the second device synchronizes the option information of the multiple filter styles it supports to the first device over the wireless network, and this information is stored in the memory of the first device.

After applying the SLAM algorithm to process the physical scene currently captured by the camera of the second device and the option information of the multiple filter styles, the first device can, as shown in FIG. 4, display the filter style option interface 404 and the unfiltered original image 405 on the virtual display interface.

When the first device recognizes that the user has moved a finger to a style display region of the filter style option interface 404 and stayed in that region for 1 second or longer, the first device judges that the user has selected the filter style corresponding to that style display region. The first device then determines the filter style corresponding to that style display region as the first target filter style and sends it to the second device, so that the second device performs filter processing on the currently captured original image based on the first target filter style, thereby obtaining a filter image corresponding to the first target filter style.

The second device synchronizes this filter image to the first device, and the first device displays it on the virtual display interface using SLAM technology. At this point, the content of the virtual display interface can be as shown in FIG. 4, including the filter style option interface 404, the unfiltered original image 405, and the filter image 406 obtained after the original image is processed with the filter.

With both the original image and the filter image displayed on the virtual display interface, the user can determine the filter effect of the filter style corresponding to the currently displayed filter image by comparing the image differences between the original image and the filter image. Once the user settles on a filter style, the user can move a finger to the style display region corresponding to that filter style and stay for 2 seconds, and the first device will control the second device over the wireless network to perform an image shooting operation based on the second target filter style determined by the user.

It should be noted that, in this example, if the user decides to perform the image shooting operation with filter style 1 while the filter image currently displayed on the virtual display interface adopts filter style 2, the first device first determines filter style 1 as the first target filter style and then determines filter style 1 as the second target filter style; that is, before the second device performs the image shooting operation according to filter style 1, the second device first performs the filter processing operation according to filter style 1.

In another example:

The first device and the second device establish a connection over a wireless network, and the user enables the photographing function and selects the filter mode.

The second device switches through different filter styles in sequence based on the currently captured original image and transmits the different filter images produced by the different filter styles to the first device over the wireless network, and the first device stores the different filter images in its memory.

The first device displays the aforementioned different filter images and the original image on the virtual display interface based on the SLAM algorithm. The content of the virtual display interface can be as shown in FIG. 5, including the images numbered 1-9, where the image numbered 1 can be understood as the original image and the images numbered 2-9 as the different filter images.

With the original image and the different filter images all displayed on the virtual display interface, the user can determine the filter effect of the filter style corresponding to each currently displayed filter image by comparing the image differences between the original image and the filter images, as well as the image differences among the different filter images. Once the user selects a filter style, the user can move a finger to the image display region corresponding to the selected filter style and stay for 1 second, and the first device will control the second device over the wireless network to perform an image shooting operation based on the filter style selected by the user.
Referring to FIG. 6, FIG. 6 is a schematic flowchart of another shooting processing method provided by an embodiment of the present application. The shooting processing method shown in FIG. 6 is applied to a second device, where the second device is communicatively connected with a first device and the first device can display a virtual display interface. As shown in FIG. 6, the shooting processing method includes the following steps:

Step 601: capture an original image.

Step 602: send the original image to the first device, so as to display the original image and a filter image on the virtual display interface of the first device.

The filter image is an image obtained after the original image is processed with a filter.

In the embodiments of the present application, the virtual display interface of the first device simultaneously displays the original image captured by the second device and the filter image obtained after the original image is processed with a filter, so that the user can determine the filter effect by comparing the image difference of the original image before and after the filter processing, thereby optimizing the imaging effect of the filter image captured with the filter.

In addition, compared with the first imaging area of the actual display interface of the second device in the human eye, the second imaging area of the virtual display interface in the human eye is larger. Therefore, simultaneously displaying the original image and the filter image via the first device can also expand the imaging area in the human eye of the shooting information associated with the second device (such as the original image, the filter image, and the filter styles), giving the user a better shooting experience.

Optionally, after capturing the original image, the shooting processing method further includes:

sending at least two filter styles to the first device, so that the first device displays the at least two filter styles on the virtual display interface;

receiving a first target filter style determined by the first device from among the at least two filter styles; and

performing filter processing on the original image according to the first target filter style to obtain the filter image.

Optionally, after sending the at least two filter styles to the first device, the shooting processing method further includes:

receiving a second target filter style determined by the first device from among the at least two filter styles; and

performing an image shooting operation according to the second target filter style.

Optionally, after capturing the original image, the shooting processing method further includes:

performing filter processing on the original image to obtain at least two filter images, wherein each filter image adopts a different filter style; and

sending the at least two filter images to the first device, so that the first device displays the original image and the at least two filter images on the virtual display interface.

Optionally, after sending the at least two filter images to the first device, the shooting processing method further includes:

receiving a third target filter style sent by the first device, wherein the third target filter style is the filter style adopted by a target filter image determined by the first device from among the at least two filter images based on a second input of the user; and

performing an image shooting operation according to the third target filter style.

Optionally, after capturing the original image, the shooting processing method further includes:

receiving a fourth target filter style obtained by the first device by recognizing user voice information; and

performing an image shooting operation and/or an image filter operation according to the fourth target filter style.
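The second device's side of this flow, producing one filter image per supported style from the original image before sending them to the first device, can be sketched with toy per-pixel filters on a grayscale image. The three style names stand in for real filter styles and are purely illustrative.

```python
def apply_style(pixels, style):
    """Apply a toy per-pixel filter to a grayscale image (list of rows of 0-255 values)."""
    ops = {
        "brighten": lambda v: min(255, v + 40),
        "darken":   lambda v: max(0, v - 40),
        "invert":   lambda v: 255 - v,
    }
    op = ops[style]
    return [[op(v) for v in row] for row in pixels]

def make_filter_images(original, styles):
    """One filter image per style, as the second device would generate
    before transmitting them (packed or in segments) to the first device."""
    return {style: apply_style(original, style) for style in styles}
```

A real implementation would run the device's filter pipeline on camera frames; only the one-image-per-style fan-out is illustrated here.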
The shooting processing method applied to the first device provided in the embodiments of the present application may be executed by a shooting processing device applied to the first device. In the embodiments of the present application, the shooting processing device applied to the first device is described by taking, as an example, the shooting processing device applied to the first device executing the shooting processing method applied to the first device.

As shown in FIG. 7, a shooting processing device 700 is applied to a first device, where the first device can display a virtual display interface and the first device is communicatively connected with a second device. The shooting processing device 700 includes:

an image acquisition module 701, configured to acquire an original image captured by the second device; and

an image display module 702, configured to display the original image and a filter image on the virtual display interface, wherein the filter image is an image obtained after the original image is processed with a filter.

Optionally, the device 700 further includes:

a style display module, configured to display at least two filter styles on the virtual display interface;

a first receiving module, configured to receive a first input from a user;

a first style determination module, configured to determine, in response to the first input, a first target filter style from among the at least two filter styles; and

a first sending module, configured to send the first target filter style to the second device, so that the second device performs filter processing on the original image according to the first target filter style.

Optionally, the image display module 702 includes:

a first display unit, configured to display a first filter style in a first display region of the virtual display interface and display a second filter style in a second display region of the virtual display interface.

The first receiving module is specifically configured to:

receive a first input performed by the user through a target operation object.

The first style determination module is specifically configured to:

detect a first position of the target operation object on the virtual display interface; and

when the dwell time of the first position in the first display region is greater than or equal to a preset first threshold, determine the first filter style as the first target filter style.

Optionally, the device 700 further includes:

a second style determination module, configured to determine the first filter style as a second target filter style when the dwell time of the first position in the first display region is greater than or equal to a preset second threshold, where the duration of the second threshold is greater than the duration of the first threshold; and

a second sending module, configured to send the second target filter style to the second device, so that the second device performs an image shooting operation according to the second target filter style.

Optionally, the image display module 702 includes:

a second display unit, configured to display at least two filter images and the original image on the virtual display interface, wherein each filter image adopts a different filter style.

Optionally, the device 700 further includes:

a second receiving module, configured to receive a second input from the user;

a filter image determination module, configured to determine, in response to the second input, a target filter image from among the at least two filter images; and

a third sending module, configured to send a third target filter style adopted by the target filter image to the second device, so that the second device performs an image shooting operation according to the third target filter style.

Optionally, the second display unit is specifically configured to:

display a first filter image in a third display region of the virtual display interface, and display a second filter image in a fourth display region of the virtual display interface.

The second receiving module is specifically configured to:

receive a second input performed by the user through a target operation object.

The filter image determination module is specifically configured to:

detect a second position of the target operation object on the virtual display interface; and

when the dwell time of the second position in the third display region is greater than or equal to a preset third threshold, determine the first filter image as the target filter image.

Optionally, the device 700 further includes:

a voice acquisition module, configured to acquire user voice information;

a voice recognition module, configured to recognize the user voice information to obtain a fourth target filter style; and

a fourth sending module, configured to send the fourth target filter style to the second device, so that the second device performs an image shooting operation and/or an image filter operation based on the fourth target filter style.

In the embodiments of the present application, the virtual display interface of the first device simultaneously displays the original image captured by the second device and the filter image obtained after the original image is processed with a filter, so that the user can determine the filter effect by comparing the image difference of the original image before and after the filter processing, thereby optimizing the imaging effect of the filter image captured with the filter.

In addition, compared with the first imaging area of the actual display interface of the terminal device in the human eye, the second imaging area of the virtual display interface in the human eye is larger. Therefore, simultaneously displaying the original image and the filter image via the AR device can also expand the imaging area in the human eye of the shooting information associated with the terminal device (such as the original image, the filter image, and the filter styles), giving the user a better shooting experience.

The shooting processing device applied to the first device in the embodiments of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. Exemplarily, the electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); it may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.

The shooting processing device applied to the first device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.

The shooting processing device applied to the first device provided in the embodiments of the present application can implement each process implemented by the method embodiments of FIG. 2 to FIG. 5; to avoid repetition, details are not repeated here.
本申请实施例提供的应用于第二设备的拍摄处理方法,执行主体可以为应用于第二设备的拍摄处理装置。本申请实施例中以应用于第二设备的拍摄处理装置执行应用于第二设备的拍摄处理方法为例,说明本申请实施例提供的应用于第二设备的拍摄处理装置。
如图8所示,一种拍摄处理装置800,应用于第二设备,所述第二设备与第一设备通信连接,所述第一设备可展示虚拟显示界面,所述拍摄处理装置800包括:
图像采集模块801,用于采集原始图像;
图像发送模块802,将所述原始图像发送至所述第一设备,以在所述第一设备的所述虚拟显示界面展示所述原始图像和滤镜图像,其中,所述滤镜图像为所述原始图像经过滤镜处理后的图像。
可选的,所述装置800还包括:
样式发送模块,用于将至少两个滤镜样式发送至所述第一设备,以使所述第一设备在所述虚拟显示界面上展示所述至少两个滤镜样式;
第一接收模块,用于接收所述第一设备在所述至少两个滤镜样式中确定的第一目标滤镜样式;
第一滤镜处理模块,用于根据所述第一目标滤镜样式对所述原始图像进行滤镜处理,得到所述滤镜图像。
可选的,所述装置800还包括:
第二接收模块,用于接收所述第一设备在所述至少两个滤镜样式中确定的第二目标滤镜样式;
第一拍摄模块,用于根据所述第二目标滤镜样式执行图像拍摄操作。
可选的,所述装置800还包括:
第二滤镜处理模块,用于对所述原始图像进行滤镜处理,得到至少两个所述滤镜图像,其中,各个所述滤镜图像采用的滤镜样式不同;
图像传输模块,用于将所述至少两个滤镜图像发送至所述第一设备,以使所述第一设备在所述虚拟显示界面展示所述原始图像和至少两个所述滤镜图像。
可选的,所述装置800还包括:
第三接收模块,用于接收所述第一设备发送的第三目标滤镜样式,其中,所述第三目标滤镜样式为所述第一设备基于用户的第二输入在至少两个所述滤镜图像中确定的目标滤镜图像采用的滤镜样式;
第二拍摄模块,用于根据所述第三目标滤镜样式执行图像拍摄操作。
Optionally, the apparatus 800 further includes:
a fourth receiving module, configured to receive a fourth target filter style obtained by the first device through recognizing the user's speech information;
an image processing module, configured to perform an image capture operation and/or an image filter operation according to the fourth target filter style.
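Taken together, the second-device modules above amount to one round of exchange with the first device: push the original image, wait for the target filter style the first device selected, then shoot with that style. A minimal single-round sketch, using in-process queues in place of the unspecified communication connection and with illustrative message shapes:

```python
from queue import Queue

def second_device_loop(to_first: Queue, from_first: Queue, capture):
    """One round of the first-device/second-device exchange sketched above.
    `capture` is a callable standing in for the image capture module.
    Message field names are assumptions, not part of the application."""
    # Step 1: send the captured original image for display on the
    # first device's virtual display interface.
    to_first.put({"kind": "original", "image": capture()})
    # Step 2: wait for the target filter style chosen on the first device.
    reply = from_first.get()
    if reply.get("kind") == "target_style":
        # Step 3: perform the image capture operation with that style.
        return {"kind": "photo", "style": reply["style"]}
    return None
```

A real implementation would replace the queues with the device-to-device link and loop continuously to keep the preview updated; the sketch only fixes the ordering of the three steps.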
In the embodiments of this application, the virtual display interface of the first device is used to simultaneously display the original image captured by the second device and the filter image obtained by applying filter processing to that original image, so that the user can judge the filter effect by comparing the image before and after filter processing, thereby improving the imaging result of filter images shot with the filter.
In addition, compared with the first imaging region that the terminal device's physical display interface occupies in the human eye, the second imaging region occupied by the virtual display interface is larger. Therefore, displaying the original image and the filter image simultaneously through the AR device also enlarges the imaging region occupied in the human eye by the shooting information of the associated terminal device (such as the original image, the filter images, and the filter styles), giving the user a better shooting experience.
The shooting processing apparatus applied to the second device in the embodiments of this application may be an electronic device, or a component of an electronic device such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, an in-vehicle electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a server, network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of this application impose no specific limitation.
The shooting processing apparatus applied to the second device in the embodiments of this application may be an apparatus having an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
The shooting processing apparatus 800 applied to the second device provided in the embodiments of this application can implement each process implemented by the method embodiment of FIG. 6; to avoid repetition, details are not described here again.
Optionally, as shown in FIG. 9, an embodiment of this application further provides an electronic device 900, including a processor 901 and a memory 902. The memory 902 stores a program or instructions that can run on the processor 901. When executed by the processor 901, the program or instructions implement each step of the above shooting processing method embodiment applied to the first device, or implement each step of the above shooting processing method embodiment applied to the second device, and can achieve the same technical effects; to avoid repetition, details are not described here again.
It should be noted that the electronic devices in the embodiments of this application include the mobile and non-mobile electronic devices described above.
FIG. 10 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of this application.
The electronic device 100 includes, but is not limited to, components such as a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will understand that the electronic device 100 may further include a power supply (such as a battery) that powers the components. The power supply may be logically connected to the processor 110 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented by the power management system. The structure of the electronic device shown in FIG. 10 does not limit the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange components differently, which is not described here again.
It should be understood that, in the embodiments of this application, the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also called a touch screen. The touch panel 1071 may include two parts: a touch detection apparatus and a touch controller. The other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described here again.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store an operating system and application programs or instructions required by at least one function (such as a sound playback function and an image playback function). In addition, the memory 109 may include volatile memory or non-volatile memory, or the memory 109 may include both volatile and non-volatile memory. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synch-link DRAM (SLDRAM), or direct Rambus RAM (DRRAM). The memory 109 in the embodiments of this application includes, but is not limited to, these and any other suitable types of memory.
The processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, and application programs, and the modem processor, such as a baseband processor, mainly handles wireless communication signals. It can be understood that the modem processor may also not be integrated into the processor 110.
An embodiment of this application further provides a readable storage medium storing a program or instructions. When executed by a processor, the program or instructions implement each process of the above method embodiment applied to the first device, or implement each process of the above method embodiment applied to the second device, and can achieve the same technical effects; to avoid repetition, details are not described here again.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of this application further provides a chip, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the above method embodiment applied to the first device, or to implement each process of the above method embodiment applied to the second device, and can achieve the same technical effects; to avoid repetition, details are not described here again.
It should be understood that the chip mentioned in the embodiments of this application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
An embodiment of this application provides a computer program product stored in a storage medium. The program product is executed by at least one processor to implement each process of the above method embodiment applied to the first device, or to implement each process of the above method embodiment applied to the second device, and can achieve the same technical effects; to avoid repetition, details are not described here again.
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the embodiments of this application is not limited to performing functions in the order shown or discussed, and may also include performing functions in a substantially simultaneous manner or in the reverse order according to the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, though in many cases the former is the better implementation. Based on such an understanding, the technical solutions of this application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of this application.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the specific implementations described above. The specific implementations described above are merely illustrative rather than restrictive. Inspired by this application, those of ordinary skill in the art can devise many other forms without departing from the spirit of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (12)

  1. A shooting processing method, applied to a first device, wherein the first device can present a virtual display interface and the first device is communicatively connected to a second device; the shooting processing method comprises:
    acquiring an original image captured by the second device;
    displaying the original image and a filter image on the virtual display interface, wherein the filter image is an image obtained by applying filter processing to the original image.
  2. The method according to claim 1, wherein before displaying the original image and the filter image on the virtual display interface, the shooting processing method further comprises:
    displaying at least two filter styles on the virtual display interface;
    receiving a first input of a user;
    in response to the first input, determining a first target filter style from the at least two filter styles;
    sending the first target filter style to the second device, so that the second device applies filter processing to the original image according to the first target filter style.
  3. The method according to claim 2, wherein the displaying at least two filter styles on the virtual display interface comprises:
    displaying a first filter style in a first display region of the virtual display interface, and displaying a second filter style in a second display region of the virtual display interface;
    the receiving a first input of a user comprises:
    receiving the user's first input performed through a target operation object;
    the determining a first target filter style from the at least two filter styles comprises:
    detecting a first position of the target operation object on the virtual display interface;
    when the dwell time of the first position within the first display region is greater than or equal to a preset first threshold, determining the first filter style as the first target filter style.
  4. The method according to claim 3, wherein after the detecting a first position of the target operation object on the virtual display interface, the shooting processing method further comprises:
    when the dwell time of the first position within the first display region is greater than or equal to a preset second threshold, determining the first filter style as a second target filter style, wherein the second threshold is longer than the first threshold;
    sending the second target filter style to the second device, so that the second device performs an image capture operation according to the second target filter style.
  5. The method according to claim 1, wherein the displaying the original image and the filter image on the virtual display interface comprises:
    displaying at least two filter images and the original image on the virtual display interface, wherein each filter image uses a different filter style.
  6. The method according to claim 5, wherein after the displaying at least two filter images and the original image on the virtual display interface, the shooting processing method further comprises:
    receiving a second input of the user;
    in response to the second input, determining a target filter image from the at least two filter images;
    sending a third target filter style used by the target filter image to the second device, so that the second device performs an image capture operation according to the third target filter style.
  7. The method according to claim 6, wherein the displaying at least two filter images on the virtual display interface comprises:
    displaying a first filter image in a third display region of the virtual display interface, and displaying a second filter image in a fourth display region of the virtual display interface;
    the receiving a second input of the user comprises:
    receiving the user's second input performed through a target operation object;
    the determining a target filter image from the at least two filter images comprises:
    detecting a second position of the target operation object on the virtual display interface;
    when the dwell time of the second position within the third display region is greater than or equal to a preset third threshold, determining the first filter image as the target filter image.
  8. A shooting processing method, applied to a second device, wherein the second device is communicatively connected to a first device and the first device can present a virtual display interface; the shooting processing method comprises:
    capturing an original image;
    sending the original image to the first device, so that the original image and a filter image are displayed on the virtual display interface of the first device, wherein the filter image is an image obtained by applying filter processing to the original image.
  9. A shooting processing apparatus, applied to a first device, wherein the first device can present a virtual display interface and the first device is communicatively connected to a second device; the apparatus comprises:
    an image acquisition module, configured to acquire an original image captured by the second device;
    an image display module, configured to display the original image and a filter image on the virtual display interface, wherein the filter image is an image obtained by applying filter processing to the original image.
  10. A shooting processing apparatus, applied to a second device, wherein the second device is communicatively connected to a first device and the first device can present a virtual display interface; the apparatus comprises:
    an image capture module, configured to capture an original image;
    an image sending module, configured to send the original image to the first device, so that the original image and a filter image are displayed on the virtual display interface of the first device, wherein the filter image is an image obtained by applying filter processing to the original image.
  11. An electronic device, comprising a processor and a memory, wherein the memory stores a program or instructions runnable on the processor, and when the program or instructions are executed by the processor, the steps of the shooting processing method according to any one of claims 1 to 7, or the steps of the shooting processing method according to claim 8, are implemented.
  12. A readable storage medium, wherein the readable storage medium stores a program or instructions, and when the program or instructions are executed by a processor, the steps of the shooting processing method according to any one of claims 1 to 7, or the steps of the shooting processing method according to claim 8, are implemented.
PCT/CN2023/139163 2022-12-21 2023-12-15 Shooting processing method and electronic device WO2024131669A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211650154.6A CN116033282A (zh) 2022-12-21 2022-12-21 Shooting processing method and electronic device
CN202211650154.6 2022-12-21

Publications (1)

Publication Number Publication Date
WO2024131669A1 true WO2024131669A1 (zh) 2024-06-27

Family

ID=86075231

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/139163 WO2024131669A1 (zh) 2022-12-21 2023-12-15 Shooting processing method and electronic device

Country Status (2)

Country Link
CN (1) CN116033282A (zh)
WO (1) WO2024131669A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116033282A (zh) * 2022-12-21 2023-04-28 维沃移动通信有限公司 Shooting processing method and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993711A (zh) * 2019-03-25 2019-07-09 维沃移动通信有限公司 Image processing method and terminal device
CN113079316A (zh) * 2021-03-26 2021-07-06 维沃移动通信有限公司 Image processing method, image processing apparatus, and electronic device
CN113194255A (zh) * 2021-04-29 2021-07-30 南京维沃软件技术有限公司 Shooting method, apparatus, and electronic device
CN114302009A (zh) * 2021-12-06 2022-04-08 维沃移动通信有限公司 Video processing method, apparatus, electronic device, and medium
CN116033282A (zh) * 2022-12-21 2023-04-28 维沃移动通信有限公司 Shooting processing method and electronic device

Also Published As

Publication number Publication date
CN116033282A (zh) 2023-04-28

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23905833

Country of ref document: EP

Kind code of ref document: A1