CN116703742A - Method for identifying blurred image and electronic equipment

Method for identifying blurred image and electronic equipment

Info

Publication number
CN116703742A
Authority
CN
China
Prior art keywords
image
electronic device
imaging
camera
displacement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211378141.8A
Other languages
Chinese (zh)
Other versions
CN116703742B (en)
Inventor
刘小伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211378141.8A priority Critical patent/CN116703742B/en
Publication of CN116703742A publication Critical patent/CN116703742A/en
Application granted granted Critical
Publication of CN116703742B publication Critical patent/CN116703742B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a method for identifying a blurred image and an electronic device, and relates to the field of image recognition. The camera of the electronic device captures images at a certain frame rate. For any frame of image, the pixel displacement of the image is obtained by combining the depth of the image, the linear motion and rotation of the electronic device while the image was captured, and the parameters of the camera; whether the image is blurred is then determined from this pixel displacement. Blurring caused by camera movement can thus be identified quickly and accurately.

Description

Method for identifying blurred image and electronic equipment
Technical Field
The present application relates to the field of image recognition, and in particular, to a method and an electronic device for recognizing a blurred image.
Background
With the rapid development of photographing technology, captured images are widely used in various applications. However, for various reasons, a captured image is sometimes not sharp, which affects its use. Image blur has many causes, and how to identify blurred images and then reject them is a problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a method for identifying a blurred image and an electronic device, which can quickly identify a blurred image and then intercept (filter out) that image.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical scheme:
In a first aspect, a method for identifying a blurred image is provided. The method is applied to an electronic device that includes a camera, and comprises the following steps: the camera captures images at a certain frame rate; for any frame of image (the first image), depth information of the first image and an imaging period of the first image are acquired; linear motion information and rotation information of the electronic device during the imaging period of the first image are acquired; a linear pixel displacement of the first image in the imaging plane is obtained from the linear motion information of the electronic device during the imaging period of the first image, the depth information of the first image, and the parameters of the camera; a rotational pixel displacement of the first image is obtained from the rotation information of the electronic device during the imaging period of the first image and the parameters of the camera; a judgment is made based on the linear pixel displacement of the first image in the imaging plane, the rotational pixel displacement of the first image, and so on, and the first image is determined to be a blurred image if at least one of the following judging conditions is satisfied.
the judging condition comprises at least one of the following: the linear pixel displacement of the first image in the imaging plane is larger than a preset first threshold value; the weighted sum of the linear pixel displacement of the first image in the imaging plane and the rotational pixel displacement of the first image is larger than a preset second threshold; the average depth of the pixels in the first image (acquired from the depth information of the first image) is smaller than a preset third threshold.
In the method, a camera of the electronic device acquires images at a certain frame rate. For any frame of image, combining the depth of the image, the linear motion and rotation conditions of the electronic equipment when the image is acquired and the parameters of the camera to acquire the pixel displacement of the image; whether the image is blurred or not is identified according to the pixel displacement of the image. The image blurring caused by the camera movement can be rapidly and accurately identified without complex analysis of the image.
With reference to the first aspect, in one possible implementation manner, an exposure start time and an exposure time of the first image are acquired; the imaging period of the first image can thus be determined from the exposure start time and the exposure time.
With reference to the first aspect, in one possible implementation manner, acquiring depth information of the first image includes: and acquiring depth information according to the depth map or the 3D feature point map of the first image.
With reference to the first aspect, in one possible implementation manner, obtaining a linear pixel displacement of the first image in the imaging plane according to the linear motion information, the depth information, and the parameters of the camera includes: acquiring linear displacement of the electronic equipment in an imaging plane in an imaging time period according to the linear motion information; and acquiring the linear pixel displacement of the first image in the imaging plane according to the linear displacement, the depth information and the parameters of the camera.
That is, linear motion of the electronic device (i.e., linear motion of the camera) causes the pixels of the image captured by the camera to shift along a line. The linear motion of the electronic device can therefore be converted into a linear pixel displacement of the image captured by the camera.
In one implementation, the electronic device includes an accelerometer, and the linear motion information includes an acceleration of a linear motion of the electronic device, and the acceleration of the linear motion of the electronic device in the imaging period may be obtained through the accelerometer, that is, the linear motion information of the electronic device in the imaging period is obtained.
With reference to the first aspect, in a possible implementation manner, obtaining, according to the linear motion information, a linear displacement of the electronic device in an imaging plane during an imaging period includes: calculating a linear displacement of the electronic device in the imaging plane over the imaging period according to the following formula:
δ_p = v_l*t + (1/2)*a*t²

where δ_p represents the linear displacement of the electronic device in the imaging plane during the imaging period of the first image; v_l represents the linear motion speed at the exposure start time of the first image; t represents the imaging period of the first image; a represents the acceleration of the linear motion of the electronic device in the imaging plane during the imaging period of the first image.
With reference to the first aspect, in one possible implementation manner, obtaining the linear pixel displacement of the first image in the imaging plane according to the linear displacement, the depth information, and the parameter of the camera (the focal length of the camera) includes: calculating the linear pixel displacement of the first image in the imaging plane according to the following formula:
δ_pixel-l = f*δ_p/D_a

where δ_pixel-l represents the linear pixel displacement of the first image in the imaging plane; f represents the focal length of the camera; D_a represents the average depth of the pixels in the first image.
The average depth of the pixels in the first image may be an average value of depth values of all pixels in the first image, or an average value of depth values of pixels corresponding to the target object in the first image. The target object may be an object in the first image that occupies a larger proportion of the frame, or may be an object in the first image that has more distinct features.
With reference to the first aspect, in one possible implementation manner, acquiring the rotation pixel displacement of the first image according to the rotation information and the parameters of the camera includes: acquiring rotation displacement of the electronic equipment in an imaging time period according to the rotation information; and acquiring the rotation pixel displacement of the first image according to the rotation displacement and the parameters of the camera.
That is, rotation of the electronic device (i.e., rotation of the camera) causes the image captured by the camera to rotate, which in turn causes the pixels of the image to shift along a line. The rotation of the electronic device can therefore be converted into a pixel displacement of the image captured by the camera.
In one implementation, the electronic device includes a gyroscope, and the rotation information includes an angular velocity of rotation of the electronic device, and the angular velocity of rotation of the electronic device in the first image imaging period may be obtained through the gyroscope, that is, rotation information of the electronic device in the first image imaging period is obtained.
With reference to the first aspect, in one possible implementation manner, acquiring a rotational displacement of the electronic device in an imaging period according to the rotational information includes: calculating a rotational displacement of the electronic device over an imaging period according to the following formula:
δ_θ = ω*t

where δ_θ represents the rotational displacement of the electronic device during the imaging period of the first image; ω represents the rotational angular velocity of the electronic device during the imaging period of the first image; and t represents the imaging period of the first image.
With reference to the first aspect, in one possible implementation manner, acquiring the rotational pixel displacement of the first image according to the rotational displacement and a parameter of the camera (a focal length of the camera) includes: the rotational pixel displacement of the first image is calculated according to the following formula:
δ_pixel-r = δ_θ/d_θ, with d_θ = 2*arctan(width/(2*f))/width

where δ_pixel-r represents the rotational pixel displacement of the first image; δ_θ represents the rotational displacement of the electronic device during the imaging period of the first image; d_θ represents the rotation angle of the electronic device that corresponds to a shift of one pixel in the first image; width represents the width of the first image; and f represents the focal length of the camera.
With reference to the first aspect, in one possible implementation manner, after determining that the first image is a blurred image, sending the first image to the second electronic device (such as a cloud server) is stopped.
After the electronic equipment collects the image, if the image is determined not to be a blurred image, the image is transmitted to a cloud server; if the image is determined to be a blurred image, the image is not transmitted to the cloud server, e.g., the image may be discarded. In this way, the cloud server receives clear images, and the clear images can be used for accurately calculating position information in space positioning.
In a second aspect, an electronic device is provided, which has the functionality to implement the method of the first aspect. The functions can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
In a third aspect, an electronic device is provided, comprising: a processor and a memory; the memory is configured to store computer-executable instructions that, when executed by the electronic device, cause the electronic device to perform the method of identifying blurred images as described in any of the first aspects above.
In a fourth aspect, there is provided an electronic device comprising: a processor; the processor is configured to couple to the memory and to execute the method of identifying blurred images according to any of the first aspects described above in accordance with the instructions after reading the instructions in the memory.
In a fifth aspect, there is provided a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of identifying blurred images of any of the first aspects above.
In a sixth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of identifying blurred images as claimed in any one of the first aspects above.
In a seventh aspect, there is provided an apparatus (e.g. the apparatus may be a system-on-a-chip) comprising a processor for supporting an electronic device to implement the functions referred to in the first aspect above. In one possible design, the apparatus further includes a memory for storing program instructions and data necessary for the electronic device. When the device is a chip system, the device can be formed by a chip, and can also comprise the chip and other discrete devices.
The technical effects of any one of the design manners of the second aspect to the seventh aspect may be referred to the technical effects of the different design manners of the first aspect, and will not be repeated here.
Drawings
Fig. 1 is a schematic view of a scene to which a method for identifying blurred images according to an embodiment of the present application is applicable;
FIG. 2 is an exemplary diagram of an unblurred image;
FIG. 3 is an exemplary diagram of a blurred image;
fig. 4 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for identifying blurred images according to an embodiment of the present application;
fig. 6 is a schematic diagram of an architecture of an electronic device according to an embodiment of the present application;
fig. 7 is a schematic diagram of structural components of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic diagram of a chip system according to an embodiment of the present application.
Detailed Description
In the description of the embodiments of the present application, the terminology used in the embodiments below is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include plural forms such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, or more than one (including two). The term "and/or" is used to describe an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless stated otherwise. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
With the rapid development of photographing technology, images formed by photographing are widely used in various applications. Sometimes, the sharpness of the image greatly affects the implementation effect of the application.
For example, with the rapid development of computer vision and mobile phone photographing technology, augmented reality (augmented reality, AR) applications are increasingly widely used. AR is a technology that fuses virtual information with the real world. It makes broad use of multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, sensing and other technical means: virtual information generated by an electronic device, such as text, images, three-dimensional models, music and video, is simulated and then applied to the real world, and the virtual information and the real world complement each other, thereby "augmenting" the real world. In AR applications, spatial localization technology plays a critical role. Spatial localization is implemented by combining a cloud-side (cloud server) visual positioning service (visual positioning service, VPS) technique with an end-side (electronic device, such as a mobile phone) simultaneous localization and mapping (simultaneous localization and mapping, SLAM) technique. Taking a mobile phone as the end-side electronic device as an example, referring to fig. 1, an AR application on the mobile phone captures an image of the current position through the camera and uploads the image to the cloud server; the cloud server solves for the position using the VPS technique and returns the result to the mobile phone, and the mobile phone fuses it with SLAM for real-time tracking and positioning. If the image captured by the mobile phone camera is blurred, the accuracy of the position solution is greatly affected, and the solution may even fail. By way of example, the image shown in fig. 2 is satisfactory in terms of sharpness (relatively clear), and the boundaries of the objects are clear. The image shown in fig. 3 has unsatisfactory sharpness (it is blurred); the boundaries of the objects are unclear and smeared. If the mobile phone uploads the image shown in fig. 3 to the cloud server for spatial positioning, positioning errors or positioning failures are likely to result.
The embodiment of the application provides a method for identifying a blurred image, which can be used for effectively identifying the blurred image by combining image shooting information (such as exposure time) and information representing the motion condition of electronic equipment; thus, the blurred image can be intercepted, and the application of the blurred image is avoided. For example, in the above example, according to the motion situation of the mobile phone when the camera collects the image, the AR application of the mobile phone recognizes that the image collected by the camera is a blurred image, and discards the blurred image, and does not upload the blurred image to the cloud server for positioning and resolving.
The method for identifying a blurred image provided by the embodiments of the present application can be applied to an electronic device with a camera. The electronic device may include a mobile phone, a tablet computer, a notebook computer, a personal computer (personal computer, PC), an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a handheld computer, a netbook, a smart home device (e.g., a smart TV, a smart screen, a large screen, a smart speaker, a smart air conditioner, etc.), a personal digital assistant (personal digital assistant, PDA), a wearable device (e.g., a smart watch, a smart bracelet, etc.), a vehicle-mounted device, an augmented reality device, a virtual reality device, and the like, which is not limited in any way by the embodiments of the present application. The electronic device can run an operating system and install application programs. Alternatively, the operating system run by the electronic device may be the Android system, the iOS system, the HarmonyOS system, or the like.
For example, please refer to fig. 4, which illustrates a schematic structure of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication (near field communication, NFC), infrared (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
In the embodiment of the present application, the electronic device 100 may communicate with other electronic devices through the mobile communication module 150 or the wireless communication module 160. For example, after the electronic device 100 performs the recognition of the blurred image, the image that is not blurred is sent to the cloud server through the mobile communication module 150 or the wireless communication module 160.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The sensor module 180 includes an inertial measurement unit (inertial measurement unit, IMU) module, or the like. The IMU module may include gyroscopes, accelerometers, and the like. Gyroscopes and accelerometers may be used to gather motion information of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes may be determined by a gyroscope. Accelerometers may be used to capture the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1. The cameras 193 may include, for example, wide-angle cameras (may also be referred to as wide-angle camera modules or wide-angle lenses), telephoto cameras (may also be referred to as telephoto camera modules or telephoto lenses), ultra-wide-angle cameras (may also be referred to as ultra-wide-angle camera modules or ultra-wide-angle lenses), black-and-white cameras (may also be referred to as black-and-white camera modules or black-and-white lenses), macro cameras (may also be referred to as macro camera modules or macro lenses), computer Vision (CV) cameras (may also be referred to as CV camera modules or CV lenses), time of flight (ToF) cameras, and the like.
In some scenarios, the electronic device 100 performs image acquisition in response to user operation. For example, upon receiving an operation of pressing a shutter by a user (for example, pressing a photographing button, or inputting a gesture instruction, or inputting a voice instruction, or the like), the electronic apparatus 100 captures an image by the camera 193 in response to the operation. In some scenarios, an application installed on electronic device 100 initiates camera 193 for image acquisition. For example, the AR application acquires images around the electronic device 100 in real time through a camera, and uploads the acquired images to a cloud server for real-time positioning.
When capturing one frame of image, the camera 193 exposes the photosensitive element to light from the object during the time interval from shutter opening to shutter closing, and the object is thereby imaged. This time interval from shutter opening to shutter closing is called the exposure time. The exposure time used by the camera 193 to capture one frame of image may be preconfigured. In some scenarios, the camera 193 may also automatically adjust the exposure time according to imaging requirements.
In the exposure time corresponding to one frame of image, if the motion amplitude of the electronic equipment exceeds a certain limit, the image acquired by the camera may be blurred. The method for identifying the blurred image can acquire the motion information of the electronic equipment in real time, identify the blurred image by combining the motion information, quickly identify the blurred image and further intercept the blurred image.
Referring to fig. 5, a method for identifying blurred images according to an embodiment of the present application may include:
s401, acquiring depth information of a first image.
The camera of the electronic device may capture images at a certain frame rate. The first image is any frame of image captured by the camera. That is, whenever the electronic device captures a frame of image, it can perform identification on that frame and determine whether the image is blurred.
In some embodiments, the first image acquired by the electronic device includes a depth map. A depth map is an image or image channel, where each pixel value in the depth map represents the distance of an object from the camera plane of the camera. Illustratively, the first image includes an RGB image and a depth map, the RGB image and the depth map being registered, there being a one-to-one correspondence between pixels of the RGB image and pixels of the depth map. The RGB image and the depth map record information of one frame image from different dimensions, and together form one frame image.
Because distance information can be recorded in different ways, the depth map may also be referred to as: range image, dense depth map, depth image, range picture, 3D image, surface height map, dense range image, depth aspect image, 2.5D image, 3D data, xyz map, surface profile, etc.
In other embodiments, the first image acquired by the electronic device includes a 3D feature point map.
The depth information of each pixel point of the first image can be obtained through the depth map or the 3D feature point map, and the depth information of the pixel point, namely the depth value of the pixel point (the pixel value of the pixel point in the depth map), represents the distance between the corresponding position of the pixel point and the camera plane of the camera.
S402, acquiring exposure starting time and exposure time of a first image, namely acquiring a first image imaging time period.
The exposure starting time is the time when the first image starts to be acquired, namely the time when the shutter is opened when the frame of image is acquired; the exposure time is the time interval from opening to closing of the shutter. The exposure start time and exposure time of the first image determine the period of time during which the first image is imaged.
When each frame of image is collected, the camera marks shooting information of the frame of image, including exposure starting time, exposure time and the like of the image. The captured information of the image may be saved in a memory of the electronic device together with the image.
S403, acquiring motion information (including linear motion information and rotation information) of the electronic device in the first image forming period.
The motion information may include linear motion information and rotational information. Wherein the linear motion information is used for representing linear motion of the electronic device in the imaging plane, for example, the linear motion information can comprise acceleration; the rotation information is used to characterize the rotation of the electronic device, for example, the rotation information may include angular velocity.
In one implementation, the acceleration of the electronic device in the linear motion in the imaging plane can be obtained through an accelerometer of the IMU module, and the instantaneous speed at any moment can be calculated through integrating the acceleration; and acquiring the rotating angular speed of the electronic equipment through the gyroscope of the IMU module.
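As an illustration only, the following sketch shows one way the linear motion information and rotation information over the imaging period could be derived from IMU samples; the sample layout, the units, and the use of simple rectangular integration and mean values are assumptions, not requirements of this application.

```python
import numpy as np

def linear_velocity_at_exposure_start(accel_samples, sample_dt, v0=(0.0, 0.0)):
    """Integrate in-plane accelerometer samples (Nx2, m/s^2) taken since a reference
    time at which the velocity v0 (m/s) is known, giving the instantaneous velocity
    at the end of the window, e.g. at the exposure start time of the first image."""
    accel = np.asarray(accel_samples, dtype=float)
    return np.asarray(v0, dtype=float) + accel.sum(axis=0) * sample_dt

def rotational_angular_velocity(gyro_samples):
    """Representative angular velocity (rad/s) of the electronic device over the
    imaging period, taken here as the mean magnitude of the gyroscope samples."""
    return float(np.abs(np.asarray(gyro_samples, dtype=float)).mean())
```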
The motion information of the electronic device during the imaging period of the first image reflects how the electronic device moved while the first image was being imaged. If the electronic device moved by a large amount during the imaging period of the first image, the first image may be blurred.
S404, according to the linear motion information of the electronic equipment in the first image imaging time period, acquiring the linear displacement of the electronic equipment in the imaging plane in the first image imaging time period. And acquiring the rotation displacement of the electronic equipment in the first image imaging time period according to the rotation information of the electronic equipment in the first image imaging time period.
In one implementation, the linear displacement of the electronic device within the imaging plane over the first image imaging period may be calculated by equation 1.
δ_p = v_l*t + (1/2)*a*t²   equation 1

where δ_p represents the linear displacement of the electronic device in the imaging plane during the imaging period of the first image; v_l represents the linear motion speed at the exposure start time of the first image; t represents the imaging period of the first image; a is the acceleration of the linear motion of the electronic device in the imaging plane during the imaging period of the first image.
In one implementation, the rotational displacement of the electronic device over the first image-forming period may be calculated by equation 2.
δ_θ = ω*t   equation 2

where δ_θ represents the rotational displacement of the electronic device during the imaging period of the first image; ω is the rotational angular velocity of the electronic device during the imaging period of the first image; t represents the imaging period of the first image.
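The two displacements can then be computed directly from equations 1 and 2. In the following sketch v_l and a are treated as 2-D in-plane vectors and the magnitude of the resulting displacement vector is used; that convention is an assumption made for illustration.

```python
import numpy as np

def linear_displacement(v_l, a, t):
    """Equation 1: delta_p = v_l * t + 0.5 * a * t^2, evaluated in the imaging plane (metres)."""
    delta_p = np.asarray(v_l, dtype=float) * t + 0.5 * np.asarray(a, dtype=float) * t ** 2
    return float(np.linalg.norm(delta_p))  # magnitude of the in-plane displacement

def rotational_displacement(omega, t):
    """Equation 2: delta_theta = omega * t (radians)."""
    return omega * t
```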
S405, acquiring linear pixel displacement of the first image in the imaging plane according to the depth information of the first image, the parameters of the camera and the linear displacement of the electronic device in the imaging plane in the imaging time period of the first image. And acquiring the rotation pixel displacement of the first image according to the parameters of the camera and the rotation displacement of the electronic equipment in the first image imaging time period.
It will be appreciated that the camera moves with the electronic device. The linear motion of the electronic device brings about a pixel shift (referred to as a linear pixel shift in the present application) of the image captured by the camera in the imaging plane, and the rotation of the electronic device brings about a rotation of the image captured by the camera, which in turn brings about a pixel shift (referred to as a rotational pixel shift in the present application) of the image in a straight line.
In some embodiments, the average depth of the pixels in the first image may be obtained from the depth information of the first image. In one implementation, the average of the depth values of all pixel points in the first image may be calculated; that is, the mean of the depth values of the individual pixels of the first image. In another implementation, only the average depth value of the pixel points corresponding to the target object in the first image may be calculated; this reduces the amount of computation and improves efficiency. The target object may be, for example, an object that occupies a large proportion of the frame in the first image, or an object with distinct features in the first image. By way of example, referring to fig. 2, the target object may be the chair, or the plant on the table, in the image.
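The averaging described above could be implemented as in the following sketch; the depth map is assumed to be a 2-D array of depth values in metres, and the optional boolean mask selecting the target object is a hypothetical input used only for illustration.

```python
import numpy as np

def average_depth(depth_map, target_mask=None):
    """Average depth D_a over all pixels of the first image, or only over the
    pixels of the target object when a boolean mask is supplied."""
    depth = np.asarray(depth_map, dtype=float)
    if target_mask is not None:
        depth = depth[target_mask]   # restricting to the target object reduces computation
    valid = depth[depth > 0]         # ignore pixels without a depth measurement
    return float(valid.mean())
```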
Further, the linear pixel displacement of the first image in the imaging plane may be obtained according to the average depth of the pixels in the first image, the camera parameter (such as the focal length of the camera) and the linear displacement of the electronic device in the imaging plane in the imaging period of the first image. In one implementation, the linear pixel displacement of the first image within the imaging plane may be calculated according to equation 3.
δ_pixel-l = f*δ_p/D_a   equation 3

where δ_pixel-l represents the linear pixel displacement of the first image in the imaging plane; f is the focal length of the camera; D_a is the average depth of the pixels in the first image.
The rotational pixel displacement of the first image may be obtained based on a parameter of the camera (such as a focal length of the camera) and a rotational displacement of the electronic device during the imaging period of the first image. In one implementation, the rotational pixel displacement of the first image may be calculated according to equations 4 and 5.
d_θ = 2*arctan(width/(2*f))/width   equation 4

δ_pixel-r = δ_θ/d_θ   equation 5

where δ_pixel-r represents the rotational pixel displacement of the first image; δ_θ represents the rotational displacement of the electronic device during the imaging period of the first image; d_θ represents the rotation angle of the electronic device that corresponds to a shift of one pixel in the image; width represents the width of the first image (i.e., the resolution of its long side); f is the focal length of the camera.
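A sketch of equations 3 to 5 follows. It assumes that the focal length f and the image width are both expressed in pixels, and that d_theta, the rotation angle corresponding to one pixel, is approximated as the horizontal field of view divided by the image width; these unit conventions and the field-of-view approximation are assumptions, not mandated by this application.

```python
import math

def linear_pixel_displacement(delta_p_m, focal_px, avg_depth_m):
    """Equation 3: delta_pixel_l = f * delta_p / D_a (pinhole projection onto the imaging plane)."""
    return focal_px * delta_p_m / avg_depth_m

def rotational_pixel_displacement(delta_theta_rad, focal_px, width_px):
    """Equations 4 and 5: d_theta = 2*atan(width/(2*f))/width, delta_pixel_r = delta_theta/d_theta."""
    d_theta = 2.0 * math.atan(width_px / (2.0 * focal_px)) / width_px  # rotation angle per pixel (rad)
    return delta_theta_rad / d_theta
```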
S406, if the preset condition is met, determining that the first image is a blurred image.
In some embodiments, whether the first image is blurred may be determined from a linear pixel displacement of the first image within the imaging plane, a rotational pixel displacement of the first image, or depth information of the first image.
The preset conditions may include at least one of the following:

the linear pixel displacement (δ_pixel-l) of the first image in the imaging plane is greater than a preset first threshold;

the weighted sum of the linear pixel displacement (δ_pixel-l) of the first image in the imaging plane and the rotational pixel displacement (δ_pixel-r) of the first image is greater than a preset second threshold, where the weights applied to δ_pixel-l and δ_pixel-r can be adjusted according to the actual situation;

the average depth (D_a) of the pixels in the first image is less than a preset third threshold.
If the linear pixel displacement (δ_pixel-l) of the first image in the imaging plane is greater than the preset first threshold, the electronic device is translating quickly in the imaging plane, which can blur the image. If the weighted sum of the linear pixel displacement (δ_pixel-l) of the first image in the imaging plane and the rotational pixel displacement (δ_pixel-r) of the first image is greater than the preset second threshold, the electronic device is translating and rotating quickly, which can blur the image. If the average depth (D_a) of the pixels in the first image is less than the preset third threshold, the camera is too close to the object, and even slight camera shake can blur the image.
Illustratively, the preset second threshold is 10. For the image shown in fig. 2, the weighted sum of δ_pixel-l and δ_pixel-r is 0.3, which is less than the second threshold, so the image is determined not to be a blurred image. For the image shown in fig. 3, the weighted sum of δ_pixel-l and δ_pixel-r is 22.41, which is greater than the second threshold, so the image is determined to be a blurred image.
In some scenes, such as the spatial positioning scene shown in fig. 1, after the electronic device collects the image, if it is determined that the image is not a blurred image, the image is transmitted to the cloud server; if the image is determined to be a blurred image, the image is not transmitted to the cloud server, e.g., the image may be discarded. In this way, the clear images received by the cloud server can be used for spatial positioning and accurately calculating the position information.
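In a spatial-positioning scenario such as the one above, the per-frame gating could look like the following sketch; the frame object, the blur flag, and the upload and discard callbacks are hypothetical placeholders, not interfaces defined by this application.

```python
def handle_frame(frame, frame_is_blurred, upload_to_cloud, discard):
    """Forward only sharp frames to the cloud server for VPS position solving."""
    if frame_is_blurred:          # result of the blur check for this frame
        discard(frame)            # blurred frame is not transmitted, e.g. simply dropped
        return False
    upload_to_cloud(frame)        # sharp frame is used for spatial positioning
    return True
```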
The method for identifying a blurred image provided by the embodiments of the present application identifies blurred images by taking the motion information of the electronic device into account. If the motion of the electronic device is determined to cause a pixel displacement of the image that exceeds a preset threshold, the image is determined to be a blurred image. Blurred images can thus be identified accurately without complex analysis of the image content itself.
Taking the spatial positioning scenario shown in fig. 1 as an example, fig. 6 shows a specific example of a method for identifying a blurred image provided by the embodiment of the present application performed by each module in a mobile phone in the scenario.
In the embodiments of the present application, the electronic device is an electronic device capable of running an operating system and installing application programs. Alternatively, the operating system run by the electronic device may be the Android system, the iOS system, the HarmonyOS system, or the like. For example, the software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. In the embodiments of the present application, the software structure of the electronic device 100 is illustrated by taking an Android system with a layered architecture as an example.
The layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with one another through interfaces. In some embodiments, an Android system may include an application layer, an application framework layer, the Android runtime and libraries, a hardware abstraction layer (hardware abstraction layer, HAL), and a kernel layer. It should be noted that the embodiments of the present application are illustrated with the Android system; in other operating systems (such as the HarmonyOS system, the iOS system, etc.), the scheme of the present application can also be implemented, as long as the functions implemented by the respective functional modules are similar to those in the embodiments of the present application.
The application layer may include a series of application packages, among other things.
As shown in fig. 6, the application package may include AR applications, gallery, calendar, talk, map, navigation, wireless local area network (wireless local area networks, WLAN), bluetooth, music, video, short message, settings, etc. applications. Of course, the application layer may also include other application packages, such as a payment application, a shopping application, a banking application, or a chat application, and the application is not limited thereto.
In the embodiment of the application, an application with an image acquisition function, such as a camera application and an AR application, can be installed in the application program layer. The camera application has the functions of photographing and image capturing. Of course, when other applications (such as an AR application) need to use the image capturing function, the camera application may also be invoked to implement the image capturing function.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. For example, an activity manager, a window manager, a content provider, a view system, a resource manager, a notification manager, a Camera Service (Camera Service), a sensor Service, an identification module, etc., which the embodiments of the present application do not limit in any way.
Among other things, camera Service may interact with cameras HAL (Camera HAL) in a Hardware Abstraction Layer (HAL) during operation. The sensor service may interact with a sensor Hardware Abstraction Layer (HAL) in the process of operation. The identification module may obtain parameters and data of the camera (e.g., acquired images) from the camera service, etc., and may obtain parameters and data of the sensor (e.g., accelerometer, gyroscope, etc.) from the sensor service (e.g., acceleration, rotation angle, etc.).
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
OpenGL ES is used to implement three-dimensional graphics drawing, image rendering, compositing, and layer processing, among others.
SGL is the drawing engine for 2D drawing.
The Android runtime (Android Runtime) includes a core library and virtual machines. The Android runtime is responsible for scheduling and management of the Android system. The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The HAL layer is used to encapsulate the Linux kernel drivers, provide interfaces upward, and hide the implementation details of the underlying hardware.
The HAL layer may include Wi-Fi HAL, display HAL, camera HAL (Camera HAL), sensor HAL, etc.
The Camera HAL is the core software framework of the camera and is responsible for interacting with the hardware devices (such as the camera) that implement the shooting function in the electronic device. The sensor HAL is responsible for interacting with the sensors of the hardware layer. On the one hand, the HAL layer conceals the implementation details of the related hardware devices; on the other hand, it can provide the Android system with interfaces for invoking the related hardware devices.
The kernel layer is a layer between hardware and software. The kernel layer may contain display drivers, wi-Fi drivers, camera drivers, sensor drivers, etc.
The hardware layer includes a display, wi-Fi, camera, accelerometer, gyroscope, etc.
For example, the AR application at the application layer initiates spatial positioning, and the AR application issues a command to the camera through the camera service, instructing the camera to start capturing images. The camera collects surrounding images at a certain frame rate, and the images are transmitted to the camera service through the camera driver and the Camera HAL. The camera service may save the images captured by the camera in the memory of the mobile phone. The camera service may also obtain camera parameters (such as the focal length of the camera) from the Camera HAL. On the other hand, the sensors (including the accelerometer, gyroscope, etc.) of the mobile phone collect the motion information of the mobile phone in real time and transmit it to the sensor service through the sensor driver and the sensor HAL. The sensor service may save the motion information collected by the sensors in the memory of the mobile phone.
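Purely as an illustrative sketch of this data flow, and not as part of the embodiment, the per-frame data that the identification module consumes could be modeled as follows; the Python structures, field names, and units are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ImuSample:
    timestamp: float                                # seconds
    acceleration: Tuple[float, float, float]        # m/s^2, from the accelerometer
    angular_velocity: Tuple[float, float, float]    # rad/s, from the gyroscope

@dataclass
class Frame:
    exposure_start: float    # exposure starting time, seconds
    exposure_time: float     # exposure duration, seconds
    focal_length_px: float   # camera focal length expressed in pixels
    width_px: int            # image width in pixels
    average_depth_m: float   # average pixel depth, e.g. from a depth map or 3D feature points

def samples_in_imaging_period(frame: Frame, samples: List[ImuSample]) -> List[ImuSample]:
    """Return the motion samples collected during the imaging period of the frame."""
    start = frame.exposure_start
    end = frame.exposure_start + frame.exposure_time
    return [s for s in samples if start <= s.timestamp <= end]
```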
The identification module can acquire the images collected by the camera and the parameters of the camera from the camera service, and can acquire the motion information of the mobile phone from the sensor service. In this way, the identification module can obtain the depth information, the exposure starting time, and the exposure time from an image; obtain the linear displacement of the mobile phone in the imaging plane and the rotational displacement of the mobile phone during the imaging period from the motion information of the mobile phone; and, combining the camera parameters, obtain the linear pixel displacement and the rotational pixel displacement of the frame of image in the imaging plane. The identification module can then judge whether a frame of image is a blurred image according to its linear pixel displacement, rotational pixel displacement, depth information, and the like. In one implementation, if the identification module determines that a frame of image is a blurred image, the frame is not sent to the AR application, and thus the AR application does not send the frame to the cloud server.
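Building on the sketch above, the blur decision described in this paragraph could look like the following; the displacement formulas mirror the kinematic and pinhole-projection relations implied by the claims, while the threshold values, the weights, and the per-pixel rotation angle derived from the field of view are illustrative assumptions, not values taken from the embodiment.

```python
import math

def linear_pixel_displacement(v_l: float, a: float, t: float,
                              focal_length_px: float, average_depth_m: float) -> float:
    """Project the linear displacement of the device onto the image, in pixels.

    v_l: linear speed at exposure start (m/s); a: in-plane acceleration (m/s^2);
    t: imaging period (s); average_depth_m: average scene depth (m).
    """
    delta_p = v_l * t + 0.5 * a * t * t              # linear displacement in the imaging plane
    return focal_length_px * delta_p / average_depth_m

def rotational_pixel_displacement(omega: float, t: float,
                                  width_px: int, focal_length_px: float) -> float:
    """Convert the rotation accumulated during exposure into a pixel displacement."""
    delta_theta = omega * t                          # rotational displacement (rad)
    fov = 2.0 * math.atan(width_px / (2.0 * focal_length_px))
    d_theta = fov / width_px                         # rotation angle corresponding to one pixel (rad)
    return delta_theta / d_theta

def is_blurred(pixel_l: float, pixel_r: float, average_depth_m: float,
               t1: float = 3.0, t2: float = 5.0, t3: float = 0.3,
               w_l: float = 1.0, w_r: float = 1.0) -> bool:
    """Treat a frame as blurred if any of the preset conditions holds."""
    if pixel_l > t1:                                 # linear pixel displacement too large
        return True
    if w_l * pixel_l + w_r * pixel_r > t2:           # weighted sum too large
        return True
    if average_depth_m < t3:                         # scene too close to the camera
        return True
    return False

# Example: a frame exposed for 20 ms while the phone translates and rotates slightly.
pixel_l = linear_pixel_displacement(v_l=0.05, a=0.2, t=0.02,
                                    focal_length_px=1500, average_depth_m=1.2)
pixel_r = rotational_pixel_displacement(omega=0.1, t=0.02,
                                        width_px=1920, focal_length_px=1500)
print(is_blurred(pixel_l, pixel_r, average_depth_m=1.2))
```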
It should be noted that, in other implementations, the functions of the identification module may be performed by other modules. For example, the above-described functions of the identification module may be performed by the camera service; as another example, the above-described functions of the identification module may be performed by the AR application. The embodiment of the present application is not limited thereto.
It may be understood that, in order to implement the above-mentioned functions, the electronic device provided in the embodiment of the present application includes corresponding hardware structures and/or software modules for executing each function. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the application can divide the functional modules of the electronic device according to the method example, for example, each functional module can be divided corresponding to each function, or two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
In one example, please refer to fig. 7, which shows a possible structural schematic diagram of the electronic device involved in the above embodiment. The electronic device 700 includes: a processing unit 710, a storage unit 720, an image acquisition unit 730, and a data acquisition unit 740.
The processing unit 710 is configured to control and manage the operations of the electronic device 700. The storage unit 720 is used for storing program codes and data of the electronic device 700, and the processing unit 710 calls the program codes stored in the storage unit 720 to perform the steps in the above method embodiments. For example, the blurred image is recognized. The image acquisition unit 730 is used for image acquisition. The data acquisition unit 740 is used for acquiring motion information of the electronic device 700.
Of course, the unit modules in the electronic device 700 include, but are not limited to, the processing unit 710, the storage unit 720, the image capturing unit 730, and the data capturing unit 740. For example, the electronic device 700 may further include a display unit, a communication unit, a power supply unit, and the like. The display unit is used to display a user interface of the electronic device 700, for example, to display an image. The communication unit is used for the electronic device 700 to communicate with other electronic devices; for example, a clear image is transmitted to a cloud server. The power supply unit is used to power the electronic device 700.
The processing unit 710 may be a processor or controller, such as a central processing unit (central processing unit, CPU), a graphics processor (graphics processing unit, GPU), a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, transistor logic device, hardware components, or any combination thereof. The storage unit 720 may be a memory. The image capturing unit 730 may be a camera or the like. The data collection unit 740 may be a sensor or the like. The display unit may be a display screen or the like.
For example, the processing unit 710 may be a processor (such as the processor 110 shown in fig. 4), the storage unit 720 may be a memory (such as the internal memory 121 shown in fig. 4), the image capturing unit 730 may be a camera (such as the camera 193 shown in fig. 4), the data capturing unit 740 may be a sensor (such as the IMU module shown in fig. 4), the communication unit may include a mobile communication unit (such as the mobile communication module 150 shown in fig. 4) and a wireless communication unit (such as the wireless communication module 160 shown in fig. 4), and the display unit may be a display screen (such as the display screen 194 shown in fig. 4). The electronic device 700 provided by the embodiment of the present application may be the electronic device 100 shown in fig. 4. The processor, the memory, the camera, the sensor, the display screen and the like can be connected together, for example, through a bus. The processor invokes the memory-stored program code to perform the steps in the method embodiments above.
Embodiments of the present application also provide a system-on-a-chip (SoC) including at least one processor 801 and at least one interface circuit 802, as shown in fig. 8. The processor 801 and the interface circuit 802 may be interconnected by wires. For example, interface circuit 802 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit 802 may be used to send signals to other devices (e.g., the processor 801 or a touch screen of an electronic apparatus). The interface circuit 802 may, for example, read instructions stored in a memory and send the instructions to the processor 801. The instructions, when executed by the processor 801, may cause the electronic device to perform the various steps of the embodiments described above. Of course, the system-on-chip may also include other discrete devices, which are not particularly limited in accordance with embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium including computer instructions which, when executed on an electronic device as described above, cause the electronic device to perform the functions or steps of the method embodiments described above.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the functions or steps of the method embodiments described above.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor (processor) to perform all or part of the steps of the method described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method of identifying blurred images, characterized by being applied to an electronic device comprising a camera, the method comprising:
acquiring depth information of a first image, wherein the first image is an image acquired by the camera;
acquiring an imaging time period of the first image;
acquiring linear motion information and rotation information of the electronic equipment in the imaging time period;
acquiring linear pixel displacement of the first image in an imaging plane according to the linear motion information, the depth information and the parameters of the camera;
acquiring the rotation pixel displacement of the first image according to the rotation information and the parameters of the camera;
if the first image is determined to meet the preset condition, determining that the first image is a blurred image;
wherein the preset conditions include at least one of:
the linear pixel displacement of the first image in the imaging plane is greater than a first threshold;
a weighted sum of the linear pixel displacement of the first image in the imaging plane and the rotational pixel displacement of the first image is greater than a second threshold;
the average depth of the pixels in the first image is smaller than a third threshold value, and the average depth of the pixels in the first image is acquired according to the depth information of the first image.
2. The method of claim 1, wherein the imaging period of time for acquiring the first image comprises:
acquiring exposure starting time and exposure time of the first image;
and determining the imaging time period of the first image according to the exposure starting time and the exposure time.
3. The method according to claim 1 or 2, wherein the acquiring depth information of the first image comprises:
and acquiring the depth information according to the depth map or the 3D feature point map of the first image.
4. A method according to any one of claims 1-3, wherein the obtaining the linear pixel displacement of the first image in the imaging plane based on the linear motion information, the depth information, and the parameters of the camera comprises:
acquiring the linear displacement of the electronic equipment in an imaging plane in the imaging time period according to the linear motion information;
and acquiring the linear pixel displacement of the first image in an imaging plane according to the linear displacement, the depth information and the parameters of the camera.
5. The method of claim 4, wherein the electronic device comprises an accelerometer, the linear motion information comprises an acceleration of the linear motion of the electronic device,
the obtaining the linear motion information of the electronic device in the imaging time period includes:
and acquiring the acceleration of the linear motion of the electronic equipment in the imaging time period through the accelerometer.
6. The method of claim 5, wherein the obtaining the linear displacement of the electronic device in the imaging plane over the imaging period of time from the linear motion information comprises:
calculating the linear displacement of the electronic device within the imaging plane over the imaging period according to the following formula:

δ_p = v_l * t + (1/2) * a * t²

wherein δ_p represents the linear displacement of the electronic device within the imaging plane over the imaging period; v_l represents the linear movement speed at the exposure starting time of the first image; t represents the imaging period; and a represents the acceleration of the linear motion of the electronic device within the imaging plane over the imaging period.
7. The method of claim 5, wherein the obtaining the linear pixel displacement of the first image in the imaging plane based on the linear displacement, the depth information, and parameters of the camera comprises:
calculating the linear pixel displacement of the first image in the imaging plane according to the following formula:

δ_pixel-l = f * δ_p / D_a

wherein δ_pixel-l represents the linear pixel displacement of the first image in the imaging plane; δ_p represents the linear displacement of the electronic device within the imaging plane over the imaging period; f represents the focal length of the camera; and D_a represents the average depth of the pixels in the first image.
8. The method of claim 7, wherein the average depth of the pixels in the first image is an average value of depth values of all pixels in the first image or an average value of depth values of pixels corresponding to the target object in the first image.
9. A method according to any one of claims 1-3, wherein said obtaining rotational pixel displacement of the first image based on the rotational information and parameters of the camera comprises:
acquiring rotation displacement of the electronic equipment in the imaging time period according to the rotation information;
and acquiring the rotation pixel displacement of the first image according to the rotation displacement and the parameters of the camera.
10. The method of claim 9, wherein the electronic device comprises a gyroscope, the rotation information comprises an angular velocity at which the electronic device rotates,
the acquiring rotation information of the electronic device in the imaging time period includes:
and acquiring the angular speed of rotation of the electronic equipment in the imaging time period through the gyroscope.
11. The method of claim 10, wherein the acquiring rotational displacement of the electronic device within the imaging period based on the rotational information comprises:
calculating a rotational displacement of the electronic device over the imaging period according to the formula:
δ_θ = ω * t

wherein δ_θ represents the rotational displacement of the electronic device within the imaging period; ω represents the angular velocity of rotation of the electronic device within the imaging period; and t represents the imaging period.
12. The method of claim 10, wherein the acquiring the rotational pixel displacement of the first image based on the rotational displacement and the camera parameters comprises:
calculating the rotational pixel displacement of the first image according to the following formulas:

δ_pixel-r = δ_θ / d_θ

d_θ = 2 * arctan(width / (2 * f)) / width

wherein δ_pixel-r represents the rotational pixel displacement of the first image; δ_θ represents the rotational displacement of the electronic device within the imaging period; d_θ represents the rotation angle of the electronic device corresponding to a movement of the first image by one pixel; width represents the width of the first image; and f represents the focal length of the camera.
13. The method of any of claims 1-12, wherein after determining that the first image is a blurred image, the method further comprises: and stopping sending the first image to the second electronic equipment.
14. An electronic device, comprising: a processor and a memory; the memory has stored therein one or more computer programs, the one or more computer programs comprising instructions, which when executed by the electronic device, cause the electronic device to perform the method of any of claims 1-13.
15. A computer-readable storage medium comprising computer instructions; the computer instructions, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-13.
CN202211378141.8A 2022-11-04 2022-11-04 Method for identifying blurred image and electronic equipment Active CN116703742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211378141.8A CN116703742B (en) 2022-11-04 2022-11-04 Method for identifying blurred image and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211378141.8A CN116703742B (en) 2022-11-04 2022-11-04 Method for identifying blurred image and electronic equipment

Publications (2)

Publication Number Publication Date
CN116703742A true CN116703742A (en) 2023-09-05
CN116703742B CN116703742B (en) 2024-05-17

Family

ID=87839853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211378141.8A Active CN116703742B (en) 2022-11-04 2022-11-04 Method for identifying blurred image and electronic equipment

Country Status (1)

Country Link
CN (1) CN116703742B (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0258037A (en) * 1988-08-23 1990-02-27 Canon Inc Camera
CN1809175A (en) * 2005-01-17 2006-07-26 华为技术有限公司 Video quality evaluation method
CN1888913A (en) * 2006-07-27 2007-01-03 上海交通大学 Rotating speed measuring method based on rotary blurred image
US20130194486A1 (en) * 2012-01-31 2013-08-01 Microsoft Corporation Image blur detection
CN103856711A (en) * 2012-12-05 2014-06-11 联咏科技股份有限公司 Rolling shutter correcting method and image processing device
US20150084991A1 (en) * 2013-09-25 2015-03-26 Lucasfilm Entertainment Company, Ltd. Post-render motion blur
US20160330374A1 (en) * 2014-01-07 2016-11-10 Dacuda Ag Adaptive camera control for reducing motion blur during real-time image capture
CN104853082A (en) * 2014-11-25 2015-08-19 广东欧珀移动通信有限公司 Method and apparatus of shooting panorama picture
US20180182099A1 (en) * 2016-12-27 2018-06-28 Definiens Ag Identifying and Excluding Blurred Areas of Images of Stained Tissue To Improve Cancer Scoring
RU2647645C1 (en) * 2016-12-29 2018-03-16 Общество с ограниченной ответственностью "СИАМС" Method of eliminating seams when creating panoramic images from video stream of frames in real-time
CN111275625A (en) * 2018-12-04 2020-06-12 杭州海康机器人技术有限公司 Image deblurring method and device and electronic equipment
US20200213482A1 (en) * 2018-12-28 2020-07-02 Canon Kabushiki Kaisha Information processing apparatus, imaging apparatus, and information processing method each of which issues a notification of blur of an object, and control method for the imaging apparatus
CN111385471A (en) * 2018-12-28 2020-07-07 佳能株式会社 Information processing apparatus and method, image pickup apparatus and control method thereof, and storage medium
CN112307985A (en) * 2020-11-02 2021-02-02 安徽鸿程光电有限公司 Image identification method, system, electronic equipment and storage medium
CN114723603A (en) * 2021-01-05 2022-07-08 北京小米移动软件有限公司 Image processing method, image processing apparatus, and storage medium
CN114047358A (en) * 2021-11-15 2022-02-15 中国计量科学研究院 Monocular vision-based line angle vibration calibration method
CN114549346A (en) * 2022-01-27 2022-05-27 阿丘机器人科技(苏州)有限公司 Blurred image recognition method, device, equipment and storage medium
CN114419073A (en) * 2022-03-09 2022-04-29 荣耀终端有限公司 Motion blur generation method and device and terminal equipment

Also Published As

Publication number Publication date
CN116703742B (en) 2024-05-17

Similar Documents

Publication Publication Date Title
US11831977B2 (en) Photographing and processing method and electronic device
US11816775B2 (en) Image rendering method and apparatus, and electronic device
WO2020073959A1 (en) Image capturing method, and electronic device
US20230276014A1 (en) Photographing method and electronic device
CN113411528B (en) Video frame rate control method, terminal and storage medium
CN114119758B (en) Method for acquiring vehicle pose, electronic device and computer-readable storage medium
CN112351156B (en) Lens switching method and device
US20230018004A1 (en) Photographing method and apparatus
TWI818211B (en) Eye positioning device and method and 3D display device and method
US20220262035A1 (en) Method, apparatus, and system for determining pose
CN113660408B (en) Anti-shake method and device for video shooting
US20240153209A1 (en) Object Reconstruction Method and Related Device
CN110248037B (en) Identity document scanning method and device
US20210409588A1 (en) Method for Shooting Long-Exposure Image and Electronic Device
US20240056683A1 (en) Focusing Method and Electronic Device
US20230005277A1 (en) Pose determining method and related device
US20240155309A1 (en) Device Searching Method and Electronic Device
CN110138999B (en) Certificate scanning method and device for mobile terminal
CN116723257A (en) Image display method and electronic equipment
US20240013432A1 (en) Image processing method and related device
CN115150542B (en) Video anti-shake method and related equipment
CN116703742B (en) Method for identifying blurred image and electronic equipment
CN113790732B (en) Method and device for generating position information
CN115686182B (en) Processing method of augmented reality video and electronic equipment
CN113573045B (en) Stray light detection method and stray light detection device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant