WO2020015538A1 - 图像数据处理方法及移动终端 (Image data processing method and mobile terminal) - Google Patents

Image data processing method and mobile terminal

Info

Publication number: WO2020015538A1
Authority: WIPO (PCT)
Prior art keywords: image, component pixels, face, pixels, rgb
Application number: PCT/CN2019/094696
Other languages: English (en), French (fr)
Inventors: 肖强 (XIAO Qiang), 林华鑫 (LIN Huaxin)
Original assignee: 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering

Definitions

  • the present disclosure relates to the field of communication technologies, and in particular, to an image data processing method and a mobile terminal.
  • Face recognition is a non-contact recognition method: it only needs to obtain a person's face image and can perform recognition by comparing and analyzing facial features. As a convenient and accurate recognition technology, it is widely used in mobile terminals. With the maturity of red, green, blue and infrared radiation (RGB-IR) sensor technology, RGB-IR sensors are increasingly used in mobile terminals for image processing. However, in low-light environments, such as at night or indoors, the quality of the image collected by the RGB-IR sensor is poor, which lowers the matching success rate of face recognition.
  • an embodiment of the present disclosure provides an image data processing method applied to a mobile terminal, where the mobile terminal includes a red-green-blue-infrared (RGB-IR) image processing device; the method includes: acquiring an ambient light intensity; starting an infrared light source when the ambient light intensity is lower than a preset light intensity; acquiring a face IR image collected by the RGB-IR image processing device; and comparing the face IR image with a pre-stored face image, where face recognition succeeds when they match and fails when they do not.
  • an embodiment of the present disclosure further provides a mobile terminal, where the mobile terminal includes an RGB-IR image processing device; the mobile terminal further includes:
  • a first acquisition module configured to acquire the ambient light intensity
  • a startup module configured to start an infrared light source when the ambient light intensity is lower than a preset light intensity
  • a second acquisition module configured to acquire a face IR image collected by the RGB-IR image processing device
  • a comparison module configured to compare the face IR image with a pre-stored face image, where face recognition succeeds when the face IR image matches the pre-stored face image and fails when it does not.
  • an embodiment of the present disclosure further provides a mobile terminal including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image data processing method described in the first aspect.
  • an embodiment of the present disclosure further provides a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the image data processing method described in the first aspect.
  • FIG. 1 is a flowchart of an image data processing method according to an embodiment of the present disclosure
  • FIG. 2 is another flowchart of an image data processing method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of an arrangement manner of a photosensitive pixel array on an RGB-IR image processing device in a mobile terminal to which the method in FIG. 2 is applied;
  • FIG. 4 is another schematic diagram of an arrangement of photosensitive pixel arrays on an RGB-IR image processing device in a mobile terminal to which the method in FIG. 2 is applied;
  • FIG. 5 is a structural diagram of a mobile terminal provided by an embodiment of the present disclosure.
  • FIG. 6 is another structural diagram of a mobile terminal according to an embodiment of the present disclosure.
  • FIG. 7 is another structural diagram of a mobile terminal according to an embodiment of the present disclosure.
  • FIG. 8 is still another structural diagram of a mobile terminal according to an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of an image data processing method according to an embodiment of the present disclosure.
  • the image data processing method is applied to a mobile terminal including an RGB-IR image processing device. As shown in FIG. 1, the image data processing method includes the following steps:
  • Step 101 Obtain an ambient light intensity.
  • the ambient light intensity may be an indoor ambient light intensity, such as that of indoor lighting, or an outdoor ambient light intensity, such as that of natural light.
  • Step 102 when the ambient light intensity is lower than a preset light intensity, start an infrared light source.
  • the RGB-IR image processing device includes an RGB-IR sensor and a dual-pass filter.
  • the RGB-IR sensor includes a photosensitive pixel array, and the photosensitive pixel array includes infrared (IR) component pixels, red (R) component pixels, green (G) component pixels, and blue (B) component pixels. These four kinds of component pixels are distributed in the array in different arrangements.
  • the dual-pass filter may be disposed above the photosensitive pixel array, and the dual-pass filter includes an array of filter units, and the filter unit includes a color filter region and an infrared filter region.
  • the color filter area is used to obtain the color information of the pixels of a composite image; it blocks infrared rays so as to generate a color image, such as a red-green-blue (RGB) image, in an environment with sufficient light intensity.
  • the infrared filter region is used to pass specific infrared rays.
  • the infrared filter region is an IR filter region and is used to pass infrared rays with a wavelength of about 850 nm or about 940 nm to generate an IR image.
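As a rough illustration of the dual-pass behavior described above, the following sketch models which wavelengths the filter would pass; the band edges are illustrative assumptions, not values from this disclosure:

```python
def passes_dual_filter(wavelength_nm: float) -> bool:
    """Rough model of a dual-pass filter: the color filter region passes
    visible light, while the infrared filter region passes only a narrow
    band around 850 nm or 940 nm.  Band widths are illustrative guesses."""
    visible = 380.0 <= wavelength_nm <= 700.0
    ir_850 = abs(wavelength_nm - 850.0) <= 25.0
    ir_940 = abs(wavelength_nm - 940.0) <= 25.0
    return visible or ir_850 or ir_940

print(passes_dual_filter(550.0))  # visible green: True
print(passes_dual_filter(800.0))  # between the bands: False
print(passes_dual_filter(940.0))  # IR band: True
```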
  • the arrangement of the dual-pass filter also enables the mobile terminal to obtain the RGB image and the IR image simultaneously through a single RGB-IR sensor, saving space in the mobile terminal.
  • the mobile terminal is provided with an infrared light source
  • the infrared light source may be an infrared light-emitting diode (LED) light, an infrared laser light, or the like.
  • the infrared light source may be disposed close to the RGB-IR image processing device, and both the infrared light source and the RGB-IR image processing device are connected to a processor of the mobile terminal.
  • when the RGB-IR image processing device is started, whether the ambient light intensity is lower than a preset light intensity may be detected through a light sensor provided on the mobile terminal; if it is, the infrared light source is activated. For example, when the user is in a low-light situation such as at night or in rainy weather and the RGB-IR image processing device is activated for face recognition, the mobile terminal controls the infrared light source to be activated.
  • the embodiments of the present disclosure may further provide that, if it is detected that the current processing mode of the RGB-IR image processing device is a photographing mode, the infrared light source is not activated. That is, the infrared light source is activated only when the current processing mode of the RGB-IR image processing device is a face recognition mode and the current ambient light intensity is lower than the preset light intensity.
  • Step 103 Obtain a face IR image collected by the RGB-IR image processing device.
  • when the current processing mode of the RGB-IR image processing device is a face recognition mode and the infrared light source is turned on, the infrared light is reflected by the human face and received by the RGB-IR sensor, and the received IR component pixels are used to synthesize a face IR image.
  • the RGB-IR sensor includes an array of photosensitive pixels, which enables the RGB-IR image processing device to simultaneously obtain the R component pixels, G component pixels, and B component pixels used to synthesize a color image, and the IR component pixels used to synthesize the infrared image.
  • the mobile terminal may choose not to perform synthesis processing on the obtained R component pixels, G component pixels, and B component pixels, and to perform synthesis processing only on the obtained IR component pixels, so as to synthesize a face IR image and complete the face recognition operation. Alternatively, the mobile terminal may acquire the IR component pixels in a targeted manner according to the arrangement of the photosensitive pixel array.
  • for example, when the pixels in odd rows are arranged in the order of R component pixels and G component pixels, and the pixels in even rows are arranged in the order of IR component pixels and B component pixels, the mobile terminal may choose to acquire only the pixels of the even rows, leave the acquired B component pixels unprocessed, and perform synthesis processing only on the acquired IR component pixels to synthesize the face IR image and complete the face recognition operation.
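The even-row readout described above can be sketched as follows; this is a minimal numpy illustration assuming each even row starts with an IR pixel, with toy values standing in for real sensor data:

```python
import numpy as np

def extract_ir_even_rows(raw: np.ndarray) -> np.ndarray:
    """Extract the IR component pixels from a first-arrangement RGB-IR mosaic.

    Assumes odd rows (1-indexed) alternate R, G and even rows alternate
    IR, B, with each even row starting on an IR pixel.  The B pixels in
    the even rows and all odd-row pixels are simply skipped.
    """
    even_rows = raw[1::2, :]   # 1-indexed even rows = 0-indexed rows 1, 3, ...
    ir = even_rows[:, 0::2]    # IR sits on every other column of those rows
    return ir

# Toy 4x4 mosaic: values encode the channel (R=10, G=20, B=30, IR=99).
mosaic = np.array([
    [10, 20, 10, 20],
    [99, 30, 99, 30],
    [10, 20, 10, 20],
    [99, 30, 99, 30],
])
ir_image = extract_ir_even_rows(mosaic)
print(ir_image.shape)  # (2, 2): quarter-resolution IR plane
```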
  • Step 104 Compare the face IR image with a pre-stored face image. When the face IR image matches the pre-stored face image, face recognition succeeds; when the face IR image does not match the pre-stored face image, face recognition fails.
  • the pre-stored face image is a face image of one or more users stored in the mobile terminal in advance. When a face IR image collected by the RGB-IR image processing device is obtained, a preset algorithm is used to compare the acquired face IR image with the pre-stored face image. If the face IR image matches the pre-stored face image, it is determined that face recognition succeeds, and operations of the mobile terminal that rely on the face recognition function, such as screen unlocking and payment, can be completed; if the face IR image does not match the pre-stored face image, face recognition fails and such operations cannot be completed, which ensures the security of the mobile terminal.
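The disclosure leaves the matching algorithm unspecified ("a preset algorithm"). As one hypothetical illustration only, faces could be reduced to feature vectors and matched by cosine similarity against a threshold; the embedding step and threshold value are assumptions, not part of the source:

```python
import numpy as np

def match_face(ir_embedding, stored_embeddings, threshold=0.8):
    """Compare a face IR image's feature embedding against pre-stored ones.

    Hypothetical sketch: each face is assumed to be reduced to a feature
    vector by some upstream model; matching is cosine similarity against
    an illustrative threshold.
    """
    probe = ir_embedding / np.linalg.norm(ir_embedding)
    for stored in stored_embeddings:
        ref = stored / np.linalg.norm(stored)
        if float(probe @ ref) >= threshold:
            return True    # face recognition succeeds
    return False           # no stored face matched: recognition fails

enrolled = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(match_face(np.array([0.9, 0.1, 0.0]), enrolled))  # close to the first face
print(match_face(np.array([0.0, 0.0, 1.0]), enrolled))  # matches nothing
```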
  • the image data processing method may further include:
  • the acquired R component pixels, G component pixels, and B component pixels are combined into an RGB image.
  • when the current ambient light intensity is greater than the preset light intensity, the ambient light is reflected by the photographed object and received by the RGB-IR sensor; two adjacent rows of pixel data in the photosensitive pixel array of the RGB-IR sensor are read sequentially, and the R component pixels, G component pixels, and B component pixels obtained in this way are used to synthesize an RGB image.
  • the RGB image can be displayed on a display interface of a mobile terminal, stored in a stored file of the mobile terminal according to a user operation, or cleared.
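The row-pair readout for synthesizing an RGB image can be sketched as follows, assuming the first arrangement (odd rows R, G; even rows IR, B) and simple quarter-resolution binning rather than full demosaicing, which the source does not detail:

```python
import numpy as np

def synthesize_rgb(raw: np.ndarray) -> np.ndarray:
    """Combine the R, G and B component pixels of an RGB-IR mosaic into an
    RGB image, reading two adjacent rows at a time.

    Assumes the first arrangement (odd rows R, G; even rows IR, B): each
    2x2 cell contributes its R, G and B samples to one output pixel, and
    the IR sample is ignored.
    """
    r = raw[0::2, 0::2]   # R: odd rows, odd columns (1-indexed)
    g = raw[0::2, 1::2]   # G: odd rows, even columns
    b = raw[1::2, 1::2]   # B: even rows, even columns
    return np.stack([r, g, b], axis=-1)

# Toy 4x4 mosaic: values encode the channel (R=10, G=20, B=30, IR=99).
mosaic = np.array([
    [10, 20, 10, 20],
    [99, 30, 99, 30],
    [10, 20, 10, 20],
    [99, 30, 99, 30],
])
rgb = synthesize_rgb(mosaic)
print(rgb.shape)  # (2, 2, 3)
```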
  • when the current ambient light intensity is greater than the preset light intensity and the current processing mode of the RGB-IR image processing device is a face recognition mode, the ambient light is reflected by the human face and received by the RGB-IR sensor to obtain R component pixels, G component pixels, and B component pixels, which are used to synthesize a face RGB image; the face RGB image is then compared with a pre-stored face image to complete face recognition.
  • when the ambient light intensity is greater than the preset light intensity, there is no need to activate the infrared light source, and face recognition is completed by obtaining an RGB image of the human face; when the ambient light intensity is lower than the preset light intensity, the infrared light source is activated and face recognition is completed by obtaining an IR image of the face. In this way, the face recognition function of the mobile terminal can complete recognition regardless of the ambient light intensity, and its normal use is guaranteed.
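The overall control flow, switching between the RGB and IR paths, can be summarized in a small sketch; the threshold value, units, and mode names are illustrative assumptions, since the source specifies none of them:

```python
PRESET_LIGHT_INTENSITY = 50.0  # illustrative threshold; value and units not given in the source

def choose_recognition_path(ambient_light: float, mode: str) -> str:
    """Decide how the terminal captures the face image.

    Mirrors the described control flow: the infrared light source is only
    started in face-recognition mode under weak ambient light; otherwise
    the ordinary RGB path is used.
    """
    if mode != "face_recognition":
        return "rgb"                   # e.g. photographing mode: never start the IR source
    if ambient_light < PRESET_LIGHT_INTENSITY:
        return "ir"                    # start the infrared light source, synthesize a face IR image
    return "rgb"                       # enough light: synthesize a face RGB image

print(choose_recognition_path(10.0, "face_recognition"))   # ir
print(choose_recognition_path(200.0, "face_recognition"))  # rgb
```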
  • when the ambient light intensity is lower than the preset light intensity, an infrared light source is activated to obtain a face IR image collected by the RGB-IR image processing device, and the face IR image is compared with a pre-stored face image to complete face recognition.
  • the mobile terminal obtains the IR image of the face by activating the infrared light source in a weak-light environment to complete face recognition; this compensates for the low matching success rate of visible-light face recognition in weak light, ensures that the face recognition function of the mobile terminal is not affected by the ambient light intensity, and guarantees its normal use.
  • FIG. 2 is another flowchart of an image data processing method according to an embodiment of the present disclosure.
  • the image data processing method is applied to a mobile terminal including an RGB-IR image processing device. As shown in FIG. 2, the image data processing method includes the following steps:
  • Step 201 Obtain an ambient light intensity.
  • This step may be implemented with reference to step 101 in the embodiment shown in FIG. 1. To avoid repetition, details are not described again here.
  • Step 202 When the intensity of the ambient light is lower than a preset light intensity, start an infrared light source.
  • This step may be implemented with reference to step 102 in the embodiment shown in FIG. 1. To avoid repetition, details are not described again here.
  • Step 203 When it is determined that the arrangement mode of the pixel array is the first arrangement mode, use the first data processing method to obtain the IR component pixels collected by the RGB-IR image processing device, and synthesize the IR image of the face.
  • the first arrangement is that the pixels in odd rows include R component pixels and G component pixels, and the pixels in even rows include IR component pixels and B component pixels.
  • the arrangement of the photosensitive pixel array may vary; in one example, the pixels in odd rows are cyclically arranged in the order of R component pixels and G component pixels, and the pixels in even rows are cyclically arranged in the order of IR component pixels and B component pixels.
  • the step 203 may include:
  • the even-numbered rows of pixel data are sequentially read, and the IR component pixels in the even-numbered rows of pixel data are obtained;
  • the acquired IR component pixels are combined into a human face IR image.
  • since the IR component pixels are set only in the even rows, the pixel data of the even rows can be read in order, the IR component pixels extracted from them, and the extracted IR component pixels synthesized into a face IR image.
  • as for the B component pixels of the even rows, they may be cleared or buffered in the mobile terminal. In this way, there is no need to acquire and process the pixel data of the odd rows, which saves data storage space on the mobile terminal and also improves its image data processing capability.
  • alternatively, step 203 may be: when determining that the arrangement mode of the pixel array is the first arrangement mode, sequentially reading an odd row of pixel data and the even row of pixel data adjacent to it, extracting the IR component pixels, and synthesizing the extracted IR component pixels into a face IR image.
  • the read R component pixels, G component pixels, and B component pixels are cleared, or are buffered in the mobile terminal, and no synthesis processing is performed.
  • Step 204 When it is determined that the arrangement mode of the pixel array is the second arrangement mode, use the second data processing method to obtain the IR component pixels collected by the RGB-IR image processing device, and synthesize the IR image of the face.
  • the second arrangement is: the pixels in odd rows include R component pixels, G component pixels, and IR component pixels, and the pixels in even rows include IR component pixels, B component pixels, and R component pixels; these component pixels may be arranged in different ways.
  • the arrangement of the photosensitive pixel array may be that the pixels in odd rows are cyclically arranged in the order of R component pixels, G component pixels, IR component pixels, and B component pixels, and the pixels in even rows are cyclically arranged in the order of IR component pixels, B component pixels, R component pixels, and G component pixels.
  • alternatively, the arrangement of the photosensitive pixel array is: the pixels in odd rows are cyclically arranged in the order of R component pixels, G component pixels, IR component pixels, and G component pixels, and the pixels in even rows are cyclically arranged in the order of IR component pixels, B component pixels, R component pixels, and B component pixels.
  • the step 204 may include:
  • an odd row of pixel data and the even row of pixel data adjacent to it are read sequentially, and the IR component pixels in the odd row of pixel data and the even row of pixel data are obtained;
  • the acquired IR component pixels are combined into a human face IR image.
  • since the IR component pixels are included in both the odd and even rows, the two rows of data can be read sequentially from top to bottom, that is, an odd row of pixel data and the adjacent even row of pixel data; the IR component pixels in them are extracted, and the acquired IR component pixels are combined into a face IR image.
  • the read R component pixels, G component pixels and B component pixels are cleared, or they are buffered in the mobile terminal without synthesis processing.
  • alternatively, step 204 may include: sequentially reading the pixel data of the odd columns, obtaining the IR component pixels in the pixel data of the odd columns, and combining the acquired IR component pixels into a face IR image.
  • the R component pixels of the obtained odd-numbered columns may be cleared or buffered in the mobile terminal. In this way, there is no need to acquire and process the pixel data of the even-numbered columns, which saves the data storage space of the mobile terminal and improves the image data processing capability of the mobile terminal.
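The odd-column readout for the second arrangement can be sketched as follows, assuming the variant described above in which odd rows cycle R, G, IR, G and even rows cycle IR, B, R, B, so that the odd columns carry only R and IR samples:

```python
import numpy as np

def extract_ir_second_arrangement(raw: np.ndarray) -> np.ndarray:
    """Extract the IR component pixels from a second-arrangement RGB-IR mosaic.

    Assumes odd rows (1-indexed) cycle R, G, IR, G and even rows cycle
    IR, B, R, B.  Only the IR samples in the odd columns are kept; the
    co-located R samples are discarded.
    """
    ir_odd_rows = raw[0::2, 2::4]   # IR in odd rows: every 4th column, offset 2 (0-indexed)
    ir_even_rows = raw[1::2, 0::4]  # IR in even rows: every 4th column, offset 0
    # Interleave the two half-planes back into row order.
    rows = []
    for i in range(raw.shape[0] // 2):
        rows.append(ir_odd_rows[i])
        rows.append(ir_even_rows[i])
    return np.array(rows)

# Toy 4x4 mosaic: values encode the channel (R=10, G=20, B=30, IR=99).
mosaic = np.array([
    [10, 20, 99, 20],   # R  G  IR G
    [99, 30, 10, 30],   # IR B  R  B
    [10, 20, 99, 20],
    [99, 30, 10, 30],
])
print(extract_ir_second_arrangement(mosaic))
```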
  • Step 205 Compare the face IR image with a pre-stored face image. When the face IR image matches the pre-stored face image, face recognition succeeds; when the face IR image does not match the pre-stored face image, face recognition fails.
  • This step may be implemented with reference to step 104 in the embodiment shown in FIG. 1. To avoid repetition, details are not described again here.
  • image data processing method may further include:
  • the acquired R component pixels, G component pixels, and B component pixels are combined into an RGB image.
  • when the current ambient light intensity is greater than the preset light intensity, the ambient light is reflected by the photographed object and received by the RGB-IR sensor; two adjacent rows of pixel data in the photosensitive pixel array of the RGB-IR sensor are read sequentially, and the R component pixels, G component pixels, and B component pixels obtained in this way are used to synthesize an RGB image.
  • the RGB image can be displayed on a display interface of a mobile terminal, stored in a stored file of the mobile terminal according to a user operation, or cleared.
  • when the current ambient light intensity is greater than the preset light intensity and the current processing mode of the RGB-IR image processing device is a face recognition mode, the ambient light is reflected by the human face and received by the RGB-IR sensor to obtain R component pixels, G component pixels, and B component pixels, which are used to synthesize a face RGB image; the face RGB image is then compared with a pre-stored face image to complete face recognition.
  • when the ambient light intensity is greater than the preset light intensity, there is no need to activate the infrared light source, and face recognition is completed by obtaining an RGB image of the human face; when the ambient light intensity is lower than the preset light intensity, the infrared light source is activated and face recognition is completed by obtaining an IR image of the face.
  • the mobile terminal can use different data processing methods to obtain the IR component pixels collected by the RGB-IR image processing device according to different arrangements of the photosensitive pixel array, to synthesize a face IR image, and complete face recognition.
  • the mobile terminal obtains the IR image of the face by activating the infrared light source in a weak-light environment to complete face recognition; this ensures that the face recognition function of the mobile terminal is not affected by the ambient light intensity and improves the otherwise low success rate of face recognition on the mobile terminal.
  • An embodiment of the present disclosure further provides a mobile terminal, where the mobile terminal includes an RGB-IR image processing device.
  • a mobile terminal 500 provided by an embodiment of the present disclosure includes:
  • a first acquisition module 501 configured to acquire an ambient light intensity
  • a starting module 502 configured to start an infrared light source when the ambient light intensity is lower than a preset light intensity
  • a second acquisition module 503, configured to acquire a face IR image collected by the RGB-IR image processing device
  • a comparison module 504 configured to compare the face IR image with a pre-stored face image, where face recognition succeeds when the face IR image matches the pre-stored face image and fails when it does not.
  • the second obtaining module 503 is further configured to:
  • when it is determined that the arrangement mode of the pixel array is the first arrangement mode, the first data processing mode is adopted to obtain the IR component pixels collected by the RGB-IR image processing device, and the face IR image is synthesized;
  • when it is determined that the arrangement mode of the pixel array is the second arrangement mode, the second data processing mode is adopted to obtain the IR component pixels collected by the RGB-IR image processing device, and the face IR image is synthesized;
  • the first arrangement is: the pixels in odd rows include R component pixels and G component pixels, and the pixels in even rows include IR component pixels and B component pixels;
  • the second arrangement is: the pixels in odd rows include R component pixels, G component pixels, and IR component pixels, and the pixels in even rows include IR component pixels, B component pixels, and R component pixels.
  • the second obtaining module 503 includes:
  • a first acquisition sub-module 5031 configured to sequentially read even-numbered rows of pixel data and acquire IR component pixels in the even-numbered rows of pixel data when it is determined that the arrangement of the pixel array is the first arrangement;
  • the first synthesis sub-module 5032 is configured to synthesize the obtained IR component pixels into a human face IR image.
  • the second obtaining module 503 includes:
  • a second acquisition sub-module 5033 configured to sequentially read an odd row of pixel data and the even row of pixel data adjacent to it when determining that the arrangement of the pixel array is the second arrangement, and to obtain the IR component pixels in the odd row of pixel data and the even row of pixel data;
  • a second synthesis sub-module 5034 is configured to synthesize the obtained IR component pixels into a human face IR image.
  • the mobile terminal 500 further includes:
  • a reading module configured to sequentially read adjacent two rows of pixel data in the pixel array, and obtain R component pixels, G component pixels, and B component pixels therein;
  • a synthesizing module is configured to synthesize the acquired R component pixels, G component pixels, and B component pixels into an RGB image.
  • the mobile terminal provided by the embodiment of the present disclosure can implement the processes implemented by the mobile terminal in the method embodiments of FIG. 1 to FIG. 4. To avoid repetition, details are not described herein again.
  • the first acquisition module 501 acquires the ambient light intensity; when the ambient light intensity is lower than the preset light intensity, the startup module 502 activates the infrared light source, and the second acquisition module 503 acquires the face IR image collected by the RGB-IR image processing device; the comparison module 504 compares the face IR image with a pre-stored face image, and when they match, face recognition succeeds; when they do not match, face recognition fails.
  • the mobile terminal 500 can obtain the IR image of the face by activating the infrared light source in a weak-light environment to complete face recognition; this compensates for the low matching success rate of visible-light face recognition in weak light, ensures that the face recognition function of the mobile terminal 500 is not affected by the ambient light intensity, and guarantees its normal use.
  • FIG. 8 is another structural diagram of a mobile terminal that implements an embodiment of the present disclosure.
  • the mobile terminal can implement various processes implemented by the mobile terminal in the method embodiments of FIGS. 1 to 4 and achieve the same technical effect.
  • the mobile terminal 800 includes, but is not limited to, a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, a power supply 811, and other components.
  • those skilled in the art can understand that the mobile terminal may include more or fewer components than shown in the figure, or combine some components, or use a different component layout.
  • the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a car terminal, a wearable device, and a pedometer.
  • the mobile terminal 800 includes an RGB-IR image processing device (not shown).
  • the processor 810 is configured to: acquire the ambient light intensity; start an infrared light source when the ambient light intensity is lower than a preset light intensity; acquire a face IR image collected by the RGB-IR image processing device; and compare the face IR image with a pre-stored face image, where face recognition succeeds when they match and fails when they do not.
  • the processor 810 is further configured to:
  • when it is determined that the arrangement mode of the pixel array is the first arrangement mode, the first data processing mode is adopted to obtain the IR component pixels collected by the RGB-IR image processing device, and the face IR image is synthesized;
  • when it is determined that the arrangement mode of the pixel array is the second arrangement mode, the second data processing mode is adopted to obtain the IR component pixels collected by the RGB-IR image processing device, and the face IR image is synthesized.
  • the first arrangement is: the pixels in odd rows include R component pixels and G component pixels, and the pixels in even rows include IR component pixels and B component pixels;
  • the second arrangement is: the pixels in odd rows include R component pixels, G component pixels, and IR component pixels, and the pixels in even rows include IR component pixels, B component pixels, and R component pixels.
  • the processor 810 is further configured to:
  • the even-numbered rows of pixel data are sequentially read, and the IR component pixels in the even-numbered rows of pixel data are obtained;
  • the acquired IR component pixels are combined into a human face IR image.
  • the processor 810 is further configured to:
  • an odd row of pixel data and the even row of pixel data adjacent to it are read sequentially, and the IR component pixels in the odd row of pixel data and the even row of pixel data are obtained;
  • the acquired IR component pixels are combined into a human face IR image.
  • the processor 810 is further configured to:
  • the acquired R component pixels, G component pixels, and B component pixels are combined into an RGB image.
  • the mobile terminal provided in the embodiment of the present disclosure can implement the processes implemented by the mobile terminal in the embodiment of the image data processing method. To avoid repetition, details are not described herein again.
  • the mobile terminal 800 obtains the ambient light intensity; when the ambient light intensity is lower than the preset light intensity, the infrared light source is activated to obtain the face IR image collected by the RGB-IR image processing device, and the face IR image is compared with a pre-stored face image to complete face recognition.
  • the mobile terminal obtains the IR image of the face by activating the infrared light source in a weak-light environment to complete face recognition; this compensates for the low matching success rate of visible-light face recognition in weak light, ensures that the face recognition function of the mobile terminal is not affected by the ambient light intensity, and guarantees its normal use.
  • the radio frequency unit 801 may be used to receive and send signals during the process of sending and receiving information or during a call; specifically, it receives downlink data from the base station and delivers it to the processor 810 for processing, and sends uplink data to the base station.
  • the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
  • the mobile terminal 800 provides users with wireless broadband Internet access through the network module 802, such as helping users to send and receive email, browse web pages, and access streaming media.
  • the audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into audio signals and output them as sound. Also, the audio output unit 803 may also provide audio output (for example, a call signal receiving sound, a message receiving sound, etc.) related to a specific function performed by the mobile terminal 800.
  • the audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 804 is used to receive audio or video signals.
  • the input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042.
  • the graphics processor 8041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the processed image frames may be displayed on a display unit 806.
  • the image frames processed by the graphics processor 8041 may be stored in the memory 809 (or other computer-readable storage medium) or transmitted via the radio frequency unit 801 or the network module 802.
  • the microphone 8042 can receive sound, and can process such sound into audio data.
  • in the case of a telephone call mode, the processed audio data can be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 801 and output.
  • the mobile terminal 800 further includes at least one sensor 805, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 8061 according to the brightness of the ambient light.
  • the proximity sensor can turn off the display panel 8061 and/or the backlight when the mobile terminal 800 is moved to the ear.
  • as one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used to identify the attitude of the mobile terminal (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 805 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described again here.
  • the display unit 806 is configured to display information input by the user or information provided to the user.
  • the display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the user input unit 807 may be configured to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the mobile terminal 800.
  • the user input unit 807 includes a touch panel 8071 and other input devices 8072.
  • the touch panel 8071, also known as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 8071 with a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 8071 may include a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position and the signal caused by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 810, and receives and executes commands sent by the processor 810.
  • the touch panel 8071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 807 may also include other input devices 8072.
  • other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not repeated here.
  • the touch panel 8071 may be overlaid on the display panel 8061.
  • when the touch panel 8071 detects a touch operation on or near it, it transmits the operation to the processor 810 to determine the type of the touch event; the processor 810 then provides corresponding visual output on the display panel 8061 according to the type of the touch event.
  • although the touch panel 8071 and the display panel 8061 are implemented as two separate components to implement the input and output functions of the mobile terminal 800, in some embodiments the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the mobile terminal 800, which is not specifically limited here.
  • the interface unit 808 is an interface for connecting an external device with the mobile terminal 800.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • the interface unit 808 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 800, or may be used to transfer data between the mobile terminal 800 and an external device.
  • the memory 809 may be used to store software programs and various data.
  • the memory 809 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application program required by a function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book).
  • the memory 809 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the processor 810 is the control center of the mobile terminal 800; it connects the various parts of the entire mobile terminal 800 through various interfaces and lines, and performs the various functions of the mobile terminal 800 and processes data by running or executing the software programs and/or modules stored in the memory 809 and calling the data stored in the memory 809, thereby monitoring the mobile terminal 800 as a whole.
  • the processor 810 may include one or more processing units; optionally, the processor 810 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, and application programs, and
  • the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 810.
  • the mobile terminal 800 may further include a power source 811 (such as a battery) for supplying power to the various components.
  • the power source 811 may be logically connected to the processor 810 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
  • the mobile terminal 800 includes some functional modules that are not shown, and details are not described herein again.
  • an embodiment of the present disclosure further provides a mobile terminal including a processor, a memory, and a computer program stored on the memory and executable on the processor;
  • the computer program, when executed by the processor, implements the steps of the image data processing method.
  • an embodiment of the present disclosure further provides a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the image data processing method.
  • the computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the disclosed apparatus and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical functional division; in actual implementation there may be other division manners.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present disclosure, in essence, or the part contributing to the existing technology, or a part of the technical solution, may be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
  • the program may be stored in a computer-readable storage medium;
  • when the program is executed, the processes of the foregoing method embodiments may be included.
  • the storage medium may be a magnetic disk, an optical disk, a ROM, or a RAM.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

An image data processing method and a mobile terminal. The mobile terminal includes a red-green-blue-infrared (RGB-IR) image processing device. The image data processing method includes: obtaining the ambient light intensity (101); activating an infrared light source when the ambient light intensity is lower than a preset light intensity (102); obtaining a face IR image collected by the RGB-IR image processing device (103); and comparing the face IR image with a pre-stored face image, where face recognition succeeds when the face IR image matches the pre-stored face image, and face recognition fails when the face IR image does not match the pre-stored face image (104).

Description

Image data processing method and mobile terminal
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201810775766.5, filed in China on July 16, 2018, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of communication technologies, and in particular to an image data processing method and a mobile terminal.
Background
Face recognition, as a contactless recognition method, needs only to obtain a person's face image and can perform recognition through comparative analysis of facial features. It is a more convenient and accurate recognition technology and is widely used in mobile terminals. As the manufacturing process of red-green-blue-infrared (RGB-IR) sensors has matured, RGB-IR sensors are increasingly used in mobile terminals for image processing. However, in low-light environments, such as at night or indoors, the images collected by an RGB-IR sensor are of poor quality, so the matching success rate of face recognition is low.
Summary
In a first aspect, an embodiment of the present disclosure provides an image data processing method applied to a mobile terminal, where the mobile terminal includes a red-green-blue-infrared RGB-IR image processing device; the method includes:
obtaining the ambient light intensity;
activating an infrared light source when the ambient light intensity is lower than a preset light intensity;
obtaining a face infrared (IR) image collected by the RGB-IR image processing device;
comparing the face IR image with a pre-stored face image, where face recognition succeeds when the face IR image matches the pre-stored face image, and face recognition fails when the face IR image does not match the pre-stored face image.
In a second aspect, an embodiment of the present disclosure further provides a mobile terminal, where the mobile terminal includes an RGB-IR image processing device; the mobile terminal further includes:
a first obtaining module, configured to obtain the ambient light intensity;
an activation module, configured to activate an infrared light source when the ambient light intensity is lower than a preset light intensity;
a second obtaining module, configured to obtain a face IR image collected by the RGB-IR image processing device;
a comparison module, configured to compare the face IR image with a pre-stored face image, where face recognition succeeds when the face IR image matches the pre-stored face image, and face recognition fails when the face IR image does not match the pre-stored face image.
In a third aspect, an embodiment of the present disclosure further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image data processing method described in the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the image data processing method described in the first aspect.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the following briefly introduces the accompanying drawings used in the description of the embodiments. Apparently, the accompanying drawings described below show only some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image data processing method according to an embodiment of the present disclosure;
Fig. 2 is another flowchart of an image data processing method according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of an arrangement of the photosensitive pixel array on the RGB-IR image processing device of a mobile terminal to which the method in Fig. 2 is applied;
Fig. 4 is another schematic diagram of an arrangement of the photosensitive pixel array on the RGB-IR image processing device of a mobile terminal to which the method in Fig. 2 is applied;
Fig. 5 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure;
Fig. 6 is another structural diagram of a mobile terminal according to an embodiment of the present disclosure;
Fig. 7 is still another structural diagram of a mobile terminal according to an embodiment of the present disclosure;
Fig. 8 is yet another structural diagram of a mobile terminal according to an embodiment of the present disclosure.
Detailed description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings of the embodiments of the present disclosure. Apparently, the described embodiments are only some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort shall fall within the protection scope of the present disclosure.
Referring to Fig. 1, Fig. 1 is a flowchart of an image data processing method according to an embodiment of the present disclosure. The image data processing method is applied to a mobile terminal including an RGB-IR image processing device. As shown in Fig. 1, the image data processing method includes the following steps.
Step 101: obtain the ambient light intensity.
It can be understood that the ambient light intensity may be an indoor light intensity, such as that of indoor lighting, or an outdoor light intensity, such as that of natural light.
Step 102: activate an infrared light source when the ambient light intensity is lower than a preset light intensity.
It should be noted that the RGB-IR image processing device includes an RGB-IR sensor and a dual-band-pass filter. The RGB-IR sensor includes a photosensitive pixel array that includes infrared (IR) component pixels, red (R) component pixels, green (G) component pixels, and blue (B) component pixels, and these four kinds of component pixels are distributed in the array in different arrangements.
For example, the dual-band-pass filter may be disposed above the photosensitive pixel array and may include an array of filter units, each filter unit including a color filter region and an infrared filter region. The color filter region is used to obtain the color information of the pixels of the composed image; it blocks infrared light so that a color image, such as a red-green-blue (RGB) image, can be generated in an environment with sufficient light. The infrared filter region is used to pass specific infrared light; in the embodiment of the present disclosure, the infrared filter region is an IR filter region that passes infrared light with a wavelength of about 850 nm or about 940 nm to generate an IR image. The dual-band-pass filter thus enables the mobile terminal to obtain both an RGB image and an IR image through a single RGB-IR sensor, saving space in the mobile terminal.
In addition, the mobile terminal is provided with an infrared light source, which may be an infrared light-emitting diode (LED) lamp, an infrared laser lamp, or the like. The infrared light source may be disposed close to the RGB-IR image processing device, and both the infrared light source and the RGB-IR image processing device are connected to the processor of the mobile terminal.
In the embodiment of the present disclosure, when the RGB-IR image processing device is started, a light sensor provided on the mobile terminal may detect whether the ambient light intensity is lower than a preset light intensity, and the infrared light source is activated when the ambient light intensity is lower than the preset light intensity. For example, when the user starts the RGB-IR image processing device for face recognition at night or in dim rainy weather, the mobile terminal controls the infrared light source to be activated.
It should be noted that the embodiment of the present disclosure may also provide that the infrared light source is not activated if it is detected that the current processing mode of the RGB-IR image processing device is the photographing mode. In other words, the infrared light source is activated only when the current processing mode of the RGB-IR image processing device is the face recognition mode and the current ambient light intensity is lower than the preset light intensity.
Step 103: obtain a face IR image collected by the RGB-IR image processing device.
In the embodiment of the present disclosure, when the current processing mode of the RGB-IR image processing device is the face recognition mode and the infrared light source is on, infrared light reflected by the face is received by the RGB-IR sensor, and the obtained IR component pixels are combined into a face IR image.
It can be understood that, since the RGB-IR sensor includes the photosensitive pixel array, the RGB-IR image processing device can simultaneously obtain the R, G, and B component pixels used to compose a color image and the IR component pixels used to compose an infrared image. In the embodiment of the present disclosure, during face recognition the mobile terminal may choose not to process the obtained R, G, and B component pixels, and instead combine only the obtained IR component pixels into a face IR image to complete the face recognition operation. Alternatively, the mobile terminal may obtain the IR component pixels selectively according to the arrangement of the photosensitive pixel array. For example, when the photosensitive pixel array is arranged such that the odd rows alternate R and G component pixels and the even rows alternate IR and B component pixels, the mobile terminal may choose to read only the even rows of pixels, leave the obtained B component pixels unprocessed, and combine only the obtained IR component pixels into a face IR image to complete the face recognition operation.
Step 104: compare the face IR image with a pre-stored face image; face recognition succeeds when the face IR image matches the pre-stored face image, and face recognition fails when the face IR image does not match the pre-stored face image.
It should be noted that the pre-stored face images are face images of multiple users stored in advance in the mobile terminal. After the face IR image collected by the RGB-IR image processing device is obtained, it is compared with the pre-stored face images through a preset algorithm. If the face IR image matches a pre-stored face image, the face recognition is determined to be successful, and operations that use the face recognition function, such as unlocking the screen of the mobile terminal or making a payment, can be completed. If the face IR image does not match the pre-stored face images, the face recognition fails, and such operations on the mobile terminal, for example screen unlocking or payment, cannot be completed, which ensures the security of the mobile terminal.
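The ambient-light-gated flow of steps 101 to 104 can be sketched as follows. This is a minimal illustration, not the terminal's actual API: `PRESET_LUX` is an arbitrary assumed threshold, and `capture_ir`, `capture_rgb`, and `match` are hypothetical caller-supplied stand-ins for the RGB-IR device and the comparison algorithm.

```python
# Minimal sketch of the ambient-light-gated recognition flow of steps 101-104.
# All names here are illustrative assumptions, not a real device API.

PRESET_LUX = 50  # assumed preset light-intensity threshold


def recognize_face(ambient_lux, capture_ir, capture_rgb, match):
    """Return True when the captured face image matches a pre-stored one."""
    if ambient_lux < PRESET_LUX:
        # Weak light: the infrared source is activated and the face IR image is used.
        face = capture_ir()
    else:
        # Sufficient light: the visible-light (RGB) face image is enough.
        face = capture_rgb()
    return match(face)
```

On real hardware the infrared source would be switched on inside `capture_ir`; here the branch merely selects which capture path feeds the comparison.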
In the embodiment of the present disclosure, the image data processing method may further include:
sequentially reading two adjacent rows of pixel data in the pixel array and obtaining the R component pixels, G component pixels, and B component pixels therein;
combining the obtained R component pixels, G component pixels, and B component pixels into an RGB image.
For example, when the current ambient light intensity is higher than the preset light intensity and it is detected that the current processing mode of the RGB-IR image processing device is the photographing mode, ambient light reflected by the photographed object is received by the RGB-IR sensor; two adjacent rows of pixel data in the photosensitive pixel array of the RGB-IR sensor are read sequentially, and the R, G, and B component pixels therein are obtained to compose an RGB image. The RGB image can be displayed on the display interface of the mobile terminal and, depending on the user's operation, stored in the mobile terminal's storage or discarded.
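The adjacent-row RGB read-out described above can be sketched with a toy mosaic. Modelling pixels as `(channel, value)` pairs is an illustrative assumption, and real demosaicing, which interpolates a full RGB triple at every location, is omitted; the sketch only shows the selection step: read row pairs, keep R, G, and B samples, and discard IR.

```python
# Toy sketch: adjacent row pairs are read together and only the R, G, and B
# samples are kept (IR samples are discarded). Pixels are (channel, value)
# pairs, an illustrative stand-in for raw sensor words.

def compose_rgb(mosaic):
    rgb_rows = []
    for top, bottom in zip(mosaic[0::2], mosaic[1::2]):  # adjacent row pairs
        keep = [(c, v) for c, v in top + bottom if c in ("R", "G", "B")]
        rgb_rows.append(keep)
    return rgb_rows
```

For a 2x2 tile `[[R, G], [IR, B]]` the IR sample is dropped and the three color samples of the pair survive.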
Alternatively, when the current ambient light intensity is higher than the preset light intensity and it is detected that the current processing mode of the RGB-IR image processing device is the face recognition mode, ambient light reflected by the face is received by the RGB-IR sensor, and the R, G, and B component pixels are obtained to compose an RGB face image; the face RGB image is compared with the pre-stored face images to complete the face recognition. In other words, when the ambient light intensity is higher than the preset light intensity, there is no need to activate the infrared light source, and face recognition is completed by obtaining an RGB face image; when the ambient light intensity is lower than the preset light intensity, the infrared light source is activated, and face recognition is completed by obtaining a face IR image. This ensures that the face recognition function of the mobile terminal can complete face recognition under any light intensity, guaranteeing its normal use.
In the technical solution provided by the embodiment of the present disclosure, when the current ambient light intensity is lower than the preset light intensity, the infrared light source is activated, the face IR image collected by the RGB-IR image processing device is obtained, and the face IR image is compared with the pre-stored face images to complete the face recognition. In this way, in an environment with weak light intensity, the mobile terminal obtains the face IR image by activating the infrared light source to complete the face recognition. This compensates for the low matching success rate of visible-light face recognition in weak-light environments, ensures that the face recognition function of the mobile terminal is not affected by the ambient light intensity, and guarantees the normal use of the face recognition function of the mobile terminal.
Referring to Fig. 2, Fig. 2 is another flowchart of an image data processing method according to an embodiment of the present disclosure. The image data processing method is applied to a mobile terminal including an RGB-IR image processing device. As shown in Fig. 2, the image data processing method includes the following steps.
Step 201: obtain the ambient light intensity.
This step may be implemented with reference to step 101 of the embodiment shown in Fig. 1; to avoid repetition, details are not described again here.
Step 202: activate an infrared light source when the ambient light intensity is lower than a preset light intensity.
This step may be implemented with reference to step 102 of the embodiment shown in Fig. 1; to avoid repetition, details are not described again here.
Step 203: when it is determined that the pixel array is arranged in a first arrangement, obtain the IR component pixels collected by the RGB-IR image processing device using a first data processing method, and compose a face IR image.
The first arrangement is: pixels in odd rows include R component pixels and G component pixels, and pixels in even rows include IR component pixels and B component pixels. For example, the photosensitive pixel array may be arranged such that the odd-row pixels cycle in the order R, G, G and the even-row pixels cycle in the order IR, B, B. Alternatively, referring to Fig. 3, the photosensitive pixel array is arranged such that the odd-row pixels cycle in the order R, G and the even-row pixels cycle in the order IR, B.
In the embodiment of the present disclosure, step 203 may include:
when it is determined that the pixel array is arranged in the first arrangement, sequentially reading the even rows of pixel data and obtaining the IR component pixels in the even rows of pixel data;
combining the obtained IR component pixels into a face IR image.
It can be understood that in the first arrangement of the photosensitive pixel array, the IR component pixels are located only in the even rows. To obtain the IR component pixels, the even rows of pixel data may be read sequentially, the IR component pixels therein extracted, and the extracted IR component pixels combined into a face IR image. The B component pixels of the even rows may be discarded or cached in the mobile terminal. In this way, the odd rows of pixel data need not be obtained or processed, which saves data storage space of the mobile terminal and improves its image data processing capability.
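The even-row read-out just described can be illustrated with a toy mosaic. This is a sketch under assumptions: pixels are modelled as `(channel, value)` pairs rather than raw sensor words, and "even rows" are counted 1-based as in the description (the second, fourth, ... rows, i.e. 0-based indices 1, 3, ...).

```python
# Toy sketch of the first-arrangement read-out (Fig. 3): IR samples sit only
# in the even rows, so the odd rows are never read at all; the B samples in
# the even rows are simply dropped.

def extract_ir_first_arrangement(mosaic):
    """Collect the IR component pixels row by row from the even rows only."""
    ir_rows = []
    for row in mosaic[1::2]:  # even (second, fourth, ...) rows
        ir_rows.append([v for c, v in row if c == "IR"])
    return ir_rows
```

Skipping the odd rows entirely is what saves read-out and storage work compared with reading the full frame.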
It should be noted that step 203 may also be: when it is determined that the pixel array is arranged in the first arrangement, sequentially reading an odd row of pixel data and an adjacent even row of pixel data, extracting the IR component pixels therein, and combining the extracted IR component pixels into a face IR image. The read R, G, and B component pixels are discarded or cached in the mobile terminal without being used for composition.
Step 204: when it is determined that the pixel array is arranged in a second arrangement, obtain the IR component pixels collected by the RGB-IR image processing device using a second data processing method, and compose a face IR image.
The second arrangement is: pixels in odd rows include R component pixels, G component pixels, and IR component pixels, and pixels in even rows include IR component pixels, B component pixels, and R component pixels; these component pixels may be arranged in different orders. For example, the photosensitive pixel array may be arranged such that the odd-row pixels cycle in the order R, G, IR, B and the even-row pixels are arranged in the order IR, B, R, G. Alternatively, referring to Fig. 4, the photosensitive pixel array is arranged such that the odd-row pixels cycle in the order R, G, IR, G and the even-row pixels are arranged in the order IR, B, R, B.
In the embodiment of the present disclosure, step 204 may include:
when it is determined that the pixel array is arranged in the second arrangement, sequentially reading an odd row of pixel data and an even row of pixel data adjacent to the odd row, and obtaining the IR component pixels in the odd row and even row of pixel data;
combining the obtained IR component pixels into a face IR image.
It can be understood that when the photosensitive pixel array is in the second arrangement, both the odd rows and the even rows include IR component pixels. Two rows of data may be read at a time from top to bottom in the order shown in Fig. 4, that is, an odd row of pixel data and the adjacent even row of pixel data; the IR pixel components therein are extracted and combined into a face IR image. The read R, G, and B component pixels are discarded or cached in the mobile terminal without being used for composition.
In addition, when the photosensitive pixel array is arranged as in Fig. 4, step 204 may further include: sequentially reading the odd columns of pixel data, obtaining the IR component pixels in the odd columns of pixel data, and combining the obtained IR component pixels into a face IR image. The obtained R component pixels of the odd columns may be discarded or cached in the mobile terminal. In this way, the even columns of pixel data need not be obtained or processed, which saves data storage space of the mobile terminal and improves its image data processing capability.
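The second-arrangement row-pair read-out can be sketched in the same toy representation. The `(channel, value)` pixel model is an illustrative assumption; because IR samples appear in both rows of each adjacent odd/even pair, the pair is read together and the IR components of both rows are collected.

```python
# Toy sketch of the second-arrangement read-out (Fig. 4): every adjacent
# odd/even row pair carries IR samples in both rows, so the pair is read
# together and the IR components of both rows are combined.

def extract_ir_second_arrangement(mosaic):
    ir_rows = []
    for odd_row, even_row in zip(mosaic[0::2], mosaic[1::2]):
        ir_rows.append([v for c, v in odd_row + even_row if c == "IR"])
    return ir_rows
```

The R, G, and B samples read alongside the IR samples are simply filtered out, mirroring the description's "discard or cache without composition".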
Step 205: compare the face IR image with a pre-stored face image; face recognition succeeds when the face IR image matches the pre-stored face image, and face recognition fails when the face IR image does not match the pre-stored face image.
This step may be implemented with reference to step 104 of the embodiment shown in Fig. 1; to avoid repetition, details are not described again here.
It should be noted that the image data processing method may further include:
sequentially reading two adjacent rows of pixel data in the pixel array and obtaining the R component pixels, G component pixels, and B component pixels therein;
combining the obtained R component pixels, G component pixels, and B component pixels into an RGB image.
For example, when the current ambient light intensity is higher than the preset light intensity and it is detected that the current processing mode of the RGB-IR image processing device is the photographing mode, ambient light reflected by the photographed object is received by the RGB-IR sensor; two adjacent rows of pixel data in the photosensitive pixel array of the RGB-IR sensor are read sequentially, and the R, G, and B component pixels therein are obtained to compose an RGB image. The RGB image can be displayed on the display interface of the mobile terminal and, depending on the user's operation, stored in the mobile terminal's storage or discarded.
Alternatively, when the current ambient light intensity is higher than the preset light intensity and it is detected that the current processing mode of the RGB-IR image processing device is the face recognition mode, ambient light reflected by the face is received by the RGB-IR sensor, and the R, G, and B component pixels are obtained to compose an RGB face image; the face RGB image is compared with the pre-stored face images to complete the face recognition. In other words, when the ambient light intensity is higher than the preset light intensity, there is no need to activate the infrared light source, and face recognition is completed by obtaining an RGB face image; when the ambient light intensity is lower than the preset light intensity, the infrared light source is activated, and face recognition is completed by obtaining a face IR image.
In the embodiment of the present disclosure, the mobile terminal can use different data processing methods, according to the different arrangements of the photosensitive pixel array, to obtain the IR component pixels collected by the RGB-IR image processing device, compose a face IR image, and complete the face recognition. This makes the image data processing of the mobile terminal more targeted and improves its image data processing capability. In addition, in an environment with weak light intensity, the mobile terminal obtains the face IR image by activating the infrared light source to complete the face recognition, which ensures that the face recognition function of the mobile terminal is not affected by the ambient light intensity and improves the success rate of face recognition on the mobile terminal.
An embodiment of the present disclosure further provides a mobile terminal including an RGB-IR image processing device. As shown in Fig. 5, a mobile terminal 500 provided by an embodiment of the present disclosure includes:
a first obtaining module 501, configured to obtain the ambient light intensity;
an activation module 502, configured to activate an infrared light source when the ambient light intensity is lower than a preset light intensity;
a second obtaining module 503, configured to obtain a face IR image collected by the RGB-IR image processing device;
a comparison module 504, configured to compare the face IR image with a pre-stored face image, where face recognition succeeds when the face IR image matches the pre-stored face image, and face recognition fails when the face IR image does not match the pre-stored face image.
Optionally, the second obtaining module 503 is further configured to:
when it is determined that the pixel array is arranged in a first arrangement, obtain the IR component pixels collected by the RGB-IR image processing device using a first data processing method, and compose a face IR image;
when it is determined that the pixel array is arranged in a second arrangement, obtain the IR component pixels collected by the RGB-IR image processing device using a second data processing method, and compose a face IR image;
where the first arrangement is: pixels in odd rows include R component pixels and G component pixels, and pixels in even rows include IR component pixels and B component pixels;
and the second arrangement is: pixels in odd rows include R component pixels, G component pixels, and IR component pixels, and pixels in even rows include IR component pixels, B component pixels, and R component pixels.
Optionally, referring to Fig. 6, the second obtaining module 503 includes:
a first obtaining submodule 5031, configured to, when it is determined that the pixel array is arranged in the first arrangement, sequentially read the even rows of pixel data and obtain the IR component pixels in the even rows of pixel data;
a first composition submodule 5032, configured to combine the obtained IR component pixels into a face IR image.
Optionally, referring to Fig. 7, the second obtaining module 503 includes:
a second obtaining submodule 5033, configured to, when it is determined that the pixel array is arranged in the second arrangement, sequentially read an odd row of pixel data and an even row of pixel data adjacent to the odd row, and obtain the IR component pixels in the odd row and even row of pixel data;
a second composition submodule 5034, configured to combine the obtained IR component pixels into a face IR image.
Optionally, the mobile terminal 500 further includes:
a reading module, configured to sequentially read two adjacent rows of pixel data in the pixel array and obtain the R component pixels, G component pixels, and B component pixels therein;
a composition module, configured to combine the obtained R component pixels, G component pixels, and B component pixels into an RGB image.
The mobile terminal provided by the embodiment of the present disclosure can implement the processes implemented by the mobile terminal in the method embodiments of Figs. 1 to 4; to avoid repetition, details are not described again here.
In the embodiment of the present disclosure, the first obtaining module 501 obtains the ambient light intensity; when the ambient light intensity is lower than a preset light intensity, the activation module 502 activates the infrared light source; the second obtaining module 503 obtains the face IR image collected by the RGB-IR image processing device; and the comparison module 504 compares the face IR image with a pre-stored face image, where face recognition succeeds when the face IR image matches the pre-stored face image and fails when it does not. In this way, in an environment with weak light intensity, the mobile terminal 500 obtains the face IR image by activating the infrared light source to complete the face recognition; this compensates for the low matching success rate of visible-light face recognition in weak-light environments, ensures that the face recognition function of the mobile terminal 500 is not affected by the ambient light intensity, and guarantees the normal use of the face recognition function of the mobile terminal 500.
Referring to Fig. 8, Fig. 8 is another structural diagram of a mobile terminal implementing an embodiment of the present disclosure. The mobile terminal can implement the processes implemented by the mobile terminal in the method embodiments of Figs. 1 to 4 and achieve the same technical effects. The mobile terminal 800 includes, but is not limited to, components such as a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power source 811. A person skilled in the art can understand that the mobile terminal structure shown in Fig. 8 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than shown, combine certain components, or use a different arrangement of components. In the embodiments of the present disclosure, mobile terminals include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, pedometers, and the like. It should be noted that the mobile terminal 800 includes an RGB-IR image processing device (not shown).
The processor 810 is configured to:
obtain the ambient light intensity;
activate an infrared light source when the ambient light intensity is lower than a preset light intensity;
obtain a face IR image collected by the RGB-IR image processing device;
compare the face IR image with a pre-stored face image, where face recognition succeeds when the face IR image matches the pre-stored face image, and face recognition fails when the face IR image does not match the pre-stored face image.
The processor 810 is further configured to:
when it is determined that the pixel array is arranged in a first arrangement, obtain the IR component pixels collected by the RGB-IR image processing device using a first data processing method, and compose a face IR image;
when it is determined that the pixel array is arranged in a second arrangement, obtain the IR component pixels collected by the RGB-IR image processing device using a second data processing method, and compose a face IR image.
The first arrangement is: pixels in odd rows include R component pixels and G component pixels, and pixels in even rows include IR component pixels and B component pixels;
the second arrangement is: pixels in odd rows include R component pixels, G component pixels, and IR component pixels, and pixels in even rows include IR component pixels, B component pixels, and R component pixels.
The processor 810 is further configured to:
when it is determined that the pixel array is arranged in the first arrangement, sequentially read the even rows of pixel data and obtain the IR component pixels in the even rows of pixel data;
combine the obtained IR component pixels into a face IR image.
The processor 810 is further configured to:
when it is determined that the pixel array is arranged in the second arrangement, sequentially read an odd row of pixel data and an even row of pixel data adjacent to the odd row, and obtain the IR component pixels in the odd row and even row of pixel data;
combine the obtained IR component pixels into a face IR image.
The processor 810 is further configured to:
sequentially read two adjacent rows of pixel data in the pixel array and obtain the R component pixels, G component pixels, and B component pixels therein;
combine the obtained R component pixels, G component pixels, and B component pixels into an RGB image.
The mobile terminal provided by the embodiment of the present disclosure can implement the processes implemented by the mobile terminal in the foregoing image data processing method embodiments; to avoid repetition, details are not described again here.
In the embodiment of the present disclosure, the mobile terminal 800 obtains the ambient light intensity; when the ambient light intensity is lower than a preset light intensity, it activates the infrared light source, obtains the face IR image collected by the RGB-IR image processing device, and compares the face IR image with a pre-stored face image to complete the face recognition. In this way, in an environment with weak light intensity, the mobile terminal obtains the face IR image by activating the infrared light source to complete the face recognition; this compensates for the low matching success rate of visible-light face recognition in weak-light environments, ensures that the face recognition function of the mobile terminal is not affected by the ambient light intensity, and guarantees the normal use of the face recognition function of the mobile terminal.
It should be understood that, in the embodiment of the present disclosure, the radio frequency unit 801 may be used to receive and send signals during the process of sending and receiving information or during a call; specifically, it receives downlink data from the base station and delivers it to the processor 810 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal 800 provides the user with wireless broadband Internet access through the network module 802, for example helping the user send and receive email, browse web pages, and access streaming media.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802, or stored in the memory 809, into an audio signal and output it as sound. Moreover, the audio output unit 803 may also provide audio output related to a specific function performed by the mobile terminal 800 (for example, a call signal reception sound or a message reception sound). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used to receive audio or video signals. The input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042. The graphics processor 8041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 806. The image frames processed by the graphics processor 8041 may be stored in the memory 809 (or other computer-readable storage medium) or transmitted via the radio frequency unit 801 or the network module 802. The microphone 8042 can receive sound and process it into audio data; in the case of a telephone call mode, the processed audio data can be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 801 and output.
The mobile terminal 800 further includes at least one sensor 805, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor. The ambient light sensor can adjust the brightness of the display panel 8061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 8061 and/or the backlight when the mobile terminal 800 is moved to the ear. As one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used to identify the attitude of the mobile terminal (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 805 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described again here.
The display unit 806 is used to display information input by the user or information provided to the user. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 807 may be used to receive input numeric or character information and to generate key signal inputs related to the user settings and function control of the mobile terminal 800. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also known as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 8071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal caused by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 810, and receives and executes commands sent by the processor 810. In addition, the touch panel 8071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 8071, the user input unit 807 may also include other input devices 8072. Specifically, the other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described again here.
Further, the touch panel 8071 may be overlaid on the display panel 8061. When the touch panel 8071 detects a touch operation on or near it, it transmits the operation to the processor 810 to determine the type of the touch event; the processor 810 then provides corresponding visual output on the display panel 8061 according to the type of the touch event. Although in Fig. 8 the touch panel 8071 and the display panel 8061 are implemented as two separate components to implement the input and output functions of the mobile terminal 800, in some embodiments the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the mobile terminal 800, which is not specifically limited here.
The interface unit 808 is an interface for connecting an external device with the mobile terminal 800. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 800, or may be used to transfer data between the mobile terminal 800 and an external device.
The memory 809 may be used to store software programs and various data. The memory 809 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application program required by a function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 809 may include a high-speed random access memory and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The processor 810 is the control center of the mobile terminal 800; it connects the various parts of the entire mobile terminal 800 through various interfaces and lines, and performs the various functions of the mobile terminal 800 and processes data by running or executing the software programs and/or modules stored in the memory 809 and calling the data stored in the memory 809, thereby monitoring the mobile terminal 800 as a whole. The processor 810 may include one or more processing units; optionally, the processor 810 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 810.
The mobile terminal 800 may further include a power source 811 (such as a battery) for supplying power to the various components. Optionally, the power source 811 may be logically connected to the processor 810 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
In addition, the mobile terminal 800 includes some functional modules that are not shown, which are not described again here.
Optionally, an embodiment of the present disclosure further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the processes of the foregoing image data processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not described again here.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the processes of the foregoing image data processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not described again here. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such an implementation should not be considered beyond the scope of the present disclosure.
A person skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not described again here.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are only illustrative; the division of the units is only a logical functional division, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present disclosure, in essence, or the part contributing to the existing technology, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present disclosure. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
A person of ordinary skill in the art can understand that all or part of the processes of the methods of the foregoing embodiments may be implemented by a computer program controlling related hardware; the program may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the foregoing methods. The storage medium may be a magnetic disk, an optical disk, a ROM, a RAM, or the like.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present disclosure, which shall all fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

  1. An image data processing method, applied to a mobile terminal, where the mobile terminal includes a red-green-blue-infrared RGB-IR image processing device, the method comprising:
    obtaining the ambient light intensity;
    activating an infrared light source when the ambient light intensity is lower than a preset light intensity;
    obtaining a face infrared IR image collected by the RGB-IR image processing device; and
    comparing the face IR image with a pre-stored face image, where face recognition succeeds when the face IR image matches the pre-stored face image, and face recognition fails when the face IR image does not match the pre-stored face image.
  2. The image data processing method according to claim 1, wherein the step of obtaining the face IR image collected by the RGB-IR image processing device comprises:
    when it is determined that a pixel array is arranged in a first arrangement, obtaining the IR component pixels collected by the RGB-IR image processing device using a first data processing method, and composing a face IR image;
    when it is determined that the pixel array is arranged in a second arrangement, obtaining the IR component pixels collected by the RGB-IR image processing device using a second data processing method, and composing a face IR image;
    wherein the first arrangement is: pixels in odd rows include R component pixels and G component pixels, and pixels in even rows include IR component pixels and B component pixels; and
    the second arrangement is: pixels in odd rows include R component pixels, G component pixels, and IR component pixels, and pixels in even rows include IR component pixels, B component pixels, and R component pixels.
  3. The image data processing method according to claim 2, wherein the step of, when it is determined that the pixel array is arranged in the first arrangement, obtaining the IR component pixels collected by the RGB-IR image processing device using the first data processing method and composing a face IR image comprises:
    when it is determined that the pixel array is arranged in the first arrangement, sequentially reading the even rows of pixel data, and obtaining the IR component pixels in the even rows of pixel data; and
    combining the obtained IR component pixels into a face IR image.
  4. The image data processing method according to claim 2, wherein the step of, when it is determined that the pixel array is arranged in the second arrangement, obtaining the IR component pixels collected by the RGB-IR image processing device using the second data processing method and composing a face IR image comprises:
    when it is determined that the pixel array is arranged in the second arrangement, sequentially reading an odd row of pixel data and an even row of pixel data adjacent to the odd row, and obtaining the IR component pixels in the odd row and even row of pixel data; and
    combining the obtained IR component pixels into a face IR image.
  5. The image data processing method according to any one of claims 2 to 4, further comprising:
    sequentially reading two adjacent rows of pixel data in the pixel array, and obtaining the R component pixels, G component pixels, and B component pixels therein; and
    combining the obtained R component pixels, G component pixels, and B component pixels into an RGB image.
  6. A mobile terminal, comprising:
    a red-green-blue-infrared RGB-IR image processing device;
    a first obtaining module, configured to obtain the ambient light intensity;
    an activation module, configured to activate an infrared light source when the ambient light intensity is lower than a preset light intensity;
    a second obtaining module, configured to obtain a face IR image collected by the RGB-IR image processing device; and
    a comparison module, configured to compare the face infrared IR image with a pre-stored face image, where face recognition succeeds when the face IR image matches the pre-stored face image, and face recognition fails when the face IR image does not match the pre-stored face image.
  7. The mobile terminal according to claim 6, wherein the second obtaining module is further configured to:
    when it is determined that a pixel array is arranged in a first arrangement, obtain the IR component pixels collected by the RGB-IR image processing device using a first data processing method, and compose a face IR image;
    when it is determined that the pixel array is arranged in a second arrangement, obtain the IR component pixels collected by the RGB-IR image processing device using a second data processing method, and compose a face IR image;
    wherein the first arrangement is: pixels in odd rows include R component pixels and G component pixels, and pixels in even rows include IR component pixels and B component pixels; and
    the second arrangement is: pixels in odd rows include R component pixels, G component pixels, and IR component pixels, and pixels in even rows include IR component pixels, B component pixels, and R component pixels.
  8. The mobile terminal according to claim 7, wherein the second obtaining module comprises:
    a first obtaining submodule, configured to, when it is determined that the pixel array is arranged in the first arrangement, sequentially read the even rows of pixel data and obtain the IR component pixels in the even rows of pixel data; and
    a first composition submodule, configured to combine the obtained IR component pixels into a face IR image.
  9. The mobile terminal according to claim 7, wherein the second obtaining module comprises:
    a second obtaining submodule, configured to, when it is determined that the pixel array is arranged in the second arrangement, sequentially read an odd row of pixel data and an even row of pixel data adjacent to the odd row, and obtain the IR component pixels in the odd row and even row of pixel data; and
    a second composition submodule, configured to combine the obtained IR component pixels into a face IR image.
  10. The mobile terminal according to any one of claims 7 to 9, further comprising:
    a reading module, configured to sequentially read two adjacent rows of pixel data in the pixel array and obtain the R component pixels, G component pixels, and B component pixels therein; and
    a composition module, configured to combine the obtained R component pixels, G component pixels, and B component pixels into an RGB image.
  11. A mobile terminal, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image data processing method according to any one of claims 1 to 5.
  12. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the image data processing method according to any one of claims 1 to 5.
PCT/CN2019/094696 2018-07-16 2019-07-04 Image data processing method and mobile terminal WO2020015538A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810775766.5 2018-07-16
CN201810775766.5A CN108960179A (zh) 2018-07-16 Image data processing method and mobile terminal

Publications (1)

Publication Number Publication Date
WO2020015538A1 true WO2020015538A1 (zh) 2020-01-23

Family

ID=64481341

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/094696 WO2020015538A1 (zh) 2019-07-04 Image data processing method and mobile terminal

Country Status (2)

Country Link
CN (1) CN108960179A (zh)
WO (1) WO2020015538A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113673284A (zh) * 2020-05-15 2021-11-19 Shenzhen Guangjian Technology Co., Ltd. Depth camera capture method, system, device, and medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960179A (zh) * 2018-07-16 2018-12-07 维沃移动通信有限公司 一种图像数据处理方法及移动终端
CN112001724A (zh) * 2019-05-27 2020-11-27 阿里巴巴集团控股有限公司 数据处理方法、装置、设备和存储介质
CN112001208B (zh) * 2019-05-27 2024-08-27 虹软科技股份有限公司 用于车辆盲区的目标检测方法、装置和电子设备
CN112584109A (zh) * 2019-09-30 2021-03-30 长城汽车股份有限公司 车用摄像装置及车用图像处理方法
CN110889102B (zh) * 2019-11-27 2023-06-20 Tcl移动通信科技(宁波)有限公司 图像解锁方法、装置、计算机可读存储介质及终端
CN111246186A (zh) * 2020-01-21 2020-06-05 重庆金康新能源汽车有限公司 车载摄像机系统及方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372486A (zh) * 2016-09-29 2017-02-01 Zhengzhou Yunhai Information Technology Co., Ltd. Mouse, facial recognition system and method
CN107205139A (zh) * 2017-06-28 2017-09-26 Chongqing Zhongke CloudWalk Technology Co., Ltd. Image sensor with multi-channel acquisition and acquisition method
CN107534742A (zh) * 2015-07-09 2018-01-02 Huawei Technologies Co., Ltd. Imaging method, image sensor, and imaging device
CN107967418A (zh) * 2017-11-28 2018-04-27 Vivo Mobile Communication Co., Ltd. Face recognition method and mobile terminal
CN108282644A (zh) * 2018-02-14 2018-07-13 Beijing Feishi Technology Co., Ltd. Single-camera imaging method and apparatus
CN108960179A (zh) * 2018-07-16 2018-12-07 Vivo Mobile Communication Co., Ltd. Image data processing method and mobile terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9392262B2 (en) * 2014-03-07 2016-07-12 Aquifi, Inc. System and method for 3D reconstruction using multiple multi-channel cameras
CN104463112B (zh) * 2014-11-27 2018-04-06 Shenzhen Kepa Information Technology Co., Ltd. Biometric recognition method and recognition system using an RGB+IR image sensor
CN106218584A (zh) * 2016-08-16 2016-12-14 Zhangjiagang Chang'an University Automotive Engineering Research Institute Vehicle anti-theft system based on infrared and face recognition technology
CN107506752A (zh) * 2017-09-18 2017-12-22 Epticore Microelectronics (Shanghai) Co., Ltd. Face recognition device and method
CN207491128U (zh) * 2017-11-30 2018-06-12 Beijing IrisKing Technology Co., Ltd. RGB+IR image acquisition device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113673284A (zh) * 2020-05-15 2021-11-19 Shenzhen Guangjian Technology Co., Ltd. Depth camera capture method, system, device, and medium
CN113673284B (zh) * 2020-05-15 2023-08-08 Shenzhen Guangjian Technology Co., Ltd. Depth camera capture method, system, device, and medium

Also Published As

Publication number Publication date
CN108960179A (zh) 2018-12-07

Similar Documents

Publication Publication Date Title
WO2020015538A1 (zh) Image data processing method and mobile terminal
US20210150171A1 (en) Object recognition method and mobile terminal
US11778304B2 (en) Shooting method and terminal
CN107977144B (zh) 一种截屏处理方法及移动终端
CN110969981B (zh) 屏幕显示参数调节方法及电子设备
CN109934137A (zh) 一种光电指纹识别装置、终端及指纹识别方法
WO2019144956A1 (zh) 图像传感器、镜头模组、移动终端、人脸识别方法及装置
WO2019174628A1 (zh) 拍照方法及移动终端
CN108777766B (zh) 一种多人拍照方法、终端及存储介质
CN108712603B (zh) 一种图像处理方法及移动终端
CN108564015B (zh) 一种指纹识别方法及移动终端
CN109240577B (zh) 一种截屏方法及终端
US11463642B2 (en) Image sensor including pixel array and mobile terminal
CN107845057A (zh) 一种拍照预览方法及移动终端
US11863901B2 (en) Photographing method and terminal
CN109544445B (zh) 一种图像处理方法、装置及移动终端
CN111083374B (zh) 滤镜添加方法及电子设备
US11996421B2 (en) Image sensor, mobile terminal, and image capturing method
CN110826438A (zh) 一种显示方法及电子设备
CN110175259B (zh) 图片显示方法、可穿戴设备及计算机可读存储介质
CN109272549B (zh) 一种红外热点的位置确定方法及终端设备
CN112308771B (zh) 一种图像处理方法、装置及电子设备
CN110049253B (zh) 一种对焦控制方法、设备及计算机可读存储介质
CN110248050B (zh) 一种摄像头模组及移动终端
CN111696051A (zh) 人像修复方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19836991

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19836991

Country of ref document: EP

Kind code of ref document: A1