CN111090104B - Imaging processing method and electronic device - Google Patents


Info

Publication number: CN111090104B (application CN201911368178.0A)
Authority: CN (China)
Prior art keywords: sub-pixel array, pixel, image, receiving
Legal status: Active
Application number: CN201911368178.0A
Other languages: Chinese (zh)
Other versions: CN111090104A
Inventors: 成通, 林华鑫
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority: CN201911368178.0A
Publication of CN111090104A
PCT application: PCT/CN2020/139208 (WO2021129776A1)
Application granted
Publication of CN111090104B

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging


Abstract

The invention discloses an imaging processing method and an electronic device. The method includes: acquiring a first target position of a target object in a first image, where the first image is generated by the pixel array of a TOF sensor module in the on state; determining, in a sub-pixel array list, a first sub-pixel array corresponding to the first target position, the sub-pixel array list including a plurality of sub-pixel arrays of the pixel array; and turning off the sub-pixel arrays other than the first sub-pixel array, and updating the first image based on the first sub-pixel array. The pixel array of the TOF sensor module is thus divided into a plurality of sub-pixel arrays in advance, and only the necessary subset of sub-pixel arrays is kept on while imaging the target object, so the target object is still imaged while the power consumption of the TOF sensor module is effectively reduced.

Description

Imaging processing method and electronic device
Technical Field
The present invention relates to the field of terminals, and in particular, to an imaging processing method and an electronic device.
Background
A TOF (time-of-flight) sensor module continuously transmits light pulses to a target object through the pixel array of a transmitting sensor, receives the light pulses returned from the target object with the pixel array of a receiving sensor, obtains the distance of the target object by measuring the round-trip time of flight of the pulses, and generates a depth image or 3D (three-dimensional) image from the measured points.
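The distance measurement described above follows directly from the speed of light. A minimal illustrative sketch (the function name and the 10 ns example are my own, not from the patent):

```python
# Distance from the measured round-trip time of a light pulse: d = c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s


def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target, given the round-trip flight time of one pulse."""
    return C * round_trip_time_s / 2.0


# A 10 ns round trip corresponds to roughly 1.5 m.
d = tof_distance(10e-9)
```

Repeating this measurement for every pixel of the receiving array yields the matrix of distances that becomes the depth image.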
At present, when a TOF sensor module images a target object, the entire pixel arrays of both the transmitting sensor and the receiving sensor are active, so power consumption is high.
Therefore, a lower-power imaging processing scheme is needed.
Disclosure of Invention
The embodiments of the invention aim to provide an imaging processing method and an electronic device that solve the problem of high power consumption when a TOF sensor module performs imaging processing on an object.
In a first aspect, a method of imaging processing is provided, the method comprising:
acquiring a first target position of a target object in a first image, where the first image is generated by a pixel array of a TOF sensor module in the on state;
determining a first sub-pixel array corresponding to the first target position in a sub-pixel array list, the sub-pixel array list including a plurality of sub-pixel arrays of the pixel array;
and turning off the sub-pixel arrays other than the first sub-pixel array, and updating the first image based on the first sub-pixel array.
In a second aspect, an electronic device is provided, the electronic device comprising:
an acquisition module, configured to acquire a first target position of a target object in a first image, where the first image is generated by a pixel array of a TOF sensor module in the on state;
a determining module, configured to determine a first sub-pixel array corresponding to the first target position in a sub-pixel array list, the sub-pixel array list including a plurality of sub-pixel arrays of the pixel array;
and a closing module, configured to turn off the sub-pixel arrays other than the first sub-pixel array and update the first image based on the first sub-pixel array.
In a third aspect, an electronic device is provided, which includes: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the method according to the first aspect.
In the embodiments of the invention, the pixel array of the TOF sensor module is divided into a plurality of sub-pixel arrays in advance, and only the necessary subset of sub-pixel arrays is turned on during imaging of the target object, so the target object is still imaged while the power consumption of the TOF sensor module is effectively reduced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic flow diagram of an imaging processing method provided by one embodiment of the invention;
FIG. 2a is a schematic diagram of sub-pixel array partitioning of an emission sensor of a TOF sensor module according to an embodiment of the present disclosure;
FIG. 2b is a schematic diagram of sub-pixel array partitioning of an emission sensor of a TOF sensor module according to another embodiment of the present disclosure;
FIG. 2c is a schematic diagram of sub-pixel array partitioning of a receiving sensor of a TOF sensor module according to an embodiment of the present disclosure;
FIG. 2d is a schematic diagram of sub-pixel array partitioning of a receiving sensor of a TOF sensor module according to another embodiment of the present disclosure;
FIG. 2e is a schematic diagram of a first image and a second image containing a target object according to an embodiment of the present invention;
FIG. 2f is a schematic diagram of a third image containing a target object according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to yet another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Various embodiments of the present invention are described in detail below with reference to the following figures:
fig. 1 is a schematic flowchart of an imaging processing method according to an embodiment of the present invention, which may be executed by an electronic device, and referring to fig. 1, the method may specifically include the following steps:
step 102: and acquiring a first target position of the target object in a first image, wherein the first image is generated by a pixel array of the TOF sensor module in an opening state.
The pixel array of the TOF sensor module includes the pixel array of the emitting sensor shown in figs. 2a and 2b and the pixel array of the receiving sensor shown in figs. 2c to 2f. The target object may be located inside or outside the overlapping region of the field angles of the emitting and receiving sensors. The pixel array of the emitting sensor contains a large number of laser diodes, which convert an electrical signal into a light pulse signal and emit it toward the target object; the emitting sensor may be a vertical-cavity surface-emitting laser (VCSEL) chip, a distributed Bragg reflector laser diode, a grating-coupled sampled-reflection laser diode, or the like.
The pixel array of the receiving sensor contains a large number of photodiodes; it receives the light pulse signals reflected by the target object, converts them into electrical signals, and generates a digital signal matrix (i.e., an image) through an amplifying circuit and an analog-to-digital conversion circuit. The receiving sensor may be a chip similar to a complementary metal-oxide-semiconductor (CMOS) chip, differing in that its circuit design gives it higher quantum efficiency in the infrared band (i.e., stronger photoelectric conversion capability) and that it uses global-shutter exposure. The first image is an image containing the target object generated by the pixel array of the receiving sensor.
It should be noted that, one implementation of step 102 may be:
Step 1021: when it is detected that the TOF sensor module has been turned on, turn on the pixel array of the emitting sensor and the pixel array of the receiving sensor of the TOF sensor module;
Step 1022: generate a first image containing the target object based on the two pixel arrays, and acquire the first target position of the target object in the first image.
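Steps 1021 and 1022 can be sketched as follows. This is a hypothetical illustration: the patent does not specify how the target position is derived from the digital signal matrix, so the centroid of above-threshold pixels is assumed here, and all names and the sample matrix are invented.

```python
# Assumed: the first image is a 2D matrix of received intensities, and the
# first target position is taken as the centroid of pixels above a threshold.
def locate_target(image, threshold=0):
    """Return the (row, col) centroid of pixels brighter than threshold, or None."""
    hits = [(r, c) for r, row in enumerate(image)
                   for c, v in enumerate(row) if v > threshold]
    if not hits:
        return None
    n = len(hits)
    return (sum(r for r, _ in hits) / n, sum(c for _, c in hits) / n)


# Toy 4x4 "first image" with the target object occupying the center.
first_image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
first_target_position = locate_target(first_image)  # (1.5, 1.5)
```

In a real module the matrix would come from the receiving sensor's amplifying and analog-to-digital circuits; any detection method that yields image coordinates for the target would serve here.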
Step 104: determine, in a sub-pixel array list, a first sub-pixel array corresponding to the first target position, the sub-pixel array list including a plurality of sub-pixel arrays of the pixel array.
The sub-pixel array list includes a first sub-pixel array list and a second sub-pixel array list. The first sub-pixel array list contains the sub-pixel arrays obtained by dividing the pixel array of the receiving sensor of the TOF sensor module (two sub-pixel arrays occupying the A1 and B1 regions, as shown in fig. 2c, or four sub-pixel arrays occupying the A1, B1, C1, and D1 regions, as shown in figs. 2d to 2f). The second sub-pixel array list contains the sub-pixel arrays obtained by dividing the pixel array of the emitting sensor of the TOF sensor module (two sub-pixel arrays occupying the A and B regions, as shown in fig. 2a, or four sub-pixel arrays occupying the A, B, C, and D regions, as shown in fig. 2b).
It should be noted that, one implementation manner of step 104 may be:
step 1041: determining a first receiving sub-pixel array corresponding to the first target position in the first sub-pixel array list, wherein the first receiving sub-pixel array is used for receiving the light pulse signals reflected by the target object;
step 1042: determining a first emitting sub-pixel array corresponding to the first receiving sub-pixel array in the second sub-pixel array list, wherein the first emitting sub-pixel array is used for emitting the light pulse signals to the target object.
Referring to figs. 2b and 2e, assume the emitting sensor is a VCSEL chip, its pixel array is divided into four sub-pixel arrays covering the A, B, C, and D regions, the pixel array of the receiving sensor is divided into four sub-pixel arrays covering the A1, B1, C1, and D1 regions, and the target object lies in the overlapping area of the field angles of the two sensors. Steps 1021, 1022, 1041, and 1042 may then proceed as follows:
After detecting that the TOF sensor module has changed from the off state to the on state, the electronic device turns on the sub-pixel arrays of the A, B, C, and D regions of the emitting sensor and the sub-pixel arrays of the A1, B1, C1, and D1 regions of the receiving sensor. The emitting sub-pixel arrays of the A, B, C, and D regions emit light pulse signals toward the target object "F"; the receiving sub-pixel arrays of the A1, B1, C1, and D1 regions receive the light pulse signals reflected by the target object and convert them into electrical signals, and the amplifying circuit and analog-to-digital conversion circuit in the logic circuit area generate a digital signal matrix (i.e., the first image containing the target object "F" shown on the left of fig. 2e). The electronic device then determines the first target position from the coordinates of the target object "F" in the first image, determines that the first receiving sub-pixel array the receiving sensor needs to keep on for that position is the B1 sub-pixel array, and, based on the pre-established correspondence between receiving and emitting sub-pixel arrays, determines that the corresponding first emitting sub-pixel array is the B sub-pixel array.
Therefore, determining the corresponding first receiving sub-pixel array from the first target position, and the corresponding first emitting sub-pixel array from the first receiving sub-pixel array, identifies accurately and efficiently the necessary sub-pixel arrays that guarantee complete imaging of the target object.
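Steps 1041 and 1042 amount to a position-to-region lookup followed by a table lookup. A minimal sketch, assuming the four-region division of figs. 2b and 2d with A1 top-left, B1 top-right, C1 bottom-left, and D1 bottom-right (the patent does not fix the exact layout, and all names here are illustrative):

```python
# Step 1041 (assumed layout): map a (row, col) target position onto one of
# the four receiving sub-pixel arrays of a height x width receiving array.
def receiving_subarray(pos, height, width):
    r, c = pos
    top, left = r < height / 2, c < width / 2
    return {(True, True): "A1", (True, False): "B1",
            (False, True): "C1", (False, False): "D1"}[(top, left)]


# Step 1042: the pre-established correspondence between receiving and
# emitting sub-pixel arrays.
EMIT_FOR_RECEIVE = {"A1": "A", "B1": "B", "C1": "C", "D1": "D"}

rx = receiving_subarray((1.5, 5.0), 4, 8)  # target in the top-right quadrant
tx = EMIT_FOR_RECEIVE[rx]
```

With the target at row 1.5, column 5.0 of a 4x8 array, the lookup selects the B1 receiving sub-array and therefore the B emitting sub-array, matching the fig. 2e example.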
Step 106: turn off the sub-pixel arrays other than the first sub-pixel array, and update the first image based on the first sub-pixel array.
It should be noted that, one implementation manner of turning off other sub-pixel arrays and updating the image in step 106 may be:
Step 1061: turn off all sub-pixel arrays other than the first receiving sub-pixel array and the first emitting sub-pixel array;
step 1062: updating the first image based on the first receive subpixel array and the first transmit subpixel array.
Referring to fig. 2B and fig. 2e, assuming that the first receiving sub-pixel array is a sub-pixel array in the B1 region, and the first emitting sub-pixel array is a sub-pixel array in the B region, step 1061 and step 1062 may be specifically exemplified by:
The electronic device turns off the sub-pixel arrays of the A1, C1, and D1 regions and those of the A, C, and D regions; the B-region sub-pixel array of the emitting sensor emits light pulse signals toward the target object "F", the B1-region sub-pixel array of the receiving sensor receives the light pulse signals reflected by the target object and converts them into electrical signals, and the amplifying circuit and analog-to-digital conversion circuit in the logic circuit area update the first image containing the target object "F" shown on the left of fig. 2e.
Thus, the unnecessary sub-pixel arrays are turned off and the first image is updated based only on the necessary ones, which effectively reduces the power consumption of the TOF sensor module while still ensuring complete imaging of the target object.
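The power saving of steps 1061 and 1062 can be made concrete with a small enable-state sketch. This is illustrative only: the class, its method names, and the assumption that power scales with the fraction of active sub-arrays are mine, not the patent's.

```python
# Assumed model: the module has four receiving and four emitting sub-arrays,
# each individually switchable; power is proportional to how many are on.
class TofModule:
    RX = ("A1", "B1", "C1", "D1")
    TX = ("A", "B", "C", "D")

    def __init__(self):
        self.enabled = {zone: True for zone in self.RX + self.TX}

    def keep_only(self, rx, tx):
        """Step 1061: turn off every sub-array except the given rx/tx pair."""
        for zone in self.enabled:
            self.enabled[zone] = zone in (rx, tx)

    def active_fraction(self):
        return sum(self.enabled.values()) / len(self.enabled)


m = TofModule()
m.keep_only("B1", "B")
# Only 2 of 8 sub-arrays remain on, i.e. 25% of the array stays active.
```

Under this proportional-power assumption, keeping one receive/emit pair on out of four cuts the sub-array power draw to a quarter; the real saving depends on the module's drive electronics.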
It should be noted that, one implementation manner of the image update in step 106 may be:
step 1061': generating a second image based on the first sub-pixel array, wherein the second image is a local image containing the target object in the first image;
step 1062': updating the position of the target object in the second image;
Step 1063': if it is detected that the target object has moved out of the second image, turn on the sub-pixel arrays other than the first sub-pixel array;
step 1064': generating a third image based on the sub-pixel array list.
Referring to fig. 2B, 2e and 2f, assuming that the first receiving sub-pixel array is a sub-pixel array of the B1 region and the first emitting sub-pixel array is a sub-pixel array of the B region, steps 1061 'to 1064' may be specifically exemplified by:
The B-region sub-pixel array of the emitting sensor emits light pulse signals toward the target object "F", and the B1-region sub-pixel array of the receiving sensor receives the light pulse signals reflected by the target object, converts them into electrical signals, and generates, through the amplifying circuit and analog-to-digital conversion circuit in the logic circuit area, the second image shown on the right of fig. 2e; the second image is the partial image of the first image (left of fig. 2e) that contains the target object "F". The electronic device updates the position of the target object "F" in the second image once every preset interval; if it detects that the target object "F" has moved out of the second image, it turns the A1, C1, and D1 sub-pixel arrays and the A, C, and D sub-pixel arrays back on. The A, B, C, and D sub-pixel arrays of the emitting sensor then emit light pulse signals toward the target object "F", the A1, B1, C1, and D1 sub-pixel arrays of the receiving sensor receive the reflected light pulse signals and convert them into electrical signals, and the amplifying circuit and analog-to-digital conversion circuit of the logic circuit area regenerate a digital signal matrix (i.e., the third image containing the target object "F" shown in fig. 2f).
Based on this, when the target object is detected to have moved out of the second image, all the sub-pixel arrays in the sub-pixel array list are turned on to generate a third image containing the target object, which ensures complete imaging of the target object.
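The periodic check of steps 1062' and 1063' reduces to a bounds test on the tracked position, followed by re-enabling the full array on a miss. A sketch under assumed region bounds (the B1 bounds, function names, and return labels are invented for illustration):

```python
# Assumed: the B1 sub-array covers rows [0, 2) and columns [4, 8) of the
# full receiving array, so the second image spans exactly that region.
B1_REGION = (0, 2, 4, 8)  # (row_min, row_max, col_min, col_max), half-open


def target_in_region(pos, region):
    r, c = pos
    r0, r1, c0, c1 = region
    return r0 <= r < r1 and c0 <= c < c1


def on_position_update(pos, enable_all_subarrays):
    """Run once per preset interval on the latest target position."""
    if not target_in_region(pos, B1_REGION):
        enable_all_subarrays()       # step 1063': reopen every sub-array
        return "regenerate_full_image"  # step 1064': produce the third image
    return "keep_partial_image"
```

Here `enable_all_subarrays` stands for whatever driver call powers the remaining sub-arrays back on; only when it fires does the module pay full power again to regenerate a complete image.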
Further, after step 106, the method further includes a step of re-determining the necessary sub-pixel array, and one implementation manner of the step may be:
s1: acquiring a third target position of the target object in the third image;
s2: determining a third sub-pixel array corresponding to the third target position in the sub-pixel array list, wherein the third sub-pixel array is different from the first sub-pixel array;
s3: and closing other sub-pixel arrays except the third sub-pixel array, and updating the third image based on the third sub-pixel array.
It should be noted that, one implementation manner of S2 may be:
s21: determining a third receiving sub-pixel array in the first sub-pixel array list corresponding to the third target position;
s22: determining a third transmit sub-pixel array in the second list of sub-pixel arrays corresponding to the third receive sub-pixel array.
Referring to fig. 2f, s1, S21, S22, and S3 may be specifically exemplified by:
The electronic device determines the third target position from the coordinates of the target object "F" in the third image; it determines that the third receiving sub-pixel array the receiving sensor needs to keep on for that position is the C1 sub-pixel array and, based on the pre-established correspondence between receiving and emitting sub-pixel arrays, that the corresponding third emitting sub-pixel array is the C sub-pixel array. The electronic device turns off the A1, B1, and D1 sub-pixel arrays and the A, B, and D sub-pixel arrays; the C-region sub-pixel array of the emitting sensor emits light pulse signals toward the target object "F", the C1-region sub-pixel array of the receiving sensor receives the light pulse signals reflected by the target object and converts them into electrical signals, and the amplifying circuit and analog-to-digital conversion circuit of the logic circuit area update the third image containing the target object "F" shown in fig. 2f.
Thus, the third sub-pixel array corresponding to the target object's updated third target position is determined, the other, unnecessary sub-pixel arrays are turned off, and the third image is updated based on the necessary sub-pixel arrays, which reduces the power consumption of the TOF sensor module while ensuring complete imaging of the target object.
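Putting S1 to S3 together with the earlier steps gives a per-frame loop that switches the single active sub-array pair whenever the target changes quadrant. A compact sketch, again assuming the quadrant layout of figs. 2b and 2d and invented frame positions:

```python
# Track the target across frames of a height x width array; re-determine the
# necessary receive/emit pair whenever the target's quadrant changes.
def run_tracking(positions, height, width):
    def pair(pos):
        r, c = pos
        return {(True, True): ("A1", "A"), (True, False): ("B1", "B"),
                (False, True): ("C1", "C"), (False, False): ("D1", "D")}[
                    (r < height / 2, c < width / 2)]

    active, switch_log = None, []
    for pos in positions:
        p = pair(pos)
        if p != active:        # re-determination (S1-S3): new necessary pair
            active = p
            switch_log.append(p)
    return switch_log


# Target starts in the B1 quadrant, stays there one frame, then drifts into
# the C1 quadrant, as in the fig. 2e -> fig. 2f example.
switches = run_tracking([(1, 6), (1, 7), (3, 2)], 4, 8)
```

Only two switch events occur for the three frames: the B1/B pair at start-up and the C1/C pair after the move, so the module never powers more sub-arrays than the transition itself requires.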
Therefore, in this embodiment, the pixel array of the TOF sensor module is divided into a plurality of sub-pixel arrays in advance, and only the necessary subset of sub-pixel arrays is turned on during imaging of the target object, so the target object is still imaged while the power consumption of the TOF sensor module is effectively reduced.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and referring to fig. 3, the electronic device may specifically include an obtaining module 302, a determining module 304, and a closing module 306, where:
an obtaining module 302, configured to obtain a first target position of a target object in a first image, where the first image is generated by a pixel array of a TOF sensor module in an on state;
a determining module 304, configured to determine a first sub-pixel array corresponding to the first target location in a sub-pixel array list, where the sub-pixel array list includes a plurality of sub-pixel arrays of the pixel array;
a closing module 306, configured to close other sub-pixel arrays except the first sub-pixel array, and update the first image based on the first sub-pixel array.
Optionally, the sub-pixel array list includes: the TOF sensor module comprises a first sub-pixel array list and a second sub-pixel array list, wherein the first sub-pixel array list comprises a plurality of sub-pixel arrays obtained by dividing pixel arrays of receiving sensors of the TOF sensor module, and the second sub-pixel array list comprises a plurality of sub-pixel arrays obtained by dividing pixel arrays of transmitting sensors of the TOF sensor module;
wherein, the determining module 304 includes:
a first determining sub-module, configured to determine a first receiving sub-pixel array corresponding to the first target location in the first sub-pixel array list, where the first receiving sub-pixel array is configured to receive a light pulse signal reflected by the target object;
a second determining sub-module, configured to determine a first emitting sub-pixel array in the second sub-pixel array list, where the first emitting sub-pixel array corresponds to the first receiving sub-pixel array, and the first emitting sub-pixel array is used to emit the light pulse signal to the target object.
Optionally, the closing module 306 includes:
a turn-off sub-module, configured to turn off all sub-pixel arrays other than the first receiving sub-pixel array and the first emitting sub-pixel array;
an update sub-module for updating the first image based on the first receive sub-pixel array and the first transmit sub-pixel array.
Optionally, the shutdown module 306 includes:
a first generation sub-module, configured to generate a second image based on the first sub-pixel array, where the second image is a local image that includes the target object in the first image;
an updating sub-module for updating the position of the target object in the second image;
a turn-on sub-module, configured to turn on the sub-pixel arrays other than the first sub-pixel array if it is detected that the target object has moved out of the second image;
and the second generation submodule is used for generating a third image based on the sub-pixel array list.
Optionally, the electronic device further includes:
the second acquisition module is used for acquiring a third target position of the target object in the third image;
a second determining module, configured to determine a third sub-pixel array corresponding to the third target location in the sub-pixel array list, where the third sub-pixel array is different from the first sub-pixel array;
and the second closing module is used for closing other sub-pixel arrays except the third sub-pixel array and updating the third image based on the third sub-pixel array.
Therefore, in the embodiment, the pixel array of the TOF sensor module is divided into a plurality of sub-pixel arrays in advance, and only part of necessary sub-pixel arrays are turned on in the imaging process of the target object, so that the imaging of the target object is realized, and the power consumption of the TOF sensor module can be effectively reduced.
The device provided by this embodiment of the invention can implement each process of the method embodiments of figs. 1 to 2f; to avoid repetition, details are not described here again. Note that the components of the device are divided logically according to the functions to be realized, but the invention is not limited to this division; the components may be re-divided or combined as needed.
Fig. 4 is a schematic diagram of a hardware configuration of an electronic device implementing various embodiments of the invention.
the electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The radio frequency unit 401 is configured to acquire a first target position of a target object in a first image, where the first image is generated by a pixel array of the TOF sensor module in an on state;
a processor 410 configured to determine a first sub-pixel array in a sub-pixel array list corresponding to the first target location, the sub-pixel array list comprising a plurality of sub-pixel arrays of the pixel array;
and closing other sub-pixel arrays except the first sub-pixel array, and updating the first image based on the first sub-pixel array.
The pixel array of the TOF sensor module is divided into a plurality of sub-pixel arrays in advance, and only part of necessary sub-pixel arrays are started in the imaging processing process of the target object, so that the imaging of the target object is realized, and the power consumption of the TOF sensor module can be effectively reduced.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during a process of sending and receiving information or a call, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 410; in addition, the uplink data is transmitted to the base station. Typically, radio unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio unit 401 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 402, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the electronic apparatus 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 406, stored in the memory 409 (or another storage medium), or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 can receive sound and process it into audio data; in the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 401.
The electronic device 400 also includes at least one sensor 405, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or the backlight when the electronic apparatus 400 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or provided to the user. The display unit 406 may include a display panel 4061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 4071 with a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 410, and receives and executes commands from the processor 410. The touch panel 4071 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel. In addition to the touch panel 4071, the user input unit 407 may include other input devices 4072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, and are not described here again.
Further, the touch panel 4071 can be overlaid on the display panel 4061. When the touch panel 4071 detects a touch operation on or near it, the operation is transmitted to the processor 410 to determine the type of touch event, and the processor 410 then provides a corresponding visual output on the display panel 4061 according to that type. Although in fig. 4 the touch panel 4071 and the display panel 4061 are two independent components implementing the input and output functions of the electronic device, in some embodiments they may be integrated to implement those functions; the implementation is not limited here.
The interface unit 408 is an interface for connecting an external device to the electronic device 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data or power) from an external device and transmit it to one or more elements within the electronic device 400, or to transmit data between the electronic device 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 409 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 410 is the control center of the electronic device. It connects the various parts of the electronic device using various interfaces and lines, and performs the functions of the electronic device and processes its data by running or executing the software programs and/or modules stored in the memory 409 and calling the data stored in the memory 409, thereby monitoring the electronic device as a whole. The processor 410 may include one or more processing units; optionally, the processor 410 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 410.
The electronic device 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and optionally, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 400 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor. When executed by the processor, the computer program implements each process of the imaging processing method embodiment above and achieves the same technical effect; to avoid repetition, the details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the imaging processing method embodiment above and achieves the same technical effect; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a(n)" does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises it.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, or alternatively entirely in hardware; in most cases, the former is the better implementation. Based on this understanding, the technical solutions of the present invention, or the portions of them that contribute over the prior art, may be embodied as a software product stored on a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and including instructions that enable a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, it is not limited to those embodiments, which are illustrative rather than restrictive. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. An imaging processing method, comprising:
acquiring a first target position of a target object in a first image, wherein the first image is generated with the pixel array of the emitting sensor and the pixel array of the receiving sensor of a TOF sensor module in an on state;
determining, in a sub-pixel array list, a first sub-pixel array of the emitting sensor and the receiving sensor corresponding to the first target position, the sub-pixel array list comprising a plurality of sub-pixel arrays of the pixel array;
turning off the sub-pixel arrays other than the first sub-pixel array, and updating the first image based on the first sub-pixel array, wherein the updated first image includes a complete image of the target object,
wherein determining the first sub-pixel array of the emitting sensor and the receiving sensor in the sub-pixel array list corresponding to the first target position comprises: determining, in the sub-pixel array list, a first receiving sub-pixel array of the receiving sensor that needs to be turned on for the first target position, and determining a first emitting sub-pixel array corresponding to the first receiving sub-pixel array based on a pre-established correspondence between receiving sub-pixel arrays and emitting sub-pixel arrays;
wherein updating the first image based on the first sub-pixel array comprises: generating a second image based on the first sub-pixel array, wherein the second image is a partial image of the first image containing the target object; updating the position of the target object in the second image; if the target object is detected to have moved out of the second image, turning on the sub-pixel arrays other than the first sub-pixel array; and generating a third image based on the sub-pixel array list.
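The selection-and-gating flow recited in claim 1 can be sketched in code. This is a hypothetical illustration only: the region coordinates, the dictionary-based correspondence table, and all function and sub-array names are assumptions for exposition, not part of the claimed method.

```python
# Hypothetical sketch of the partial-imaging flow of claim 1.
# Regions, names, and the rx->tx correspondence are illustrative assumptions.

def find_sub_array(pos, sub_arrays):
    """Return the name of the sub-pixel array whose region contains pos."""
    x, y = pos
    for name, (x0, y0, x1, y1) in sub_arrays.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def select_and_gate(pos, rx_arrays, rx_to_tx):
    """Pick the receiving sub-array covering the target position, look up
    its paired emitting sub-array, and mark every other sub-array off."""
    rx = find_sub_array(pos, rx_arrays)              # first receiving sub-array
    tx = rx_to_tx[rx]                                # paired emitting sub-array
    on = {name: (name == rx) for name in rx_arrays}  # gate off the rest
    return rx, tx, on

# Toy data: an 8x8 receiving pixel array split into left/right halves.
rx_arrays = {"rx_left": (0, 0, 4, 8), "rx_right": (4, 0, 8, 8)}
rx_to_tx = {"rx_left": "tx_left", "rx_right": "tx_right"}

rx, tx, on = select_and_gate((2, 3), rx_arrays, rx_to_tx)
print(rx, tx, on)  # rx_left tx_left {'rx_left': True, 'rx_right': False}
```

Only the sub-arrays covering the target remain powered, which is where the claimed power saving comes from.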
2. The method of claim 1, wherein the sub-pixel array list comprises a first sub-pixel array list and a second sub-pixel array list, the first sub-pixel array list comprising a plurality of sub-pixel arrays obtained by dividing the pixel array of the receiving sensor of the TOF sensor module, and the second sub-pixel array list comprising a plurality of sub-pixel arrays obtained by dividing the pixel array of the emitting sensor of the TOF sensor module;
wherein determining the first sub-pixel array of the emitting sensor and the receiving sensor in the sub-pixel array list corresponding to the first target position comprises:
determining a first receiving sub-pixel array corresponding to the first target position in the first sub-pixel array list, wherein the first receiving sub-pixel array is used for receiving the light pulse signals reflected by the target object;
determining a first emitting sub-pixel array corresponding to the first receiving sub-pixel array in the second sub-pixel array list, wherein the first emitting sub-pixel array is used for emitting the light pulse signals to the target object.
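Under the assumption that the receiving and emitting pixel arrays are divided in the same way, the pre-established correspondence used in claim 2 could be built by pairing the two lists index by index. This is a sketch under that assumption; the patent does not prescribe any particular construction.

```python
# Hypothetical construction of the receive->emit correspondence of claim 2,
# assuming both pixel arrays are divided into the same grid of sub-arrays.

def build_correspondence(first_list, second_list):
    """Pair each receiving sub-array with the emitting sub-array at the
    same index, so each pair covers the same field of view."""
    if len(first_list) != len(second_list):
        raise ValueError("the two lists must divide the arrays identically")
    return dict(zip(first_list, second_list))

rx_list = ["rx_0", "rx_1", "rx_2", "rx_3"]
tx_list = ["tx_0", "tx_1", "tx_2", "tx_3"]
mapping = build_correspondence(rx_list, tx_list)
print(mapping["rx_2"])  # tx_2
```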
3. The method of claim 2, wherein turning off the sub-pixel arrays other than the first sub-pixel array and updating the first image based on the first sub-pixel array comprises:
turning off the sub-pixel arrays other than the first receiving sub-pixel array and the first emitting sub-pixel array;
updating the first image based on the first receiving sub-pixel array and the first emitting sub-pixel array.
4. The method of claim 1, further comprising, after generating the third image based on the sub-pixel array list:
acquiring a third target position of the target object in the third image;
determining a third sub-pixel array corresponding to the third target position in the sub-pixel array list, wherein the third sub-pixel array is different from the first sub-pixel array;
turning off the sub-pixel arrays other than the third sub-pixel array, and updating the third image based on the third sub-pixel array.
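Claims 1 and 4 together describe a tracking loop: image with one sub-array while the target stays inside it, fall back to full-array imaging (the third image) when it leaves, and then select a new sub-array. A hypothetical sketch of one tracking step follows; the region layout and return values are assumptions, not claim language.

```python
# Hypothetical tracking step combining claims 1 and 4: keep the active
# sub-array while the target stays inside its region; otherwise image with
# the full array ("third image") and select a new ("third") sub-array.

def contains(region, pos):
    x0, y0, x1, y1 = region
    return x0 <= pos[0] < x1 and y0 <= pos[1] < y1

def tracking_step(pos, active, regions):
    """Return (new_active, reselected) for the target position pos."""
    if contains(regions[active], pos):
        return active, False              # still covered: no change
    for name, region in regions.items():  # full-array re-selection
        if contains(region, pos):
            return name, True             # the third sub-array of claim 4
    return None, True                     # target left the field of view

# Toy 8x8 array split into four quadrants.
regions = {"q0": (0, 0, 4, 4), "q1": (4, 0, 8, 4),
           "q2": (0, 4, 4, 8), "q3": (4, 4, 8, 8)}
print(tracking_step((1, 1), "q0", regions))  # ('q0', False)
print(tracking_step((6, 5), "q0", regions))  # ('q3', True)
```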
5. An electronic device, comprising:
an acquisition module, configured to acquire a first target position of a target object in a first image, wherein the first image is generated with the pixel array of the emitting sensor and the pixel array of the receiving sensor of a TOF sensor module in an on state;
a determining module, configured to determine, in a sub-pixel array list, a first sub-pixel array of the emitting sensor and the receiving sensor corresponding to the first target position, the sub-pixel array list comprising a plurality of sub-pixel arrays of the pixel array, wherein the determining module is configured to: determine, in the sub-pixel array list, a first receiving sub-pixel array of the receiving sensor that needs to be turned on for the first target position, and determine a first emitting sub-pixel array corresponding to the first receiving sub-pixel array based on a pre-established correspondence between receiving sub-pixel arrays and emitting sub-pixel arrays;
a turn-off module, configured to turn off the sub-pixel arrays other than the first sub-pixel array and update the first image based on the first sub-pixel array, wherein the updated first image includes a complete image of the target object, and the turn-off module is specifically configured to: generate a second image based on the first sub-pixel array, wherein the second image is a partial image of the first image containing the target object; update the position of the target object in the second image; if the target object is detected to have moved out of the second image, turn on the sub-pixel arrays other than the first sub-pixel array; and generate a third image based on the sub-pixel array list.
6. The electronic device of claim 5, wherein the sub-pixel array list comprises a first sub-pixel array list and a second sub-pixel array list, the first sub-pixel array list comprising a plurality of sub-pixel arrays obtained by dividing the pixel array of the receiving sensor of the TOF sensor module, and the second sub-pixel array list comprising a plurality of sub-pixel arrays obtained by dividing the pixel array of the emitting sensor of the TOF sensor module;
the determining module includes:
a first determining sub-module, configured to determine a first receiving sub-pixel array corresponding to the first target location in the first sub-pixel array list, where the first receiving sub-pixel array is configured to receive a light pulse signal reflected by the target object;
a second determining sub-module, configured to determine a first emitting sub-pixel array in the second sub-pixel array list, where the first emitting sub-pixel array corresponds to the first receiving sub-pixel array, and the first emitting sub-pixel array is used to emit the light pulse signal to the target object.
7. The electronic device of claim 6, wherein the turn-off module comprises:
a turn-off sub-module, configured to turn off the sub-pixel arrays other than the first receiving sub-pixel array and the first emitting sub-pixel array;
an update sub-module, configured to update the first image based on the first receiving sub-pixel array and the first emitting sub-pixel array.
8. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 4.
9. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
CN201911368178.0A 2019-12-26 2019-12-26 Imaging processing method and electronic device Active CN111090104B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911368178.0A CN111090104B (en) 2019-12-26 2019-12-26 Imaging processing method and electronic device
PCT/CN2020/139208 WO2021129776A1 (en) 2019-12-26 2020-12-25 Imaging processing method, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911368178.0A CN111090104B (en) 2019-12-26 2019-12-26 Imaging processing method and electronic device

Publications (2)

Publication Number Publication Date
CN111090104A CN111090104A (en) 2020-05-01
CN111090104B true CN111090104B (en) 2022-11-11

Family

ID=70397711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911368178.0A Active CN111090104B (en) 2019-12-26 2019-12-26 Imaging processing method and electronic device

Country Status (2)

Country Link
CN (1) CN111090104B (en)
WO (1) WO2021129776A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111090104B (en) * 2019-12-26 2022-11-11 维沃移动通信有限公司 Imaging processing method and electronic device
CN113890962B (en) * 2020-07-02 2023-03-24 华为技术有限公司 Image sensor, 3D camera and control method of image sensor
CN112038361A (en) * 2020-09-08 2020-12-04 上海大芯半导体有限公司 Distance sensor pixel array structure, distance sensor and working method
CN113687386A (en) * 2021-07-07 2021-11-23 曜芯科技有限公司 Imaging system and related electronic device

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2038815B1 (en) * 2006-07-06 2012-05-16 Microsoft Corporation Method and system for fast calibration of three-dimensional (3d) sensors
US8320621B2 (en) * 2009-12-21 2012-11-27 Microsoft Corporation Depth projector system with integrated VCSEL array
EP2395369A1 (en) * 2010-06-09 2011-12-14 Thomson Licensing Time-of-flight imager.
US9350928B2 (en) * 2012-05-02 2016-05-24 Semiconductor Components Industries, Llc Image data compression using stacked-chip image sensors
US10063757B2 (en) * 2012-11-21 2018-08-28 Infineon Technologies Ag Dynamic conservation of imaging power
US20150377964A1 (en) * 2014-06-30 2015-12-31 Texas Instruments Incorporated Programmable test pattern for a pixel array
EP3391076A1 (en) * 2015-12-20 2018-10-24 Apple Inc. Light detection and ranging sensor
US10841491B2 (en) * 2016-03-16 2020-11-17 Analog Devices, Inc. Reducing power consumption for time-of-flight depth imaging
KR102470223B1 (en) * 2016-04-27 2022-11-23 주식회사 디비하이텍 Image sensor and method of sensing the same
CN105933532A (en) * 2016-06-06 2016-09-07 广东欧珀移动通信有限公司 Image processing method and device, and mobile terminal
CN107222664B (en) * 2017-05-03 2020-03-06 Oppo广东移动通信有限公司 Camera module and electronic device
US10750104B2 (en) * 2017-08-15 2020-08-18 Sony Semiconductor Solutions Corporation Solid-state imaging apparatus and driving method thereof
CN108900750B (en) * 2018-07-19 2020-08-28 维沃移动通信有限公司 Image sensor and mobile terminal
CN109031332A (en) * 2018-08-07 2018-12-18 上海炬佑智能科技有限公司 Flight time distance measuring sensor and its control method
CN109714583B (en) * 2019-01-22 2022-07-19 京东方科技集团股份有限公司 Augmented reality display method and augmented reality display system
CN109819238B (en) * 2019-02-22 2021-06-22 北京旷视科技有限公司 Working frequency adjusting method and device of TOF image acquisition module and electronic system
CN110501691B (en) * 2019-08-13 2022-03-08 Oppo广东移动通信有限公司 Noise filtering method of TOF module, TOF module and device
CN111090104B (en) * 2019-12-26 2022-11-11 维沃移动通信有限公司 Imaging processing method and electronic device

Also Published As

Publication number Publication date
CN111090104A (en) 2020-05-01
WO2021129776A1 (en) 2021-07-01

Similar Documents

Publication Publication Date Title
CN111090104B (en) Imaging processing method and electronic device
CN109067418B (en) Antenna control method and mobile terminal
US11316967B2 (en) Screen on and off state control method and mobile terminal
CN110300267B (en) Photographing method and terminal equipment
AU2020263183B2 (en) Parameter Obtaining Method and Terminal Device
CN111049510B (en) Touch key, control method and electronic equipment
CN111010718B (en) Method for quickly searching network, configuration method and electronic equipment
CN109257505B (en) Screen control method and mobile terminal
CN111246073B (en) Imaging device, method and electronic equipment
CN107730460B (en) Image processing method and mobile terminal
CN109639863B (en) Voice processing method and device
CN107704182B (en) Code scanning method and mobile terminal
CN110898424B (en) Display control method and electronic equipment
CN111399792B (en) Content sharing method and electronic equipment
CN107782250B (en) Depth information measuring method and device and mobile terminal
CN109343811B (en) Display adjustment method and terminal equipment
CN109472825B (en) Object searching method and terminal equipment
CN109117037B (en) Image processing method and terminal equipment
CN108449490B (en) Terminal control method and terminal
CN110740265B (en) Image processing method and terminal equipment
CN108955641B (en) Depth camera shooting method, depth camera shooting equipment and mobile terminal
CN108550182A (en) A kind of three-dimensional modeling method and terminal
CN109194943B (en) Image processing method and terminal equipment
CN110086916B (en) Photographing method and terminal
CN111107271B (en) Shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant