WO2021102857A1 - Image processing method, apparatus and device, and storage medium - Google Patents


Info

Publication number
WO2021102857A1
WO2021102857A1 (PCT/CN2019/121768)
Authority
WO
WIPO (PCT)
Prior art keywords
image
image data
global
processing device
partial
Prior art date
Application number
PCT/CN2019/121768
Other languages
English (en)
Chinese (zh)
Inventor
周长波
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/121768 (WO2021102857A1)
Priority to CN201980040204.9A (CN112313944A)
Publication of WO2021102857A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 SSIS architectures; Circuits associated therewith
    • H04N25/71 Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
    • H04N25/75 Circuitry for providing, modifying or processing image signals from the pixel array
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled

Definitions

  • This application relates to the field of image processing technology, and in particular to an image processing method, apparatus, device, and storage medium.
  • Image sensors have developed rapidly in recent years and are widely used in cameras, video cameras, camera-equipped mobile phones, drones, and so on. At present, when an image sensor reads out image data, one approach is a full-pixel fast readout: the resulting image is clear, but data transmission and processing pressure is high and power consumption is large. Another approach is a down-sampled readout, that is, a readout that discards some pixels: this relieves the data transmission and processing pressure and lowers the power consumption of image processing, but because insufficient raw data is read out, the resulting image is not clear enough and the image quality is difficult to satisfy requirements.
  • the present application provides an image processing method, apparatus, device, and storage medium, so as to balance the power consumption of image processing against image quality.
  • this application provides an image processing method, which includes:
  • determining key area information in the image sensor;
  • outputting, according to the key area information, the local image data corresponding to the key area in the image sensor at a first sampling rate, and outputting the global image data of the image sensor at a second sampling rate;
  • the first sampling rate is higher than the second sampling rate.
  • this application also provides an image processing method, the method including:
  • acquiring local image data and global image data, where the sampling rate of the local image data is higher than the sampling rate of the global image data;
  • generating a corresponding local image and global image according to the local image data and the global image data;
  • performing image processing on the local image and the global image to generate a target image.
  • the present application also provides an image sensor, the image sensor including a memory and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
  • the first sampling rate is higher than the second sampling rate.
  • the present application also provides an image processing device, the image processing device including a memory and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
  • Image processing is performed on the local image and the global image to generate a target image.
  • the present application also provides an image processing device, including an image sensor and an image processing device, wherein:
  • the image sensor is used to determine key area information in the image sensor
  • the image sensor is configured to output the local image data corresponding to the key area in the image sensor at a first sampling rate according to the key area information, and output the global image data of the image sensor at a second sampling rate, the The first sampling rate is higher than the second sampling rate;
  • the image sensor is used to send the global image data and the local image data to the image processing device;
  • the image processing device is used to receive the local image data and the global image data
  • the image processing device is configured to generate corresponding local images and global images according to the local image data and the global image data;
  • the image processing device is used to perform image processing on the partial image and the global image to generate a target image.
  • the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the image processing method described above .
  • the image processing method, image sensor, image processing device, image processing equipment, and computer-readable storage medium disclosed in the present application determine the key region information in the image sensor, and output the image sensor at the first sampling rate according to the key region information.
  • the local image data corresponding to the key area, and the global image data of the image sensor output at the second sampling rate, where the first sampling rate is higher than the second sampling rate, that is, the local image data is output at a high sampling rate, thereby realizing the key area
  • the clear image ensures the image quality.
  • the global image data is output at a low sampling rate, which solves the problem of heavy data transmission and processing pressure, and reduces the power consumption of image processing.
  • FIG. 1 is a schematic diagram of an image processing method in the prior art;
  • FIG. 2 is a schematic diagram of another image processing method in the prior art;
  • FIG. 3 is a schematic block diagram of an image processing device provided by an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of steps of an image processing method provided by an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of steps of another image processing method provided by an embodiment of the present application;
  • FIG. 6 is a schematic block diagram of an image sensor provided by an embodiment of the present application;
  • FIG. 7 is a schematic block diagram of an image processing device provided by an embodiment of the present application.
  • one image processing method is to add pixels (pixel interpolation).
  • the original low-resolution image is processed by adding pixels to obtain a processed high-resolution image.
  • although this kind of image processing method obtains high-resolution images with good effect, it places high requirements on the image sensor, and the processing speed of a typical image sensor struggles to meet them.
  • Another image processing method is digital magnification.
  • the original low-resolution image is digitally magnified to obtain a processed high-resolution image.
  • although this image processing method does not place such high requirements on the image sensor, the image obtained is of poor quality and not clear enough.
  • the embodiments of the present application provide an image processing method, image sensor, image processing device, image processing equipment, and computer-readable storage medium, which are used to balance the power consumption of image processing against image quality.
  • FIG. 3 is a schematic block diagram of an image processing device according to an embodiment of the application.
  • the image processing device 100 may include an image sensor 110 and an image processing device 120, and the image sensor 110 and the image processing device 120 are in communication connection.
  • the image sensor 110 can be applied to cameras, video cameras, camera-equipped mobile phones, drones, and the like. It should be noted that the embodiments of the present application are not limited to these.
  • the image processing device 120 may include an image processor. Similarly, the image processing device 120 may be applied to cameras, video cameras, camera-equipped mobile phones, drones, and the like.
  • the image sensor 110 determines the key area information in the image sensor and, according to the key area information, outputs the local image data corresponding to the key area at a first sampling rate and outputs the global image data of the image sensor at a second sampling rate, where the first sampling rate is higher than the second sampling rate; that is, local image data is output at a high sampling rate and global image data at a low sampling rate. The global image data and the local image data are then sent to the image processing device 120.
  • the first sampling rate and the second sampling rate can be flexibly set according to actual conditions, such as image quality requirements, network bandwidth and other conditions, and there is no specific limitation here.
  • the image processing device 120 obtains the local image data and the global image data sent by the image sensor 110, generates the corresponding local image and global image from them, and then performs image processing on the local image and the global image to generate the target image.
  • the image processing device 120 determines the key area of the image to be processed, acquires the key area information corresponding to the key area, and sends the key area information to the image sensor 110; the image sensor 110 receives the key area information sent by the image processing device 120.
  • the key area information includes coordinate information of pixels in the key area.
  • the image processing device 120 determining the key area of the image to be processed includes:
  • the key area is determined.
  • the image to be processed may contain multiple objects, such as people, buildings, pets, etc.
  • the image processing device 120 performs image recognition processing on the image to be processed to obtain the target object, including:
  • the target object is determined from the at least one object.
  • setting a corresponding priority for each object in advance, and determining the target object from the at least one object by the image processing apparatus 120 includes:
  • the object with the highest priority among the at least one object is determined as the target object.
  • the image processing device 120 determining the key region according to the target object includes:
  • the first area where the target object is located in the image to be processed and the adjacent area of the first area are determined as the key area.
  • the image processing device 120 determining the key region according to the target object includes:
  • the second area to which the target object is expected to move is determined, and the second area is determined as the key area.
  • the image sensor 110 uses a full-pixel output mode to output partial image data.
  • the image sensor 110 uses at least 2x down-sampling to output the global image data.
  • the image sensor 110 is configured with multiple physical links, and the global image data and the local image data are respectively sent to the image processing device 120 over different physical links.
  • the image sensor 110 asynchronously sends global image data and local image data to the image processing device 120 based on the same physical link.
  • a first label corresponding to the global image data is configured, and a second label corresponding to the local image data is configured.
  • the first label and the second label are different, and are used to identify the global image data and the local image data, respectively.
  • the image sensor 110 sends the global image data carrying the first label and the partial image data carrying the second label to the image processing device 120.
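As an illustrative sketch only (the label values below are arbitrary placeholders, not values specified by this application), the labelling scheme could be expressed in Python as:

```python
# Hypothetical label values; the application only requires that the
# two labels differ so the receiver can tell the streams apart.
GLOBAL_LABEL = 0x01
LOCAL_LABEL = 0x02

def tag_frame(label, payload):
    """Sensor side: attach an identifying label to an image-data payload."""
    return {"label": label, "payload": payload}

def identify_frame(frame):
    """Receiver side: classify an incoming frame by its label."""
    if frame["label"] == GLOBAL_LABEL:
        return "global"
    if frame["label"] == LOCAL_LABEL:
        return "local"
    raise ValueError("unknown label")
```

The receiver can then route global and local frames to the appropriate processing path without relying on their arrival order.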
  • when performing image processing, the image processing device 120 performs image enlargement processing on the global image to generate an enlarged image, and performs image fusion processing on the enlarged image and the local image to generate the target image.
  • the performing image fusion processing on the enlarged image and the partial image to generate the target image includes:
  • the partial image is merged in the key region to generate the target image.
  • a direct implementation manner is that the image processing device 120 replaces the image in the key region in the enlarged image with the partial image to generate the target image.
  • the image processing device 120 fuses the image in the key region in the enlarged image with the partial image according to corresponding weights to generate the target image.
  • a mapping relationship between pixel position and weight is preset; for example, the closer a pixel is to the center of the local image, the larger its corresponding weight.
  • the fusing the image in the key region in the enlarged image with the partial image according to corresponding weights to generate the target image includes:
  • each pixel in the partial image and the pixel in the corresponding position in the image in the key region are weighted and calculated to generate the target image.
  • the image processing method provided by the embodiments of the present application will now be introduced in detail based on the image processing equipment, the image sensor in that equipment, and the image processing device in that equipment. It should be understood that the image processing equipment in FIG. 3 does not limit the application scenarios of the image processing method.
  • FIG. 4 is a schematic flowchart of an image processing method provided by an embodiment of the present application. This method can be used in any of the image sensors provided in the foregoing embodiments, to balance the power consumption of image processing against image quality.
  • the image processing method specifically includes steps S101 to S102.
  • when outputting image data, the image sensor does not simply output the full range of image data; it first determines the key area information in the image sensor.
  • the key area information includes but is not limited to the coordinate information of the pixels of the key area.
  • the determining the key area information in the image sensor includes:
  • the image processing device determines the key area of the image to be processed, and obtains the key area information corresponding to the key area.
  • the image sensor is in communication connection with an image processing device, such as an image processor, and when performing image processing on the image to be processed, the image processing device determines the key area of the image to be processed. For example, if the image to be processed is an image of a person, the image processing device determines that the area occupied by the person in the image to be processed is a key area of the image to be processed. In addition, the image processing device obtains key area information corresponding to the determined key area, for example, obtains coordinate information of pixels in the key area. After that, the image processing device sends the acquired key area information to the related image sensor, and the image sensor receives the key area information sent by the image processing device.
  • the first sampling rate is higher than the second sampling rate.
  • when processing image data, the image sensor outputs two different streams of image data: one is the local image data corresponding to the key area in the image sensor, and the other is the global image data of the image sensor.
  • the image sensor outputs the local image data corresponding to the key area in the image sensor at a correspondingly higher sampling rate according to the determined key area information, and outputs the global image data of the image sensor at a different sampling rate.
  • the sampling rate corresponding to the local image data is referred to as the first sampling rate
  • the sampling rate corresponding to the global image data is referred to as the second sampling rate, where the first sampling rate is higher than the second sampling rate.
  • the outputting the partial image data corresponding to the key area in the image sensor at the first sampling rate includes:
  • the partial image data is output by adopting a full-pixel output mode.
  • the image sensor adopts a full-pixel output mode to output partial image data corresponding to the key area. It should be noted that in other embodiments, the image sensor may also adopt a jump point output mode to output partial image data corresponding to the key area.
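As a rough sketch of what full-pixel readout of a key area might look like (using a NumPy array as a stand-in for the pixel array, and a hypothetical (x0, y0, x1, y1) rectangle as the key area information):

```python
import numpy as np

def read_local_full_pixel(sensor, key_area):
    """Full-pixel readout of the key area: every pixel inside the
    rectangle (x0, y0, x1, y1) is output and none are skipped."""
    x0, y0, x1, y1 = key_area
    return sensor[y0:y1, x0:x1].copy()

sensor = np.arange(64).reshape(8, 8)          # stand-in 8x8 pixel array
local = read_local_full_pixel(sensor, (2, 2, 6, 6))
assert local.shape == (4, 4)                  # all 16 key-area pixels read
```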
  • the outputting the global image data of the image sensor at the second sampling rate includes:
  • the global image data is output using at least 2x down-sampling.
  • the image sensor outputs the global image data using a down-sampled output mode; optionally, at least 2x down-sampling is used, that is, when outputting global image data, at least one pixel is skipped before the next pixel is output.
  • the down-sampling factor can be set flexibly according to actual conditions, and this embodiment does not limit it. Compared with the full-pixel output mode, this reduces the data transmission and processing pressure.
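A minimal sketch of 2x down-sampled readout, again with a NumPy array standing in for the sensor:

```python
import numpy as np

def read_global_downsampled(sensor, factor=2):
    """Down-sampled global readout: after each output pixel, skip
    (factor - 1) pixels in each direction; factor=2 outputs one pixel
    in four, quartering the data to transmit and process."""
    return sensor[::factor, ::factor].copy()

sensor = np.arange(64).reshape(8, 8)
global_data = read_global_downsampled(sensor, factor=2)
assert global_data.shape == (4, 4)            # 8x8 reduced to 4x4
```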
  • the method further includes:
  • the global image data and the local image data are sent to an image processing device, so that the image processing device generates a target image according to the global image data and the local image data.
  • the image sensor sends the output global image data and local image data to the image processing device that is in communication with it. It should be noted that the image sensor can send global image data and partial image data to the image processing device in different ways.
  • the sending the global image data and the partial image data to an image processing device includes:
  • the global image data and the partial image data are respectively sent to the image processing device based on different physical links.
  • the image sensor sends the global image data and the local image data to the image processing device based on different physical links.
  • the image processing device receives the global image data and the local image data through different physical links, respectively.
  • the sending the global image data and the partial image data to an image processing device includes:
  • the global image data and the local image data are asynchronously sent to the image processing device based on the same physical link.
  • the image sensor asynchronously sends the global image data and the local image data to the image processing device based on the same physical link.
  • the image sensor can also asynchronously send global image data and local image data to the image processing device based on different physical links.
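One way to picture asynchronous transmission over a single shared link is a queue that both streams write into as frames become ready (a simplified model for illustration, not the actual link protocol):

```python
import queue
import threading

link = queue.Queue()        # stand-in for one shared physical link

def sender(frames):
    """Queue frames onto the link as soon as each is ready; global and
    local frames may interleave in any order."""
    for frame in frames:
        link.put(frame)
    link.put(None)          # end-of-stream marker

def receiver():
    """Drain the link until the end-of-stream marker arrives."""
    received = []
    while True:
        frame = link.get()
        if frame is None:
            break
        received.append(frame)
    return received

t = threading.Thread(target=sender, args=([("global", 0), ("local", 0)],))
t.start()
out = receiver()
t.join()
```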
  • before sending the global image data and the local image data to the image processing device, the method further includes: configuring a first label corresponding to the global image data, and configuring a second label corresponding to the local image data.
  • the sending the global image data and the partial image data to an image processing device includes:
  • the global image data carrying the first label and the partial image data carrying the second label are sent to the image processing device.
  • the label corresponding to the global image data is referred to as the first label
  • the label corresponding to the local image data is referred to as the second label.
  • the image sensor sends the global image data carrying the first label and the local image data carrying the second label to the image processing device.
  • when the image processing device receives the global image data carrying the first label and the local image data carrying the second label, it can identify the global image data and the local image data according to the labels.
  • after the image processing device receives the global image data and the local image data, it generates a full-range low-resolution global image and a high-resolution local image of the key area from them, and then performs image fusion processing on the global image and the local image to generate the corresponding target image.
  • the generated target image is a full-range high-resolution image with clearer key areas.
  • in the foregoing embodiment, the image sensor outputs the local image data corresponding to the key area at the first sampling rate and outputs the global image data of the image sensor at the second sampling rate, where the first sampling rate is higher than the second. That is, local image data is output at a high sampling rate, so that the key area is imaged clearly and image quality is ensured, while global image data is output at a low sampling rate, which relieves the data transmission and processing pressure and reduces the power consumption of image processing.
  • FIG. 5 is a schematic flowchart of another image processing method provided by an embodiment of the present application. This method can be used in any of the image processing apparatuses provided in the foregoing embodiments, to balance the power consumption of image processing against image quality.
  • the image processing method specifically includes steps S201 to S203.
  • the image processing device includes but is not limited to an image processor, and the image processing device is in communication connection with the image sensor.
  • when performing image processing, the image processing device does not acquire just one channel of full-range image data; it acquires two channels of image data: local image data and global image data.
  • the local image data is the image data corresponding to the key area in the image sensor
  • the global image data is the full range of image data in the image sensor
  • the sampling rate of the local image data is higher than the sampling rate of the global image data.
  • the image processing method further includes:
  • the acquiring local image data and global image data includes:
  • the image processing device determines the key area of the image to be processed. For example, if the image to be processed is an image of a person, the image processing device determines the key area of the image to be processed according to the area where the person is located in the image to be processed.
  • the determining the key area of the image to be processed includes:
  • the key area is determined.
  • to determine the key area of the image to be processed, the image processing device performs image recognition processing on the image to be processed to obtain the target object that needs to be clearly displayed, such as a person, a flower, or a pet.
  • the area occupied by the target object is determined as the key area.
  • the performing image recognition processing on the to-be-processed image to obtain a target object includes:
  • the target object is determined from the at least one object.
  • the image processing device performs image recognition processing on the image to be processed, and obtains one or more objects contained in the image to be processed. Among the one or more objects, the target object that the user needs to display clearly is determined.
  • the determining the target object from the at least one object includes:
  • the object with the highest priority among the at least one object is determined as the target object.
  • the priority of each object is preset, and the priority of the object that the user most needs to be clearly displayed is set to be the highest.
  • the priority setting can be set in advance or customized by the user. The specific method is not limited here.
  • when the image processing device acquires a single object in the image to be processed, it directly determines that object as the target object.
  • when the image processing device acquires multiple objects in the image to be processed, the object with the highest preset priority is determined as the target object.
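A minimal sketch of this priority rule (the priority table below is an invented example, not values from this application):

```python
# Hypothetical preset priorities; higher value = higher priority.
PRIORITY = {"person": 3, "pet": 2, "flower": 1}

def select_target(objects):
    """Return the recognized object with the highest preset priority."""
    return max(objects, key=lambda name: PRIORITY.get(name, 0))
```

So if `["flower", "person", "pet"]` are recognized in the image, `select_target` picks `"person"`.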
  • the determining the key area according to the target object includes:
  • the first area where the target object is located in the image to be processed and the adjacent area of the first area are determined as the key area.
  • after the target object is determined, a direct way is to determine the area where the target object is located in the image to be processed as the key area. In practical applications, however, the target object often does not stay still and may move. To ensure the accuracy of the key area, in one embodiment the first area where the target object is located in the image to be processed and the adjacent area of the first area are together determined as the key area; that is, the surrounding area of the target object is also included in the key area.
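The expansion of the key area to include the adjacent region might be sketched as follows, with the bounding box and margin given as hypothetical pixel coordinates:

```python
def expand_key_area(bbox, margin, width, height):
    """Grow the target's bounding box (x0, y0, x1, y1) by `margin` pixels
    on every side, clamped to the sensor bounds, so that small movements
    of the target stay inside the key area."""
    x0, y0, x1, y1 = bbox
    return (max(0, x0 - margin), max(0, y0 - margin),
            min(width, x1 + margin), min(height, y1 + margin))
```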
  • the determining the key area according to the target object includes:
  • the second area to which the target object is expected to move is determined, and the second area is determined as the key area.
  • the motion information of the target object is acquired, where the motion information includes, but is not limited to, the motion speed, the motion direction, and so on.
  • the image processing device determines the second area to which the target object is expected to move according to the movement information of the target object and the first area where the target object is currently located, and determines the second area as a key area.
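A simple constant-velocity sketch of this prediction (the application does not fix a particular motion model; this is one possible choice):

```python
def predict_key_area(bbox, velocity, dt):
    """Shift the current bounding box along the target's motion vector
    to estimate the second area it is expected to occupy after time dt."""
    x0, y0, x1, y1 = bbox
    vx, vy = velocity
    dx, dy = vx * dt, vy * dt
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
```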
  • the method for determining the key area is not limited to the above-listed ones, and may also include other methods, which are not limited here.
  • the image processing device determines the key area and obtains the corresponding key area information, for example the coordinate information of the pixels in the key area, and then sends the key area information to the image sensor; the image sensor receives it. According to the key area information, the image sensor outputs the local image data corresponding to the key area at a correspondingly higher first sampling rate, outputs the global image data of the image sensor at a second sampling rate lower than the first, and sends the local image data and the global image data to the image processing device. The image processing device receives the local image data and the global image data sent by the image sensor.
  • S202 According to the local image data and the global image data, generate a corresponding local image and a global image.
  • after obtaining the local image data and the global image data, the image processing device generates the corresponding local image according to the local image data, and the corresponding global image according to the global image data.
  • the global image is a full-range low-resolution image
  • the local image is a high-resolution image of a key area.
  • S203 Perform image processing on the local image and the global image to generate a target image.
  • image processing is performed on the local image and the global image to generate the corresponding target image.
  • the generated target image is an image with a clear key area and a full range of high resolution.
  • the image processing device performs image enlargement processing on the global image to generate an enlarged image corresponding to the global image, and by performing image enlargement processing on the global image, the generated enlarged image has a higher resolution than the global image. Then the generated enlarged image and the partial image are subjected to image fusion processing to generate the target image.
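As one possible enlargement method (the application does not mandate a particular interpolation), nearest-neighbour upscaling restores the down-sampled global image to the full-range resolution:

```python
import numpy as np

def enlarge(global_image, factor=2):
    """Nearest-neighbour enlargement: each pixel of the down-sampled
    global image is repeated `factor` times along both axes."""
    return np.repeat(np.repeat(global_image, factor, axis=0),
                     factor, axis=1)

img = np.array([[1, 2], [3, 4]])
assert enlarge(img).shape == (4, 4)           # back to the original extent
```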
  • a key region in the enlarged image is determined; the key region is the position in the enlarged image to which the local image maps. For example, the key region in the enlarged image is determined according to the key area information. After that, the local image is fused into the key region of the enlarged image to generate the target image.
  • the fusing the partial images in the key region to generate the target image includes:
  • the image in the key region in the enlarged image is replaced with the partial image to generate the target image.
  • the image in the key region in the enlarged image is directly replaced with a partial image to generate the target image, which is simple, quick, and very efficient.
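The replacement fusion can be sketched directly with array slicing (`key_area` again a hypothetical (x0, y0, x1, y1) rectangle in enlarged-image coordinates):

```python
import numpy as np

def fuse_by_replacement(enlarged, local, key_area):
    """Replace the key region of the enlarged global image with the
    full-resolution local image; pixels outside the key area keep the
    enlarged image's values."""
    x0, y0, x1, y1 = key_area
    out = enlarged.copy()
    out[y0:y1, x0:x1] = local
    return out

enlarged = np.zeros((8, 8))
local = np.ones((4, 4))
out = fuse_by_replacement(enlarged, local, (2, 2, 6, 6))
```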
  • the fusing the partial images in the key region to generate the target image includes:
  • the image in the key region of the enlarged image and the partial image are fused according to corresponding weights to generate the target image.
  • after the image in the key region of the enlarged image and the local image are fused according to their corresponding weights, the target image is generated, so that the boundary transition from the local image into the enlarged image is natural.
  • the mapping relationship between the pixel point position of the partial image and the corresponding weight is preset, and the pixel points at different positions in the partial image have different corresponding weights. For example, the closer the pixel point is to the center of the partial image, the greater the corresponding weight.
  • the fusing the image in the key region in the enlarged image with the partial image according to corresponding weights to generate the target image includes:
  • each pixel in the partial image and the pixel in the corresponding position in the image in the key region are weighted and calculated to generate the target image.
  • each pixel in the partial image and the pixel at the corresponding position in the image in the key region are weighted to generate the target image.
  • the closer a pixel is to the center of the local image, the greater its weight, and the farther away, the smaller its weight; as a result, the target image generated by fusing the local image with the enlarged image does not look like two images collaged together.
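A sketch of such centre-weighted blending, with a linear fall-off chosen purely for illustration (the application only requires that weights decrease away from the centre):

```python
import numpy as np

def fuse_weighted(enlarged_region, local):
    """Blend the local image into the key region of the enlarged image
    with per-pixel weights that fall off linearly with distance from
    the local image's centre: weight 1 at the centre, 0 at the corners."""
    h, w = local.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    weight = 1.0 - dist / dist.max()
    return weight * local + (1.0 - weight) * enlarged_region
```

At the centre the output equals the local image; at the region's corners it equals the enlarged image, so no hard seam appears at the fusion boundary.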
  • the fusion of the partial image and the enlarged image is not limited to the methods listed above, and may also include other methods, which are not limited here.
  • the local image data and the global image data are acquired through the image processing device, wherein the sampling rate of the local image data is higher than the sampling rate of the global image data, and the corresponding local image and global image are generated according to the local image data and the global image data.
  • Because the global image data corresponds to a low sampling rate, the pressure of data transmission and processing is reduced, and the power consumption of image processing is reduced.
  • FIG. 6 is a schematic block diagram of an image sensor provided by an embodiment of the present application.
  • the image sensor 600 includes a processor 610 and a memory 611.
  • the processor 610 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • the memory 611 may be a Flash chip, a read-only memory (ROM), an optical disk, a USB flash drive, or a removable hard disk.
  • the processor is used to run a computer program stored in the memory, and implements the following steps when executing the computer program:
  • determine key region information in the image sensor; and according to the key region information, output local image data corresponding to the key region in the image sensor at a first sampling rate and output global image data of the image sensor at a second sampling rate, where the first sampling rate is higher than the second sampling rate.
  • When the processor implements the output of the local image data corresponding to the key area in the image sensor at the first sampling rate and the output of the global image data of the image sensor at the second sampling rate, it also implements:
  • the global image data and the local image data are sent to an image processing device, so that the image processing device generates a target image according to the global image data and the local image data.
  • When the processor implements the sending of the global image data and the partial image data to the image processing device, it specifically implements:
  • the global image data and the partial image data are respectively sent to the image processing device based on different physical links.
  • When the processor implements the sending of the global image data and the partial image data to the image processing device, it specifically implements:
  • the global image data and the local image data are asynchronously sent to the image processing device based on the same physical link.
  • Before implementing the sending of the global image data and the partial image data to the image processing device, the processor further implements: adding a first label to the global image data, and adding a second label to the partial image data.
  • When the processor implements the sending of the global image data and the partial image data to the image processing device, it specifically implements:
  • the global image data carrying the first label and the partial image data carrying the second label are sent to the image processing device.
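The label-then-demultiplex scheme above can be sketched as follows. This is a hypothetical illustration: the text only requires distinct first and second labels, so the label values, the Packet structure, and the list standing in for the physical link are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical label values: the text only requires distinct first/second labels.
GLOBAL_LABEL = 0x01
PARTIAL_LABEL = 0x02

@dataclass
class Packet:
    label: int
    payload: bytes

def send_over_link(global_data: bytes, partial_data: bytes) -> list:
    """Tag both streams and interleave them on one 'physical link'
    (modeled here as a simple list of packets)."""
    return [Packet(GLOBAL_LABEL, global_data), Packet(PARTIAL_LABEL, partial_data)]

def demux(packets) -> tuple:
    """Receiver side: route each packet to the right stream by its label."""
    global_chunks, partial_chunks = [], []
    for p in packets:
        (global_chunks if p.label == GLOBAL_LABEL else partial_chunks).append(p.payload)
    return b"".join(global_chunks), b"".join(partial_chunks)
```

Because each packet carries its own label, the image processing device can reassemble the two streams even when they arrive interleaved and asynchronously on the same link.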
  • When the processor implements the determination of the key region information in the image sensor, it specifically implements:
  • the image processing device determines the key area of the image to be processed and obtains the key area information corresponding to the key area.
  • the key area information includes coordinate information of pixels in the key area.
  • When the processor implements the output of the partial image data corresponding to the key area in the image sensor at the first sampling rate, it specifically implements:
  • the partial image data is output by adopting a full-pixel output mode.
  • When the processor implements the output of the global image data of the image sensor at the second sampling rate, it specifically implements:
  • the global image data is output using a down-sampling mode of at least 2x.
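The two read-out modes above (full-pixel output for the key region, at-least-2x down-sampling for the global image) can be simulated on a captured frame. This is a minimal sketch assuming a NumPy array stands in for the sensor frame and simple pixel-skipping stands in for the down-sampling method, which the text does not specify.

```python
import numpy as np

def read_sensor(frame, key_region, downsample=2):
    """Return (local, global) read-outs from one captured frame:
    the key region (top, left, height, width) at full pixel
    resolution, and the whole frame down-sampled by `downsample`
    in each dimension."""
    top, left, height, width = key_region
    local = frame[top:top + height, left:left + width]  # full-pixel ROI
    global_img = frame[::downsample, ::downsample]      # >= 2x down-sampling
    return local, global_img
```

With 2x down-sampling the global stream carries only a quarter of the full-frame pixels, which is where the transmission and power savings described above come from.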
  • FIG. 7 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • the image processing apparatus 700 includes a processor 710 and a memory 711, and the processor 710 and the memory 711 are connected by a bus, such as an I2C (Inter-Integrated Circuit) bus.
  • the processor 710 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • the memory 711 may be a Flash chip, a read-only memory (ROM), an optical disk, a USB flash drive, or a removable hard disk.
  • the processor is used to run a computer program stored in the memory, and implements the following steps when executing the computer program:
  • acquire local image data and global image data, where the sampling rate of the local image data is higher than the sampling rate of the global image data; generate a corresponding local image and global image according to the local image data and the global image data; and perform image processing on the local image and the global image to generate a target image.
  • When the processor implements the image processing of the partial image and the global image to generate a target image, it specifically implements: performing enlargement processing on the global image to obtain an enlarged image, and performing image fusion processing on the enlarged image and the partial image to generate the target image.
  • When the processor implements the image fusion processing of the enlarged image and the partial image to generate the target image, it specifically implements:
  • the partial image is fused into the key region to generate the target image.
  • When the processor implements the fusion of the partial image in the key region to generate the target image, it specifically implements:
  • the image in the key area in the enlarged image is replaced with the partial image to generate the target image.
  • When the processor implements the fusion of the partial image in the key region to generate the target image, it specifically implements:
  • the image in the key region of the enlarged image and the partial image are fused according to corresponding weights to generate the target image.
  • When the processor implements the fusion of the image in the key region of the enlarged image with the partial image according to corresponding weights to generate the target image, it specifically implements:
  • a weighted calculation is performed on each pixel in the partial image and the pixel at the corresponding position in the image in the key region to generate the target image.
  • When the processor executes the computer program, it also implements:
  • When the processor implements the determination of the key region of the image to be processed, it specifically implements:
  • image recognition processing is performed on the image to be processed to obtain a target object, and the key area is determined according to the target object.
  • When the processor implements the image recognition processing on the image to be processed to obtain the target object, it specifically implements:
  • image recognition processing is performed on the image to be processed to obtain at least one object, and the target object is determined from the at least one object.
  • When the processor implements the determination of the target object from the at least one object, it specifically implements:
  • the object with the highest priority among the at least one object is determined as the target object.
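Selecting the highest-priority object can be sketched as below. The object representation (dicts with a category field) and the priority table are illustrative assumptions, not part of the original text.

```python
def pick_target(objects, priority):
    """Pick the object whose category has the highest priority.
    `objects` is a list of dicts with a 'category' key; `priority`
    maps category -> rank (higher = more important). Both shapes
    are illustrative assumptions."""
    return max(objects, key=lambda obj: priority.get(obj["category"], 0))
```

For example, with a priority table that ranks people above vehicles, a recognized person would be chosen as the target object over a recognized car.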
  • When the processor implements the determination of the key region according to the target object, it specifically implements:
  • the first area where the target object is located in the image to be processed and the adjacent area of the first area are determined as the key area.
  • When the processor implements the determination of the key region according to the target object, it specifically implements:
  • the second area to which the target object is expected to move is determined, and the second area is determined as the key area.
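Determining the key area as the first area where the target object is located plus its adjacent area can be sketched as a clipped bounding-box expansion. The margin parameter is an illustrative assumption; the text does not specify how large the adjacent area is.

```python
def expand_key_region(bbox, margin, img_w, img_h):
    """Grow the target object's bounding box (the first area) by `margin`
    pixels on every side to take in the adjacent area, clipped to the
    image bounds. `bbox` is (x, y, w, h); returns (x, y, w, h)."""
    x, y, w, h = bbox
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(img_w, x + w + margin), min(img_h, y + h + margin)
    return (x0, y0, x1 - x0, y1 - y0)
```

The clipping matters near the frame edge: the adjacent area simply shrinks instead of indexing outside the sensor array.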
  • The embodiments of the present application also provide a computer-readable storage medium that stores a computer program; the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the image processing method provided by the embodiments of the present application.
  • the computer-readable storage medium may be the internal storage unit of the image sensor or the image processing device described in the foregoing embodiment, for example, the hard disk or memory of the image processing device.
  • the computer-readable storage medium may also be an external storage device of the image sensor or the image processing device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the image sensor or the image processing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

Image processing method, image sensor, image processing apparatus, image processing device, and computer-readable storage medium. The method comprises: determining key region information in an image sensor; and, according to the key region information, outputting local image data corresponding to a key region in the image sensor at a first sampling rate, and outputting global image data of the image sensor at a second sampling rate, the first sampling rate being higher than the second sampling rate. The present invention reduces the power consumption of image processing while guaranteeing image quality.
PCT/CN2019/121768 2019-11-28 2019-11-28 Image processing method, apparatus and device, and storage medium WO2021102857A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/121768 WO2021102857A1 (fr) 2019-11-28 2019-11-28 Image processing method, apparatus and device, and storage medium
CN201980040204.9A CN112313944A (zh) 2019-11-28 2019-11-28 Image processing method, apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121768 WO2021102857A1 (fr) 2019-11-28 2019-11-28 Image processing method, apparatus and device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021102857A1 true WO2021102857A1 (fr) 2021-06-03

Family

ID=74336573

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121768 WO2021102857A1 (fr) 2019-11-28 2019-11-28 Image processing method, apparatus and device, and storage medium

Country Status (2)

Country Link
CN (1) CN112313944A (fr)
WO (1) WO2021102857A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761311A (zh) * 2022-11-03 2023-03-07 广东科力新材料有限公司 Method and system for analyzing performance test data of a PVC calcium-zinc stabilizer

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120288215A1 (en) * 2011-05-13 2012-11-15 Altek Corporation Image processing device and processing method thereof
CN103888689A (zh) * 2014-03-13 2014-06-25 北京智谷睿拓技术服务有限公司 Image acquisition method and image acquisition apparatus
CN103888679A (zh) * 2014-03-13 2014-06-25 北京智谷睿拓技术服务有限公司 Image acquisition method and image acquisition apparatus
CN106447677A (zh) * 2016-10-12 2017-02-22 广州视源电子科技股份有限公司 Image processing method and apparatus
CN107153519A (zh) * 2017-04-28 2017-09-12 北京七鑫易维信息技术有限公司 Image transmission method, image display method, and image processing apparatus
CN110428366A (zh) * 2019-07-26 2019-11-08 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100527792C (zh) * 2006-02-07 2009-08-12 日本胜利株式会社 Imaging method and imaging apparatus
CN104144295A (zh) * 2014-08-12 2014-11-12 北京智谷睿拓技术服务有限公司 Imaging control method and apparatus, and imaging device
CN105450924B (zh) * 2014-09-30 2019-04-12 北京智谷技术服务有限公司 Super-resolution image acquisition method and apparatus
CN107516335A (zh) * 2017-08-14 2017-12-26 歌尔股份有限公司 Virtual reality graphics rendering method and apparatus



Also Published As

Publication number Publication date
CN112313944A (zh) 2021-02-02

Similar Documents

Publication Publication Date Title
US20230037167A1 (en) Digital photographing apparatus including a plurality of optical systems for acquiring images under different conditions and method of operating the same
US10740431B2 (en) Apparatus and method of five dimensional (5D) video stabilization with camera and gyroscope fusion
US9692959B2 (en) Image processing apparatus and method
EP3037963A1 (fr) Traduction d'instruction d'hôte d'ensemble d'instructions de caméra
US20190281230A1 (en) Processor, image processing device including same, and method for image processing
US10841460B2 (en) Frame synchronization method for image data, image signal processing apparatus, and terminal
JP6936018B2 (ja) 映像送信装置および映像受信装置
CN116055857B (zh) 拍照方法及电子设备
CN102209191A (zh) 执行图像信号处理的方法及用于执行图像信号处理的装置
EP3086224A1 (fr) Activation d'un sous-système de stockage de métadonnées
KR102186383B1 (ko) 이미지를 처리하는 전자장치 및 방법
WO2021102857A1 (fr) Procédé, appareil et dispositif de traitement d'image et support de stockage
US20110211087A1 (en) Method and apparatus providing for control of a content capturing device with a requesting device to thereby capture a desired content segment
EP3096233A1 (fr) Exécution d'une instruction dans un mécanisme de transport sur la base d'une architecture "get/set"
US10701286B2 (en) Image processing device, image processing system, and non-transitory storage medium
WO2022199594A1 (fr) Procédé de réalisation d'une vidéo à distance, et dispositif associé
CN115908120A (zh) 图像处理方法和电子设备
KR102534449B1 (ko) 이미지 처리 방법, 장치, 전자 장치 및 컴퓨터 판독 가능 저장 매체
CN114945019A (zh) 数据传输方法、装置及存储介质
CN114630152A (zh) 用于图像处理器的参数传输方法、装置及存储介质
CN116029951B (zh) 图像处理方法与电子设备
US11682100B2 (en) Dynamic allocation of system of chip resources for efficient signal processing
CN116631011B (zh) 手部姿态估计方法及电子设备
WO2024067428A1 (fr) Procédé de photographie à haute résolution et haute fréquence de trames et appareil de traitement d'image
CN117082295B (zh) 图像流处理方法、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19953772

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19953772

Country of ref document: EP

Kind code of ref document: A1