WO2020237542A1 - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
WO2020237542A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
vehicle
camera
information
preset
Prior art date
Application number
PCT/CN2019/089115
Other languages
English (en)
French (fr)
Inventor
王伟刚
周国中
欧进利
曾继平
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2019/089115 priority Critical patent/WO2020237542A1/zh
Priority to CN201980070008.6A priority patent/CN112889271B/zh
Publication of WO2020237542A1 publication Critical patent/WO2020237542A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to the field of image processing technology, in particular to an image processing method and device.
  • Existing cameras can not only implement simple monitoring functions, but also implement functions such as illegal capture and moving object tracking.
  • the present application provides an image processing method and device to solve the problems of high cost and large deployment space in the prior art.
  • an image processing method is provided, which is applied to a road traffic monitoring scene including a camera.
  • the camera uses the image sensor in the camera to shoot at a first moment according to first preset image parameters (including a first exposure time) to obtain a first image, and shoots at a second moment according to second preset image parameters (including a second exposure time that is different from the first exposure time), so that the same image sensor is used to acquire a second image.
  • the camera obtains information about the vehicle (included in the first image) and information about things outside the vehicle (included in the second image) according to the acquired first image and second image.
  • the camera in this application adopts one image sensor that can obtain the first image using the first exposure time and the second image using the second exposure time, so that the target shooting object in the first image (such as a vehicle) and the target shooting object in the second image (such as things outside the vehicle) each have a reasonable exposure time and both can be clearly presented.
  • the image sensor in the camera of the present application can obtain images with different exposure times.
  • the camera can also obtain information about objects in the image (such as vehicles and things outside the vehicle).
  • one camera in this application can perform the functions of multiple cameras in the prior art, which effectively reduces costs and saves deployment space compared with the prior art.
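The single-sensor, alternating-exposure scheme described above can be sketched as follows. `PresetImageParams` and the specific exposure and gain values are illustrative assumptions; the patent only requires that the two presets differ in exposure time.

```python
from dataclasses import dataclass

@dataclass
class PresetImageParams:
    """Hypothetical container for one preset (field names are illustrative)."""
    exposure_time_ms: float  # exposure time used for this shot
    gain: float              # sensor gain used for this shot

# Two presets with different exposure times, as the method requires.
FIRST_PRESET = PresetImageParams(exposure_time_ms=2.0, gain=4.0)    # short: freeze the vehicle/plate
SECOND_PRESET = PresetImageParams(exposure_time_ms=12.0, gain=1.0)  # long: brighten things outside the vehicle

def preset_for_frame(frame_index: int) -> PresetImageParams:
    """Alternate the two presets on successive shots so that one image
    sensor yields both the first image and the second image."""
    return FIRST_PRESET if frame_index % 2 == 0 else SECOND_PRESET
```

In use, the camera would apply `preset_for_frame(0)` at the first moment and `preset_for_frame(1)` at the second moment, milliseconds apart.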
  • an image processing method uses a first image sensor in the camera to shoot at a first moment according to a first preset image parameter (including a first exposure time) to obtain a first image, and uses a second image sensor in the camera to shoot at a second moment according to a second preset image parameter (including a second exposure time that is different from the first exposure time) to obtain a second image. Subsequently, the camera obtains information about the vehicle (included in the first image) and information about things outside the vehicle (included in the second image) based on the acquired first image and second image.
  • the camera of the present application includes multiple image sensors, and different image sensors can obtain images with different exposure times, so that the target object in the first image (such as a vehicle) and the target object in the second image (such as things outside the vehicle) each have a reasonable exposure time and all can be clearly presented.
  • one camera can complete the functions of multiple cameras in the prior art, which effectively reduces costs and saves deployment space compared with the prior art.
  • the above method of "the camera obtains vehicle information and information about things outside the vehicle based on the first image and the second image" is as follows: the camera uses a first preset encoding algorithm to encode the first image to obtain the encoded first image; the camera then detects whether there is a vehicle in the encoded first image, and if there is, obtains the vehicle information; the camera uses a second preset encoding algorithm to encode the second image to obtain the encoded second image; the camera then detects whether there are things outside the vehicle in the encoded second image, and if there are, obtains the information about those things.
  • after acquiring the first image and the second image, the camera can perform encoding, image detection, and other processing on them, which effectively improves the accuracy of the vehicle information and the information about things outside the vehicle.
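The encode-then-detect flow can be sketched as a generic helper; `encode`, `detect`, and `extract_info` are stand-ins for the preset encoding algorithm and the detection steps, which the patent names but does not specify.

```python
def process_image(image, encode, detect, extract_info):
    """Encode an image, check whether the target object is present in the
    encoded result, and extract its information only if it is (else None)."""
    encoded = encode(image)
    if detect(encoded):
        return extract_info(encoded)
    return None
```

The same helper would be applied twice, once per image, e.g. `process_image(first_image, encode1, has_vehicle, vehicle_info)` and `process_image(second_image, encode2, has_things, things_info)` (all names hypothetical).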
  • the foregoing first image and the foregoing second image are obtained by a camera shooting the same shooting scene.
  • the above-mentioned method of "the camera obtains information about the vehicle and information about things outside the vehicle based on the first image and the second image" is as follows: the camera uses a preset fusion algorithm to fuse the first image and the second image to generate a third image.
  • the camera uses the third preset encoding algorithm to encode the third image to obtain the encoded third image; after obtaining the encoded third image, the camera detects whether there is a vehicle in the encoded third image And things outside the vehicle; if there are vehicles and things outside the vehicle in the encoded third image, the camera obtains information about the vehicle and things outside the vehicle.
  • the camera can fuse the first image and the second image to generate a high-quality third image. In this way, the camera performs encoding and image detection on the third image, and can accurately obtain information about the vehicle and information about things outside the vehicle.
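One simple stand-in for the preset fusion algorithm is a pixel-wise weighted average of the short- and long-exposure images; the patent does not specify the fusion algorithm, so this sketch is only an illustration of the fusion step.

```python
def fuse_exposures(short_img, long_img, w=0.5):
    """Fuse two equally sized grayscale images (nested lists of pixel
    values) by a pixel-wise weighted average; w weights the short exposure."""
    return [
        [round(w * s + (1 - w) * l) for s, l in zip(srow, lrow)]
        for srow, lrow in zip(short_img, long_img)
    ]
```

A production fusion algorithm would typically weight per pixel by local exposure quality rather than use a single global weight.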
  • the camera of the present application can shoot the same shooting scene to obtain the first image and the second image, and can also shoot different shooting scenes to obtain the first image and the second image, which is not limited in this application.
  • the camera of the present application can adopt different processing methods to acquire vehicle information and information about things outside the vehicle.
  • the first preset image parameter further includes at least one of the first frame rate, the first exposure compensation coefficient, the first gain, or the first shutter speed.
  • the above-mentioned second preset image parameter further includes at least one of a second frame rate, a second exposure compensation coefficient, a second gain, or a second shutter speed.
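The two preset parameter sets could be represented as plain mappings; every value below is an illustrative assumption, and the only hard constraint the method states is that the two exposure times differ.

```python
FIRST_PRESET = {
    "exposure_time_ms": 2.0,       # required field: the first exposure time
    "frame_rate_fps": 50,          # optional extras named in this implementation
    "exposure_compensation": -1.0,
    "gain_db": 12.0,
    "shutter_speed_s": 1 / 500,
}
SECOND_PRESET = {
    "exposure_time_ms": 12.0,      # required field: must differ from the first
    "frame_rate_fps": 25,
    "exposure_compensation": 1.0,
    "gain_db": 0.0,
    "shutter_speed_s": 1 / 80,
}

def presets_valid(p1, p2):
    """Check the one stated constraint: the two exposure times differ."""
    return p1["exposure_time_ms"] != p2["exposure_time_ms"]
```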
  • the vehicle information includes a license plate number
  • things outside the vehicle include at least one of pedestrians, animals, non-motor vehicles other than the aforementioned vehicle, or drivers of non-motor vehicles other than the aforementioned vehicle.
  • after acquiring the information of the vehicle and the information of the things outside the vehicle, the camera can display this information on the configuration interface of the camera, or send it to other devices/platforms (such as the server of a traffic violation processing center). In this way, law enforcement officers can complete corresponding processing (such as recording violations) based on the vehicle information and the information about things outside the vehicle.
  • a camera which can implement the functions in the first aspect, the second aspect, or any one of the foregoing possible implementation manners. These functions can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the camera may include an acquisition unit and a processing unit, and the acquisition unit and processing unit may perform the corresponding functions in the image processing method described in the first aspect and any one of its possible implementations.
  • the above-mentioned acquisition unit is configured to use the image sensor in the camera to capture the first image at the first moment according to the first preset image parameter, and the first preset image parameter includes the first exposure time
  • the image sensor is used for shooting at the second moment to obtain the second image.
  • the second preset image parameter includes a second exposure time, and the second exposure time is different from the first exposure time.
  • the above-mentioned processing unit is configured to obtain vehicle information and information about things outside the vehicle according to the first image and the second image acquired by the acquisition unit.
  • the first image contains the vehicle, and the second image contains the things outside the vehicle.
  • a camera which can implement the functions in the first aspect, the second aspect, or any one of the foregoing possible implementation manners. These functions can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the camera may include a collection unit and a processing unit, and the collection unit and the processing unit can perform corresponding functions in the image processing method of the first aspect and any one of its possible implementations.
  • the above-mentioned acquisition unit is configured to use the first image sensor in the camera to shoot at the first moment according to the first preset image parameter to obtain the first image, and the first preset image parameter includes the first exposure time
  • the second image sensor in the camera is used for shooting at the second moment to obtain the second image.
  • the second image sensor is different from the first image sensor, and the second preset image parameter includes a second exposure time that is different from the first exposure time.
  • the above-mentioned processing unit is configured to obtain vehicle information and information about things outside the vehicle according to the first image and the second image acquired by the acquisition unit.
  • the first image contains the vehicle, and the second image contains the things outside the vehicle.
  • the processing unit is specifically configured to: use a first preset encoding algorithm to encode the first image to obtain the encoded first image; detect whether there is a vehicle in the encoded first image, and if there is, obtain the vehicle information; use a second preset encoding algorithm to encode the second image to obtain the encoded second image; detect whether there are things outside the vehicle in the encoded second image, and if there are, obtain the information about those things.
  • the camera obtains the facial features of the pedestrian.
  • the first image and the second image are obtained by shooting the same shooting scene by the foregoing acquisition unit.
  • the aforementioned processing unit is specifically configured to: use a preset fusion algorithm to fuse the first image and the second image to generate a third image; use a third preset coding algorithm to encode the third image to obtain the encoded The third image; detect whether there are vehicles and things outside the vehicle in the encoded third image; in the case of vehicles and things outside the vehicle in the encoded third image, obtain vehicle information and information about things outside the vehicle.
  • the first preset image parameter further includes at least one of the first frame rate, the first exposure compensation coefficient, the first gain, or the first shutter speed.
  • the above-mentioned second preset image parameter further includes at least one of a second frame rate, a second exposure compensation coefficient, a second gain, or a second shutter speed.
  • the information of the vehicle includes a license plate number
  • the things outside the vehicle include at least one of pedestrians, animals, non-motor vehicles other than the vehicle, or drivers of non-motor vehicles other than the vehicle.
  • vehicle information may also include vehicle brand, body color, vehicle model, etc. If the things outside the vehicle include people, the information about the things outside the vehicle may include facial features, gender, age group, clothes color, and so on.
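The information items listed above could be grouped into simple result records; the class and field names below are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class VehicleInfo:
    license_plate: str        # the one item the method explicitly requires
    brand: str = ""           # optional extras mentioned in the description
    body_color: str = ""
    model: str = ""

@dataclass
class PedestrianInfo:
    gender: str = ""          # optional attributes for people outside the vehicle
    age_group: str = ""
    clothes_color: str = ""
    facial_features: list = field(default_factory=list)
```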
  • in a fifth aspect, a camera is provided, which has one or more processors and a memory; the memory is coupled with the one or more processors and is used to store computer program code, the computer program code including instructions.
  • when the one or more processors execute the instructions, the camera implements the image processing method described in the first aspect, the second aspect, or the foregoing various possible implementation manners.
  • the camera further includes a communication interface for executing the steps of sending and receiving data, signaling, or information in the image processing method described in the first aspect, the second aspect, or the foregoing various possible implementation manners, for example, sending information about the vehicle and information about things outside the vehicle.
  • a computer-readable storage medium is provided, which stores instructions; when the instructions run on the camera, the camera executes the image processing method described in the first aspect, the second aspect, or the foregoing various possible implementation manners.
  • a computer program product is also provided.
  • the computer program product includes instructions; when the instructions run on the camera, the camera executes the image processing method described in the first aspect, the second aspect, or the foregoing various possible implementation manners.
  • a system chip is also provided, which is applied in a camera; the camera includes at least one processor, and related instructions are executed in the at least one processor, so that the camera executes the image processing method described in the first aspect, the second aspect, or the foregoing various possible implementation manners.
  • the camera may collect images in real time to obtain the first image and the second image, or may collect images when it determines that the vehicle speed exceeds a preset value (i.e., capture images when the vehicle is speeding) to obtain the first image and the second image.
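The speed-triggered capture mode amounts to a simple predicate; the 60 km/h limit below is an illustrative assumption, since the patent only says the speed exceeds a preset value.

```python
def should_capture(vehicle_speed_kmh: float, speed_limit_kmh: float = 60.0) -> bool:
    """Trigger image capture only when the measured vehicle speed
    exceeds the preset value (i.e., the vehicle is speeding)."""
    return vehicle_speed_kmh > speed_limit_kmh
```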
  • the camera of the present application can be applied to road traffic scenes such as intersections and community gates.
  • a camera can capture vehicles and things outside the vehicle with high image clarity. After the camera performs image detection, it can more accurately obtain the information of the vehicle and the information of things outside the vehicle. This has a powerful deterrent effect on pedestrians or drivers who do not abide by traffic rules, and improves the safety of pedestrians and other things outside the vehicle. For law enforcement officers, more accurate vehicle information and information about things outside the vehicle help in detecting cases.
  • the aforementioned vehicles and things outside the vehicle can also be replaced with other objects, which are not limited in this application.
  • the type of object obtained by the camera mainly depends on the application scenario.
  • the above-mentioned computer instructions may be stored in whole or in part on the first computer storage medium.
  • the first computer storage medium may be packaged with the processor of the camera or separately packaged with the processor of the camera.
  • the application is not limited.
  • Figure 1 is a schematic diagram of a camera in an embodiment of the present invention.
  • Figure 2 is a schematic diagram of the deployment of cameras in practical applications
  • Figure 3 is a schematic diagram of the hardware structure of the camera in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram 1 of the flow of an image processing method in an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a configuration interface of the first preset image parameter in an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a configuration interface of a second preset image parameter in an embodiment of the present invention.
  • FIG. 7 is a second schematic diagram of the flow of the image processing method in the embodiment of the present invention.
  • FIG. 8 is a third schematic flowchart of an image processing method in an embodiment of the present invention.
  • FIG. 9 is a fourth schematic flowchart of an image processing method in an embodiment of the present invention.
  • Fig. 10 is a schematic structural diagram of a camera in an embodiment of the present invention.
  • words such as "exemplary" or "for example" are used to represent examples, illustrations, or explanations. Any embodiment or design solution described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as more preferable or advantageous than other embodiments or design solutions. Rather, words such as "exemplary" or "for example" are used to present related concepts in a specific manner.
  • the embodiment of the present invention uses the same camera to shoot two (or more than two) images with different image parameters, and the target object in each image has a good shooting effect.
  • the camera captures the first image and the second image.
  • the camera also uses different image parameters to detect the captured images to obtain information about the target subject.
  • the camera detects the first image and the second image to obtain information about the vehicle and information about things outside the vehicle.
  • in the event of a traffic accident such as a collision or scrape between a vehicle and a pedestrian, the camera can clearly capture both the injured person and the vehicle that caused the accident.
  • the camera may shoot the same shooting scene to obtain two (or more) images, or shoot different shooting scenes to obtain two (or more) images.
  • the camera can further merge the two (or more) images into one image. Since the two (or more) images are captured by the camera in the same shooting scene, fusing them into one image can further improve the clarity of the image and ensure the integrity of the image.
  • the embodiment of the present invention uses only one camera, which facilitates the comparison of the two images and the completion of image fusion.
  • multiple cameras need to be used for shooting. Since the physical positions of multiple cameras are usually different, even if multiple cameras are adjusted to the same shooting angle and image ratio, the shooting scene cannot be the same.
  • the camera can shoot images with different image parameters within a short time (such as 50 milliseconds, 100 milliseconds, or other durations), so the camera can be considered to have shot the different objects at essentially the same time.
  • the embodiments of the present invention provide an image processing method and device.
  • the camera acquires a first image and a second image with different exposure times according to preset image parameters. After that, the camera obtains information about the vehicle (included in the first image) and information about things outside the vehicle (included in the second image) based on the first image and the second image.
  • one camera in the embodiment of the present invention can not only obtain images with different exposure times, but also obtain the information of the target shooting object in the image, which can fulfill the functions of multiple cameras in the prior art. Compared with the existing technology, the cost is effectively reduced and the deployment space is saved.
  • the embodiment of the present invention only takes the image parameter including the exposure time as an example for description, and does not limit the image parameter.
  • the image parameters may include other parameters (for example, aperture, ISO, white balance, exposure compensation, etc.) or a combination of multiple parameters.
  • the exposure time of the first image is less than the exposure time of the second image.
  • the camera uses an image sensor to obtain images.
  • the camera in the embodiment of the present invention may include at least one image sensor.
  • the camera may use the same image sensor to obtain the first image and the second image with different exposure times, or use different image sensors to obtain the first image and the second image with different exposure times.
  • the camera uses an image sensor to shoot to obtain the first image and the second image.
  • the image sensor is used for shooting at the first time to obtain the first image, and the same image sensor is used for shooting at the second time to obtain the second image.
  • the time difference between the first moment and the second moment is less than the preset duration, and the preset duration is on the order of milliseconds.
  • the preset duration may be 50 milliseconds.
  • the preset duration may also be 10 milliseconds, or 50 milliseconds, or 100 milliseconds, 200 milliseconds, 500 milliseconds, etc., which is not limited in the embodiment of the present invention.
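The millisecond-order constraint on the two shooting moments can be checked as follows; 50 ms is one of the example preset durations listed above.

```python
def moments_close_enough(t1_ms: float, t2_ms: float, preset_ms: float = 50.0) -> bool:
    """Check the capture-timing constraint: the time difference between
    the first moment and the second moment is less than the preset duration."""
    return abs(t1_ms - t2_ms) < preset_ms
```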
  • the camera uses different image sensors to obtain the first image and the second image respectively.
  • the first image sensor is used for shooting at the first time to obtain the first image
  • the second image sensor is used for shooting at the second time to obtain the second image.
  • the first moment and the second moment may be the same or different.
  • the time difference between the first time and the second time may be less than the preset time length, which is on the order of milliseconds.
  • the preset duration may be 50 milliseconds.
  • the preset duration may also be 10 milliseconds, or 50 milliseconds, or 100 milliseconds, 200 milliseconds, 500 milliseconds, etc., which is not limited in the embodiment of the present invention.
  • the image processing method provided by the embodiment of the present invention is applied to road traffic monitoring scenes, such as monitoring scenes at intersections, monitoring scenes at cell gates, and the like.
  • FIG. 3 shows a schematic diagram of a hardware structure of a camera in an embodiment of the present invention.
  • the camera may include a processor 30, a memory 31, a universal serial bus (USB) interface 32, a charging management module 33, a power management module 34, a battery 35, a sensor module 36, and buttons 37.
  • the sensor module 36 may include an image sensor 36A, a distance sensor 36B, a proximity light sensor 36C, a temperature sensor 36D, an ambient light sensor 36E, and so on.
  • the camera may include 1 or N image sensors 36A, and N is a positive integer greater than 1.
  • the camera further includes a display screen 310, a peripheral interface 311, and the like.
  • the processor 30 may include one or more processing units.
  • the processor 30 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can be the nerve center and command center of the camera.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 30 may be used to collect the digital image signal sent by the image sensor 36A and perform statistics on the collected image data; it may also be used to adjust various parameters of the image sensor 36A (such as exposure time and gain) according to the statistical results or user settings, to achieve the image effect required by the algorithm or the customer; it may also be used to select the correct image processing parameters for images taken under different environmental conditions, to guarantee image quality and provide a basis for the object-recognition system; and it may also be used to crop the original image input by the image sensor 36A to output the image resolution required by other users.
  • the processor 30 can process digital image signals sent by the same image sensor 36A, or can process digital image signals sent by different image sensors 36A.
  • the processor 30 processes the digital image signal sent by the image sensor 36A.
  • a memory may also be provided in the processor 30 to store instructions and data.
  • the memory in the processor 30 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 30. If the processor 30 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 30 is reduced, and the efficiency of the system is improved.
  • the processor 30 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, an Ethernet interface, and/or a universal serial bus (USB) interface.
  • the memory 31 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 30 executes various functional applications and data processing of the camera by running instructions stored in the memory 31. For example, in the embodiment of the present invention, the processor 30 may obtain information about the vehicle and information about things outside the vehicle based on the first image and the second image by executing instructions stored in the memory 31.
  • the memory 31 may include a program storage area and a data storage area.
  • the storage program area can store an operating system, an application program (such as an image processing function, etc.) required by at least one function.
  • the data storage area can store data created and generated during the use of the camera (such as vehicle information, information on things outside the vehicle), etc.
  • the memory 31 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the charging management module 33 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 33 may receive the charging input of the wired charger through the USB interface 32. In some embodiments of wireless charging, the charging management module 33 may receive the wireless charging input through the wireless charging coil of the camera.
  • the charging management module 33 charges the battery 35, it can also supply power to the camera through the power management module 34.
  • the power management module 34 is used to connect the battery 35, the charging management module 33 and the processor 30.
  • the power management module 34 receives input from the battery 35 and/or the charging management module 33, and supplies power to the processor 30, the memory 31, the camera 38, the display screen 310, and the like.
  • the power management module 34 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 34 may also be provided in the processor 30.
  • the power management module 34 and the charging management module 33 may also be provided in the same device.
  • the distance sensor 36B is used to measure distance.
  • the camera can measure distance by infrared or laser. In some embodiments, when shooting a scene, the camera can use the distance sensor 36B to measure the distance to achieve fast focus.
  • the proximity light sensor 36C may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the camera emits infrared light through the light-emitting diode.
  • the camera uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the camera. When insufficient reflected light is detected, the camera can determine that there is no object near the camera.
  • the temperature sensor 36D is used to detect temperature.
  • the camera uses the temperature detected by the temperature sensor 36D to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 36D exceeds a threshold, the camera reduces the performance of a processor located near the temperature sensor 36D in order to reduce power consumption and implement thermal protection.
  • the camera when the temperature is lower than another threshold, the camera heats the battery 35 to avoid abnormal shutdown of the camera due to low temperature. In some other embodiments, when the temperature is lower than another threshold, the camera boosts the output voltage of the battery 35 to avoid abnormal shutdown caused by low temperature.
  • the ambient light sensor 36E is used to sense the brightness of the ambient light.
  • the camera can adaptively adjust the brightness of the display screen 310 according to the perceived brightness of the ambient light.
  • the ambient light sensor 36E can also be used to automatically adjust the white balance when taking pictures.
  • the button 37 includes a power button and the like.
  • the button 37 may be a mechanical button or a touch button.
  • the camera can receive key input and generate key signal input related to user settings and function control of the camera.
  • the camera 38 is used to capture still images or videos.
  • an optical image of an object is generated through the camera 38 and projected onto the image sensor 36A.
  • the image sensor 36A may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the image sensor 36A converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to be converted into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the camera may include 1 or N cameras 38, and N is a positive integer greater than 1. Generally, there is a one-to-one correspondence between the camera and the image sensor. Exemplarily, if the camera includes N cameras 38 in the embodiment of the present invention, the camera includes N image sensors 36A.
  • the camera realizes the display function through the GPU, the display 310, and the application processor.
  • the GPU is an image processing microprocessor, which is connected to the display screen 310 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 30 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 310 is used to display images, videos, etc.
  • the display screen 310 includes a display panel.
  • the display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a quantum dot light-emitting diode (QLED), or the like.
  • the camera may include 1 or N display screens 310, and N is a positive integer greater than 1.
  • the display screen 310 may be used to display the first image and the second image, or to display vehicles and things outside the vehicle.
  • the camera can realize the shooting function through ISP, camera 38, video codec, GPU, display 310 and application processor.
  • the ISP is used to process the data fed back by the camera 38. For example, when a photo is taken, the shutter is opened and light is transmitted through the lens to the photosensitive element of the camera, where the optical signal is converted into an electrical signal; the photosensitive element transfers the electrical signal to the ISP for processing, and the ISP converts it into an image visible to the naked eye.
  • the ISP can also optimize the noise, brightness, and skin color of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene.
  • the ISP may be provided in the camera 38.
  • the network interface 39 is mainly used for uploading recognition and analysis results and for sending images and data streams. In addition, the network interface receives the configuration parameters of the system and transmits them to the processor 30.
  • the peripheral interface 311 can be connected to external devices such as a target object detector, a red light signal detector, a radar, an ETC antenna, etc., to ensure the scalability of the system.
  • Both the above-mentioned network interface 39 and peripheral interface 311 may be referred to as communication interfaces.
  • the device structure shown in FIG. 3 does not constitute a specific limitation on the camera.
  • the camera may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the image processing method provided by the embodiment of the present invention will be described below in conjunction with the camera shown in FIG. 3. Among them, the camera mentioned in the following method embodiments may have the components shown in FIG. 3, and will not be repeated.
  • the cameras in the embodiments of the present invention may use the same image sensor to obtain images with different exposure times, or use different image sensors to obtain images with different exposure times. Now, the situation where the camera uses the same image sensor to obtain images with different exposure times will be explained.
  • FIG. 4 is a schematic flowchart of an image processing method provided by an embodiment of the present invention. As shown in FIG. 4, the image processing method provided by the embodiment of the present invention includes:
  • the camera uses the image sensor in the camera to shoot at the first moment according to the first preset image parameter, so as to obtain the first image.
  • the first preset image parameter includes the first exposure time.
  • the exposure time can reflect how much light enters the camera during the photo or video process. In general, the longer the exposure time, the more light enters the camera. Long exposure time is suitable for scenes with poor light conditions, on the contrary, short exposure time is suitable for scenes with better light conditions.
  • the first preset image parameter may be a system default parameter, or may be preset by the user according to requirements, which is not limited in the embodiment of the present invention.
  • the first preset image parameter further includes at least one of a first frame rate, a first exposure compensation coefficient, a first gain, or a first shutter speed.
  • the first preset image parameters may also include related parameters such as backlight and white balance, which will not be listed here.
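The preset image parameters listed above (an exposure time plus optional frame rate, exposure compensation coefficient, gain, and shutter speed) can be modeled as a simple record. The sketch below is one illustrative way to do so; the field names and example values are assumptions, not taken from the patent.

```python
# Hedged sketch: a record type for the "preset image parameters" described
# above. Field names and example values are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PresetImageParams:
    exposure_time_ms: float                       # mandatory exposure time
    frame_rate: Optional[float] = None            # optional parameters
    exposure_compensation: Optional[float] = None
    gain_db: Optional[float] = None
    shutter_speed_s: Optional[float] = None

# e.g. a long-exposure preset suited to pedestrians (poor-light detail)
# and a short-exposure preset suited to moving vehicles (license plates)
pedestrian_preset = PresetImageParams(exposure_time_ms=40.0, gain_db=6.0)
vehicle_preset = PresetImageParams(exposure_time_ms=2.0, gain_db=0.0)
```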
  • FIG. 5 shows a configuration interface of image parameters in the camera.
  • the interface shows, for example, the exposure compensation coefficient for acquiring the first image (that is, the first exposure compensation coefficient), the shutter speed for acquiring the first image (that is, the first shutter speed), and the gain for acquiring the first image (that is, the first gain).
  • the user can click the corresponding button to modify each parameter shown in Figure 5 according to actual needs.
  • the camera uses an image sensor to shoot at the second moment according to the second preset image parameter to obtain a second image.
  • the second preset image parameter includes a second exposure time.
  • the second exposure time is different from the first exposure time.
  • the second preset image parameter may be a system default parameter, or may be preset by the user according to requirements, which is not limited in the embodiment of the present invention.
  • the second preset image parameter further includes at least one of a second frame rate, a second exposure compensation coefficient, a second gain, or a second shutter speed.
  • the second preset image parameters may also include related parameters such as backlight and white balance.
  • FIG. 6 shows a configuration interface of image parameters in the camera.
  • the interface shows, for example, the exposure compensation coefficient for acquiring the second image (that is, the second exposure compensation coefficient), the shutter speed for acquiring the second image (that is, the second shutter speed), and the gain for acquiring the second image (that is, the second gain).
  • the user can modify each parameter shown in FIG. 6 according to actual needs.
  • because the camera uses the same image sensor to obtain the first image and the second image, and the image parameters used to obtain the two images are different, the camera needs to obtain the first image and the second image at different moments.
  • the time difference between the first moment and the second moment is less than a preset duration, and the preset duration is on the order of milliseconds.
  • for example, the preset duration is 10 milliseconds, 50 milliseconds, or 100 milliseconds. In this case, the camera can be considered to have shot different subjects in the same shooting scene at essentially the same time.
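The timing constraint above reduces to checking that the two capture moments differ by less than a millisecond-order preset duration. A minimal sketch, with the default duration assumed for illustration:

```python
# Illustrative check of the timing constraint described above: the first and
# second capture moments must differ by less than the preset duration.
def within_preset_duration(t1_ms: float, t2_ms: float,
                           preset_ms: float = 10.0) -> bool:
    """True if the two capture moments can be treated as the same instant."""
    return abs(t2_ms - t1_ms) < preset_ms
```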
  • the camera in the embodiment of the present invention can acquire the image of the violating vehicle and the image of things outside the vehicle within the preset duration, and the acquired images have high definition. This facilitates subsequent acquisition of the license plate number and the information of things outside the vehicle (such as the facial features of pedestrians), and provides favorable evidence for the violation processing center when notifying the violating vehicle's owner.
  • the camera can obtain clear images of the vehicle and the images of things outside the vehicle, which can also provide certain assistance to the public security organs and other relevant units in detecting cases.
  • the camera in the embodiment of the present invention can collect images in real time to obtain the first image and the second image, or can collect images when it determines that the speed of a vehicle exceeds a preset value (that is, collect images when the vehicle is speeding). The embodiment of the present invention does not limit this.
  • the camera in the embodiment of the present invention may shoot the same shooting scene to obtain the first image and the second image, or may shoot different shooting scenes to obtain the first image and the second image. The embodiment of the present invention does not limit this.
  • for example, when the camera's shooting angle is A, the camera can acquire the vehicle image and the pedestrian image within 10 milliseconds; alternatively, when the camera's shooting angle is A at one moment, the camera acquires the vehicle image, and when the camera's shooting angle is B at another moment, the camera acquires the pedestrian image.
  • the camera in the embodiment of the present invention may first execute S400 and then execute S401, or may first execute S401 and then execute S400, which is not limited in the embodiment of the present invention.
  • the camera obtains information about the vehicle and information about things outside the vehicle according to the first image and the second image.
  • the vehicle is included in the first image, and things outside the vehicle are included in the second image.
  • the things outside the vehicle include at least one of pedestrians, animals, non-motor vehicles other than vehicles, or drivers of non-motor vehicles other than vehicles.
  • things outside the vehicle may also include other objects that run at a slower speed or are in a static state, such as tall buildings, traffic warning signs, etc., which are not limited in the embodiment of the present invention.
  • the information about the things outside the vehicle may include facial features, gender, age group, clothes color, etc., which are not limited in the embodiment of the present invention.
  • the information of the vehicle in the embodiment of the present invention includes the license plate number.
  • the vehicle information may also include vehicle brand, body color, vehicle model, etc.
  • the camera in the embodiment of the present invention can adopt the following implementation I and implementation II to obtain vehicle information and information about things outside the vehicle.
  • Implementation I: the camera uses the first preset encoding algorithm to encode the first image to obtain an encoded first image, and performs image detection on the encoded first image; if a vehicle exists in the encoded first image, the camera obtains the information of the vehicle. In addition, the camera uses the second preset encoding algorithm to encode the second image to obtain an encoded second image, and then detects whether things outside the vehicle exist in the encoded second image; if things outside the vehicle exist in the encoded second image, the camera obtains the information of the things outside the vehicle.
  • the first preset encoding algorithm and the second preset encoding algorithm can be any image encoding algorithm in the prior art, for example, a predictive encoding algorithm, a transform encoding algorithm, or a quantization encoding algorithm, and details are not repeated here.
  • the camera recognizes the characteristics of the vehicle to obtain the information of the vehicle.
  • the camera recognizes the characteristics of the things outside the vehicle to obtain information about the things outside the vehicle.
  • the camera in the embodiment of the present invention also needs to detect whether there is a vehicle and whether there are things outside the vehicle according to corresponding detection parameters. Further, the camera recognizes the information of the vehicle and the information of things outside the vehicle based on the detection parameters. Wherein, the detection parameters are defaulted by the system or set by the user according to actual needs, which is not limited in the embodiment of the present invention.
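The encode-detect-extract flow of Implementation I can be sketched schematically as follows. The encoder, detector, and extractor here are placeholders, since the patent allows any prior-art encoding and detection algorithm; this is not the patented implementation itself.

```python
# Schematic sketch of Implementation I: encode an image, detect the target
# object in the encoded image, and extract its information if present.
# encode/detect/extract_info are placeholder callables.
def process_image(image, encode, detect, extract_info):
    """Generic encode -> detect -> extract pipeline, applied separately to
    the first image (vehicle) and the second image (things outside it)."""
    encoded = encode(image)
    target = detect(encoded)        # e.g. a vehicle, or a pedestrian
    if target is None:
        return None                 # nothing detected in this image
    return extract_info(target)     # e.g. license plate, facial features

# Usage with dummy stand-ins for the real algorithms:
detected = process_image("first_image",
                         encode=lambda img: img,
                         detect=lambda enc: {"plate": "ABC123"},
                         extract_info=lambda t: t["plate"])
missed = process_image("second_image",
                       encode=lambda img: img,
                       detect=lambda enc: None,
                       extract_info=lambda t: t)
```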
  • the information about things outside the vehicle may include face position information (faceRect), face feature point information, and face posture information.
  • the human face posture information may include a human face pitch angle (pitch), an in-plane rotation angle (roll), and a human face yaw degree (ie, a left-right rotation angle, yaw).
  • human face yaw refers to the left-right rotation angle of the user's face orientation relative to the line connecting the camera and the user's head.
  • the camera may provide an interface (such as a Face Detector interface), and the interface may receive the second image taken by the camera. Then, the processor of the camera can encode the second image and perform face detection on it to obtain the above-mentioned features of the face. Finally, the camera can return a detection result (a JSON object), that is, the features of the aforementioned face.
  • an image (such as the first image) may include one or more human faces.
  • the camera can assign different IDs for the one or more faces to identify the faces.
  • “Height”: 1795 indicates that the height of the face (that is, the face area where the face is located in the first image) is 1795 pixels.
  • “Left”: 761 indicates that the distance between the face and the left boundary of the first image is 761 pixels.
  • “Top”: 1033 indicates that the distance between the face and the upper boundary of the first image is 1033 pixels.
  • “Width”: 1496 means that the width of the face is 1496 pixels.
  • “Pitch”: -2.9191732 indicates that the face pitch angle of the face whose face ID is 0 is -2.9191732°.
  • “Roll”: 2.732926 indicates that the in-plane rotation angle of the face whose face ID is 0 is 2.732926°.
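The fields listed above can be read out of the returned JSON object. The sketch below assumes a surrounding structure (a top-level `"faces"` array with a `"FaceID"` key) that is not specified in the text; only the field names and example values ("Height", "Left", "Top", "Width", "Pitch", "Roll") come from the description above.

```python
# Hedged sketch: parsing the face-detection JSON result described above.
# The "faces"/"FaceID" wrapper structure is an assumption for illustration.
import json

result_json = '''{"faces": [{"FaceID": 0,
                             "Height": 1795, "Left": 761,
                             "Top": 1033, "Width": 1496,
                             "Pitch": -2.9191732, "Roll": 2.732926}]}'''

result = json.loads(result_json)
for face in result["faces"]:
    # face rectangle in pixels, measured from the image's top-left corner
    rect = (face["Left"], face["Top"], face["Width"], face["Height"])
    # face posture angles in degrees
    pose = (face["Pitch"], face["Roll"])
```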
  • the camera can also determine whether a person's eyes are open. For example, when performing face detection, the camera determines whether the person's iris information is collected; if the iris information is collected, the camera determines that the person's eyes are open; if the iris information is not collected, the camera determines that the person's eyes are not open. Certainly, other conventional technologies can also be used to detect whether the eyes are open.
  • for the method by which the camera detects whether the encoded second image includes a person, reference may be made to the specific methods of detecting human faces in the conventional technology, and the embodiments of the present invention will not repeat them one by one.
  • after the camera obtains the first image and the second image, it performs encoding and image detection on the first image and the second image separately, which effectively improves the processing efficiency of the camera; performing image detection on the two images separately also effectively guarantees the accuracy of the information of the vehicle and the information of the things outside the vehicle.
  • Implementation II: if the camera shoots the same shooting scene to obtain the first image and the second image, the camera uses a preset fusion algorithm to fuse the first image and the second image to generate a third image. The camera then uses a third preset encoding algorithm to encode the third image to obtain an encoded third image. After that, the camera detects whether a vehicle and things outside the vehicle exist in the encoded third image; if a vehicle and things outside the vehicle exist in the encoded third image, the camera obtains the information of the vehicle and the information of the things outside the vehicle.
  • the camera can use a preset fusion algorithm to fuse the first image and the second image into a third image that meets the configuration requirements.
  • the preset fusion algorithm may be any image fusion algorithm in the prior art, for example, a DSP fusion algorithm, an optimal stitching algorithm, etc., which will not be repeated here.
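Since the patent allows any prior-art fusion algorithm, the following stand-in uses a simple per-pixel weighted average of the short- and long-exposure images to illustrate the idea; a real system would use a DSP fusion or optimal-stitching algorithm instead.

```python
# Illustrative stand-in for the preset fusion algorithm: a per-pixel
# weighted blend of two equally sized grayscale images (flat pixel lists).
def fuse(first_image, second_image, weight=0.5):
    """Blend two images pixel by pixel; weight applies to the first image."""
    assert len(first_image) == len(second_image), "images must match in size"
    return [p1 * weight + p2 * (1.0 - weight)
            for p1, p2 in zip(first_image, second_image)]

# e.g. fusing a dark short-exposure pixel with a bright long-exposure pixel
fused = fuse([0.0, 100.0], [100.0, 0.0])
```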
  • after the camera fuses the first image and the second image, it encodes the fused image (that is, the third image) and performs image detection on it. For the method by which the camera encodes the third image and performs image detection, reference may be made to the description of the method by which the camera encodes the first image and performs image detection, and details are not repeated here.
  • it can be seen that one camera in the embodiment of the present invention can obtain images with different exposure times using one image sensor, and can subsequently obtain the information of the vehicle and the information of the things outside the vehicle based on the acquired images, thereby completing the functions of multiple cameras in the prior art. Compared with the prior art, the solutions provided by the embodiments of the present invention effectively reduce costs and save deployment space.
  • the camera can also determine the information of the violating vehicle and the information of the violating person according to related algorithms (such as a preset algorithm for determining violating vehicles).
  • the camera can also send the information of the vehicle and the information of the things outside the vehicle to a platform (or server) connected to the camera's network, so that the administrator of the platform (or server) can view the information of the vehicle and the information of the things outside the vehicle.
  • the image processing method provided by the embodiment of the present invention may further include S701 after S402.
  • the camera sends information about the vehicle and/or information about things outside the vehicle to a platform (or server) connected to the camera network.
  • the administrator can readjust the first preset image parameters and the detection parameters referred to in obtaining the vehicle information. Subsequently, the camera can obtain the first image and the vehicle information according to the re-adjusted parameters.
  • the administrator can readjust the second preset image parameters and the detection parameters referenced by obtaining the information of the things outside the vehicle. Subsequently, the camera can obtain a second image and information about things outside the vehicle according to the re-adjusted parameters.
  • the image processing method provided by the embodiment of the present invention may further include S702 and S703.
  • the camera receives an adjustment instruction sent by a platform (or server) connected to the camera network.
  • the adjustment instruction is used to adjust at least one of the first preset image parameter, the second preset image parameter, the first detection parameter (the parameter referenced in obtaining the information of the vehicle), or the second detection parameter (the parameter referenced in obtaining the information of things outside the vehicle).
  • the camera adjusts corresponding parameters according to the adjustment instructions, and obtains images, information about the vehicle, and information about things outside the vehicle according to the adjusted parameters.
  • the camera re-executes S400 to S402 according to the adjusted parameters.
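The adjustment flow above can be sketched as follows. The instruction format (a mapping naming which of the four parameter groups to update) is an assumption for illustration; the patent does not specify a wire format.

```python
# Hedged sketch of applying an adjustment instruction from the platform:
# only the four adjustable parameter groups named in the text are updated.
ADJUSTABLE = {"first_preset", "second_preset",
              "first_detection", "second_detection"}

def apply_adjustment(params: dict, instruction: dict) -> dict:
    """Return a copy of params with the groups named in the instruction
    overwritten; unknown keys in the instruction are ignored."""
    updated = dict(params)
    for key, value in instruction.items():
        if key in ADJUSTABLE:
            updated[key] = value
    return updated

# e.g. the platform adjusts only the first preset image parameters
new_params = apply_adjustment({"first_preset": {"exposure_ms": 2.0}},
                              {"first_preset": {"exposure_ms": 4.0},
                               "unrelated": 1})
```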
  • the image processing method provided by the embodiments of the present invention can not only effectively reduce costs and save deployment space, but can also adjust parameters in real time according to the needs of the administrator to obtain image and object information that meets the needs of the administrator.
  • the camera in the embodiment of the present invention may also use different image sensors to obtain images with different exposure times. This situation will now be explained.
  • FIG. 8 is a schematic flowchart of another image processing method provided by an embodiment of the present invention. As shown in FIG. 8, the image processing method provided by the embodiment of the present invention includes:
  • the camera uses the first image sensor in the camera to shoot at the first moment according to the first preset image parameters to obtain the first image.
  • the camera uses the second image sensor in the camera to shoot at the second moment according to the second preset image parameters to obtain a second image.
  • the camera in the embodiment of the present invention can perform S800 first and then S801, or perform S801 first and then S800, or perform S800 and S801 at the same time. The embodiment of the present invention does not limit this.
  • the camera obtains information about the vehicle and information about things outside the vehicle according to the first image and the second image.
  • it can be seen that one camera in the embodiment of the present invention can obtain images with different exposure times. Subsequently, the camera can obtain the information of the vehicle and the information of the things outside the vehicle based on the acquired images, thereby completing the functions of multiple cameras in the prior art. Compared with the prior art, the solutions provided by the embodiments of the present invention effectively reduce costs and save deployment space.
  • the camera can also send the information of the vehicle and the information of the things outside the vehicle to the platform (or server) connected to the camera network, so that the administrator can view the information of the vehicle. And information about things outside the car.
  • the image processing method provided by the embodiment of the present invention may further include S901 after S802.
  • the camera sends information about the vehicle and/or information about things outside the vehicle to a platform (or server) connected to the camera network.
  • the administrator can readjust the first preset image parameters and the detection parameters referred to in obtaining the vehicle information. Subsequently, the camera can obtain the first image and the vehicle information according to the re-adjusted parameters.
  • the administrator can readjust the second preset image parameters and the detection parameters referenced by obtaining the information of the things outside the vehicle. Subsequently, the camera can obtain a second image and information about things outside the vehicle according to the re-adjusted parameters.
  • the image processing method provided by the embodiment of the present invention may further include S902 and S903.
  • the camera receives an adjustment instruction sent by a platform (or server) connected to the camera network.
  • the adjustment instruction is used to adjust at least one of the first preset image parameter, the second preset image parameter, the first detection parameter (the parameter referenced in obtaining the information of the vehicle), or the second detection parameter (the parameter referenced in obtaining the information of things outside the vehicle).
  • the camera adjusts corresponding parameters according to the adjustment instructions, and obtains images, information about the vehicle, and information about things outside the vehicle according to the adjusted parameters.
  • the image processing method provided by the embodiments of the present invention can not only effectively reduce costs and save deployment space, but can also adjust parameters in real time according to the needs of the administrator to obtain image and object information that meets the needs of the administrator.
  • the embodiment of the present invention may divide the above-mentioned service nodes and the like into functional modules according to the above-mentioned method examples.
  • each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. It should be noted that the division of modules in the embodiment of the present invention is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 10 it is a schematic structural diagram of a camera provided by an embodiment of the present invention.
  • the camera 100 shown in FIG. 10 can be applied to a road traffic monitoring scene.
  • the camera 100 can be used to execute the steps performed by the camera in any of the image processing methods provided above.
  • the camera 100 may include: an acquisition unit 1001 and a processing unit 1002. Wherein, the acquisition unit 1001 is used to acquire the first image and the second image.
  • the processing unit 1002 is used to obtain information about the vehicle and information about things outside the vehicle. Exemplarily, the collection unit 1001 may be used to execute S400, S401, S800, and S801. The processing unit 1002 may be used to execute S402, S802, S703, and S903.
  • the camera further includes a sending unit 1003 and a receiving unit 1004.
  • the sending unit 1003 is used to send information about the vehicle and information about things outside the vehicle.
  • the receiving unit 1004 is configured to receive an adjustment instruction.
  • the sending unit 1003 may be used to execute S701 and S901.
  • the receiving unit 1004 may be used to perform S702 and S902.
  • the receiving unit 1004 and the sending unit 1003 in the camera 100 may correspond to the network interface 39 or the peripheral interface 311 in FIG. 3, the processing unit 1002 may correspond to the processor 30 in FIG. 3, and the acquisition unit 1001 may correspond to the image sensor 36A in FIG. 3.
  • another embodiment of the present invention also provides a computer-readable storage medium that stores instructions; when the instructions are run on a camera, the camera executes the steps of the method flow shown in the foregoing method embodiments.
  • in another embodiment, a computer program product is further provided. The computer program product includes computer-executable instructions stored in a computer-readable storage medium; at least one processor of the camera can read the computer-executable instructions from the computer-readable storage medium, and the at least one processor executes the computer-executable instructions so that the camera performs the steps performed by the camera in the method flow shown in the foregoing method embodiments.
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by a software program, they may appear in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer program instructions When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present invention are generated in whole or in part.
  • the computer can be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • Computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions can be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the modules or units is only a logical function division; there may be other division methods in actual implementation. For example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate parts may or may not be physically separate.
  • the parts displayed as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solutions of the embodiments of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present invention.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (read-only memory, ROM), random access memory (random access memory, RAM), magnetic disk or optical disk and other media that can store program code .

Abstract

一种图像处理方法及装置,涉及图像处理技术领域,能够解决成本高、空间占用较大的问题。摄像机根据第一预设图像参数在第一时刻采用该摄像机中的图像传感器进行拍摄,以获取第一图像,并根据第二预设图像参数在第二时刻采用所述图像传感器进行拍摄,以获取第二图像。后续,摄像机根据获取到的第一图像和第二图像,获得车辆的信息和车外事物的信息。其中,第一预设图像参数包括第一曝光时间,第二预设图像参数包括第二曝光时间,第二曝光时间与第一曝光时间不同,第一图像中存在车辆,第二图像中存在车外事物。

Description

一种图像处理方法及装置

技术领域
本发明涉及图像处理技术领域,尤其涉及一种图像处理方法及装置。
背景技术
现有的摄像机(图1所示的现实生活中各种类型的摄像机)既可以实现简单的监控功能,也可以实现违章抓拍、运动物体跟踪等功能。
在对局部信息要求不高的场景下,如对行人进行人脸识别,摄像机需要处于快门速度慢、曝光时间较长的状态,这样,摄像机拍摄的图像整体画面的亮度较高,清晰度较好。相反,在对局部信息要求较高的场景下,如识别行驶中车辆的车辆信息(如车牌),摄像机需要处于快门速度快、曝光时间较短的状态,这样,摄像机拍摄的图像清晰度较好。可以看出,不同场景对摄像机的要求不同。为此,现实生活中存在多种类型的摄像机,如:人脸摄像机和微卡摄像机,人脸摄像机抓拍人脸的效果较好,微卡摄像机抓拍车牌的效果好。
现实生活中,为了监控同一区域中不同类型的对象,经常会将多个摄像机安装到同一路杆上,如图2所示。可以看出,监控不同类型的对象所需要的摄像机较多,导致成本较高,所需空间也较大。
发明内容
本申请提供一种图像处理方法及装置,用于解决成本高、空间占用较大的问题。
为达到上述目的,本申请实施例采用如下技术方案:
第一方面,提供一种图像处理方法,该图像处理方法应用于包括摄像机的道路交通监控场景。具体的,摄像机根据第一预设图像参数(包括第一曝光时间)在第一时刻采用该摄像机中的图像传感器进行拍摄,以获取第一图像,并根据第二预设图像参数(包括与第一曝光时间不同的第二曝光时间)在第二时刻进行拍摄,以采用所述图像传感器获取第二图像。后续,摄像机根据获取到的第一图像和第二图像,获得车辆(包括在第一图像中)的信息和车外事物(包括在第二图像中)的信息。
可以看出,本申请中的摄像机采用图像传感器既可以采用第一曝光时间获取到第一图像,又可以采用第二曝光时间获取到第二图像,使得第一图像中的目标拍摄对象(如车辆)和第二图像中的目标拍摄对象(如车外事物)分别能够具备合理的曝光时间,从而都能够清晰呈现。本申请的摄像机中的图像传感器能够获取到不同曝光时间的图像。此外,摄像机还能够获得图像中对象(如车辆、车外事物)的信息。综上,本申请中的一台摄像机即可完成现有技术中多台摄像机的功能,相比于现有技术,有效地减少了成本,节省了部署空间。
第二方面,提供一种图像处理方法,摄像机根据第一预设图像参数(包括第一曝光时间)在第一时刻采用该摄像机中的第一图像传感器进行拍摄,以获取第一图像,并根据第二预设图像参数(包括与第一曝光时间不同的第二曝光时间)在第二时刻采用该摄像机中的第二图像传感器进行拍摄,以获取第二图像。后续,摄像机根据获取到的第一图像和第二图像,获得车辆(包括在第一图像中)的信息和车外事物(包括在第二图像中)的信息。
本申请的摄像机包括多个图像传感器,不同图像传感器能够获取到不同曝光时间的图像,使得第一图像中的目标拍摄对象(如车辆)和第二图像中的目标拍摄对象(如车外事物)分别能够具备合理的曝光时间,从而都能够清晰呈现。如此,一台摄像机可完成现有技术中多台摄像机的功能,相比于现有技术,有效地减少了成本,节省了部署空间。
在上述第一方面或第二方面的一种可能的实现方式中,上述“摄像机根据第一图像和第二图像,获得车辆的信息和车外事物的信息”的方法为:摄像机采用第一预设编码算法,对第一图像进行编码,得到编码后的第一图像,之后,摄像机检测编码后的第一图像中是否存在车辆;若编码后的第一图像中存在车辆,则摄像机获得车辆的信息;摄像机采用第二预设编码算法,对第二图像进行编码,得到编码后的第二图像,之后,摄像机检测编码后的第二图像中是否存在车外事物;若编码后的第二图像中存在车外事物,则摄像机获得车外事物的信息。
在获取到第一图像和第二图像后,摄像机可以对获取到的图像分别进行编码、图像检测等处理,有效地提高了获取到车辆的信息和车外事物的信息的准确性。
在上述第一方面或第二方面的另一种可能的实现方式中,上述第一图像和上述第二图像为摄像机对相同拍摄场景进行拍摄得到的。相应的,上述“摄像机根据第一图像和第二图像,获得车辆的信息和车外事物的信息”的方法为:摄像机采用预设的融合算法,将第一图像和第二图像融合,生成第三图像,后续,摄像机采用第三预设编码算法,对第三图像进行编码,得到编码后的第三图像;在得到编码后的第三图像后,摄像机检测编码后的第三图像是否存在车辆和车外事物;若编码后的第三图像存在车辆和车外事物,则摄像机获得车辆的信息和车外事物的信息。
为了提高图像的质量,以及图像中信息的利用率,在获取到同一拍摄场景的第一图像和第二图像后,摄像机可以将第一图像和第二图像融合,生成高质量的第三图像。这样,摄像机对第三图像进行编码、图像检测等处理,即可准确地获取到车辆的信息和车外事物的信息。
本申请的摄像机可以对相同拍摄场景进行拍摄,以获取第一图像和第二图像,也可以对不同拍摄场景进行拍摄,以获取第一图像和第二图像,本申请不作限定。
本申请的摄像机在获取到第一图像和第二图像后,可以采用不同的处理方式获取车辆的信息和车外事物的信息。
在上述第一方面或第二方面的另一种可能的实现方式中,上述第一预设图像参数还包括第一帧率、第一曝光补偿系数、第一增益或第一快门速度中的至少一个;上述第二预设图像参数还包括第二帧率、第二曝光补偿系数、第二增益或第二快门速度中的至少一个。
在上述第一方面或第二方面的另一种可能的实现方式中,车辆的信息包括车牌号,车外事物包括行人,动物,上述车辆之外的非机动车,或者上述车辆之外的非机动车的驾驶员中的至少一个。
在上述第一方面或第二方面的另一种可能的实现方式中,摄像机在获取到车辆的信息和车外事物的信息后,可以在该摄像机的配置界面显示车辆的信息和车外事物的信息,或者向其他设备/平台(例如交通违法处理中心的服务器)发送车辆的信息和车外事物的信息。这样,执法人员可根据车辆的信息和车外事物的信息完成相应处理(例如记录违章)。
第三方面,提供一种摄像机,该摄像机能够实现第一方面、第二方面或者上述任意一种可能的实现方式中的功能。这些功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块。
该摄像机可以包括采集单元和处理单元,该采集单元和处理单元可以执行上述第一方面及其任意一种可能的实现方式所述的图像处理方法中的相应功能。例如:上述采集单元,用于根据第一预设图像参数,在第一时刻采用摄像机中的图像传感器进行拍摄,以获取第一图像,该第一预设图像参数包括第一曝光时间,以及用于根据第二预设图像参数,在第二时刻采用图像传感器进行拍摄,以获取第二图像,该第二预设图像参数包括第二曝光时间,第二曝光时间与第一曝光时间不同。上述处理单元,用于根据上述采集单元获取到的第一图像和第二图像,获得车辆的信息和车外事物的信息,第一图像中存在车辆,第二图像中存在车外事物。
第四方面,提供一种摄像机,该摄像机能够实现第一方面、第二方面或者上述任意一种可能的实现方式中的功能。这些功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块。
该摄像机可以包括采集单元和处理单元,该采集单元和处理单元可以执行上述第一方面及其任意一种可能的实现方式的图像处理方法中的相应功能。例如:上述采集单元,用于根据第一预设图像参数,在第一时刻采用摄像机中的第一图像传感器进行拍摄,以获取第一图像,该第一预设图像参数包括第一曝光时间,以及根据第二预设图像参数,在第二时刻采用摄像机中的第二图像传感器进行拍摄,以获取第二图像,第二图像传感器与第一图像传感器不同,该第二预设图像参数包括第二曝光时间,第二曝光时间与第一曝光时间不同。上述处理单元,用于根据上述采集单元获取到的第一图像和第二图像,获得车辆的信息和车外事物的信息,第一图像中存在车辆,第二图像中存在车外事物。
在上述第三方面或第四方面的一种可能的实现方式中,上述处理单元具体用于:采用第一预设编码算法,对第一图像进行编码,得到编码后的第一图像;检测编码后的第一图像中是否存在车辆;在编码后的第一图像中存在车辆的情况下,获得车辆的信息;采用第二预设编码算法,对第二图像进行编码,得到编码后的第二图像;检测编码后的第二图像中是否存在车外事物;在编码后的第二图像中存在车外事物的情况下,获得车外事物的信息。
示例性的,若编码后的第二图像中存在车外事物,且车外事物包括行人,则摄像机获得该行人的人脸特征。
在上述第三方面或第四方面的另一种可能的实现方式中,第一图像和所述第二图像为上述采集单元对相同拍摄场景进行拍摄得到的。相应的,上述处理单元具体用于:采用预设的融合算法,将第一图像和第二图像融合,生成第三图像;采用第三预设编码算法,对第三图像进行编码,得到编码后的第三图像;检测编码后的第三图像是否存在车辆和车外事物;在编码后的第三图像中存在车辆和车外事物的情况下,获得车辆的信息和车外事物的信息。
在上述第三方面或第四方面的另一种可能的实现方式中,上述第一预设图像参数还包括第一帧率、第一曝光补偿系数、第一增益或第一快门速度中的至少一个;上述第二预设图像参数还包括第二帧率、第二曝光补偿系数、第二增益或第二快门速度中的至少一个。
在实际应用中,摄像机在获取图像的过程中通常需要参考大量的参数,如曝光时间、帧率、快门速度、增益等。
在上述第三方面或第四方面的另一种可能的实现方式中,上述车辆的信息包括车牌号,上述车外事物包括行人,动物,上述车辆之外的非机动车,或者上述车辆之外的非机动车的驾驶员中的至少一个。
当然,车辆的信息也可以包括车辆品牌、车身颜色、车辆型号等。若车外事物包括人,则车外事物的信息可以包括人脸特征、性别、年龄段、衣服颜色等。
第五方面,提供一种摄像机,该摄像机包括一个或多个处理器,以及存储器;所述存储器与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,该计算机程序代码包括指令。当所述一个或多个处理器执行所述指令时,所述摄像机执行如上述第一方面、第二方面或上述各种可能的实现方式所述的图像处理方法。
可选的,该摄像机还包括通信接口,该通信接口用于执行上述第一方面、第二方面或上述各种可能的实现方式所述的图像处理方法中收发数据、信令或信息的步骤,例如,发送车辆的信息和车外事物的信息。
第六方面,还提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令;当指令在摄像机上运行时,摄像机执行如上述第一方面、第二方面或上述各种可能的实现方式所述的图像处理方法。
第七方面,还提供一种计算机程序产品,该计算机程序产品包括指令,当指令在摄像机上运行时,摄像机执行如上述第一方面、第二方面或上述各种可能的实现方式所述的图像处理方法。
第八方面,还提供一种系统芯片,该系统芯片应用在摄像机中,所述摄像机包括至少一个处理器,涉及的指令在所述至少一个处理器中执行,以使得摄像机执行如上述第一方面、第二方面或上述各种可能的实现方式所述的图像处理方法。
可选的,在上述任一方面或者上述任意一种可能的实现方式中,摄像机可以实时采集图像,以获取第一图像和第二图像,也可以在确定车辆的车速超过预设值时采集图像(即在车辆超速的情况下采集图像),以获取第一图像和第二图像,还可以在确定车辆的运动轨迹符合预设曲线时采集图像(如车辆行驶过程中压实线的违章行为),以获取第一图像和第二图像,本申请对此不作限定。
本申请的摄像机可以应用于十字路口、小区门口等道路交通场景,一台摄像机可以抓拍到车辆以及车外事物,且图像清晰度较高。摄像机在进行图像检测后,可以较为准确地获取到车辆的信息和车外事物的信息。对于不遵守交通规则的行人或驾驶员有强大的威慑作用,提高了行人或者车外事物的安全性。对于执法人员而言,提供较为准确的车辆的信息和车外事物的信息,有助于案件的侦破。
当然,上述车辆和车外事物也可以替换为其他对象,本申请对此不作限定。摄像机获得的对象的类型主要取决于应用场景。
需要说明的是,上述计算机指令可以全部或者部分存储在第一计算机存储介质上,其中,第一计算机存储介质可以与摄像机的处理器封装在一起的,也可以与摄像机的处理器单独封装,本申请对此不作限定。
本申请中第三方面、第四方面、第五方面、第六方面、第七方面、第八方面及其各种实现方式的描述,可以参考第一方面、第二方面或各种实现方式中的详细描述;并且,第三方面、第四方面、第五方面、第六方面、第七方面、第八方面及其各种实现方式的有益效果,可以参考第一方面、第二方面或各种实现方式中的有益效果分析,此处不再赘述。
在本申请中,上述摄像机的名字对设备或功能模块本身不构成限定,在实际实现中,这些设备或功能模块可以以其他名称出现。只要各个设备或功能模块的功能和本申请类似,属于本申请权利要求及其等同技术的范围之内。
本申请的这些方面或其他方面在以下的描述中会更加简明易懂。
附图说明
图1为本发明实施例中摄像机的示意图;
图2为实际应用中摄像机的部署示意图;
图3为本发明实施例中摄像机的硬件结构示意图;
图4为本发明实施例中图像处理方法的流程示意图一;
图5为本发明实施例中第一预设图像参数的配置界面示意图;
图6为本发明实施例中第二预设图像参数的配置界面示意图;
图7为本发明实施例中图像处理方法的流程示意图二;
图8为本发明实施例中图像处理方法的流程示意图三;
图9为本发明实施例中图像处理方法的流程示意图四;
图10为本发明实施例中摄像机的结构示意图。
具体实施方式
本发明实施例的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”和“第四”等是用于区别不同目标,而不是用于限定特定顺序。
在本发明实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本发明实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
摄像机在拍摄图像时,对于不同的拍摄对象往往需要使用不同的图像参数,这样,拍摄出的图像才能获得良好的拍摄效果。例如:高速行驶中的汽车往往需要短曝光(高快门速度)以免图像模糊,而公路边缓慢行走的行人可以使用长曝光(低快门速度)以获得更多的图像细节。再例如:被不同色温的光源所照射的对象,需要使用不同的白平衡。此外,光线黯淡时,使用较高感光度(ISO)值拍摄得到的图像效果更好,而光线明亮时使用较低的ISO值拍摄得到的图像效果更佳。
现有技术中,为了拍摄不同的对象,不得不使用不同的摄像机。例如:在拍摄同一个场景时,拍摄行驶中的车辆使用短曝光的摄像机,而拍摄行人使用长曝光的摄像机,造成了管理的复杂和成本的浪费。
为此,本发明实施例使用同一个摄像机拍摄出不同图像参数的两张(或者两张以上)图像,每张图像中的目标对象都有良好的拍摄效果。例如,摄像机拍摄出第一图像和第二图像。此外,摄像机还使用不同的图像参数,分别对拍摄出的图像进行检测,以获得目标拍摄对象的信息。例如,摄像机对第一图像和第二图像进行检测,以获得车辆的信息和车外事物的信息。
示例性的,在发生车辆与行人碰撞或者刮擦等交通事故时,摄像机既可以拍摄到清晰的受伤人员,也可以拍摄到清晰的肇事车辆。
可选的,摄像机可以对同一拍摄场景进行拍摄,以获取两张(或者多张)图像,也可以对不同拍摄场景进行拍摄,以获取两张(或者多张)图像。
在摄像机对同一拍摄场景进行拍摄,以获取两张(或者多张)图像的场景中,摄像机还可以进一步将两张(或者多张)图像融合到一个图像中。由于两张(或者多张)图像是摄像机对同一拍摄场景进行拍摄得到的,因此,将两张(或者多张)图像融合到一个图像中还能够进一步提高图像的清晰度,保证图像的完整性。
对于摄像机对同一拍摄场景进行拍摄,以获取两张(或者多张)图像的场景而言,本发明实施例仅使用了一个摄像机,便于实现两张图像的对照,以及完成图像的融合。而在现有技术中,对于不同的拍摄对象而言,需要使用多个摄像机进行拍摄。由于多个摄像机之间的物理位置通常会存在差异,即使把多个摄像机调整成相同的拍摄角度和图像比例,拍摄场景也无法做到相同。
可选的,摄像机可以是在很短时间内(如50毫秒、100毫秒或其他)拍摄出不同图像参数的图像。也可以认为,摄像机在同一时刻完成了对不同拍摄对象的拍摄。
示例性的,本发明实施例提供一种图像处理方法及装置,摄像机根据预设的图像参数获取到曝光时间不同的第一图像和第二图像。之后,该摄像机根据第一图像和第二图像,获得车辆(包括在第一图像中)的信息和车外事物(包括在第二图像中)的信息。
也就是说,本发明实施例中的一台摄像机既能够获取到不同曝光时间的图像,又能够获得图像中目标拍摄对象的信息,可完成现有技术中多台摄像机的功能。相比于现有技术,有效地减少了成本,节省了部署空间。
需要说明的是,本发明实施例只是以图像参数包括曝光时间为例进行说明,并不是对图像参数的限定。在其他实施例中,图像参数可以包括其他参数(例如光圈、ISO、白平衡、曝光补偿等)或者多个参数的组合。
在一种示例中,第一图像的曝光时间小于第二图像的曝光时间。
一般情况下,摄像机采用图像传感器获取图像。本发明实施例中的摄像机可以包括至少一个图像传感器。摄像机可以采用同一图像传感器获取曝光时间不同的第一图像和第二图像,也可以采用不同图像传感器获取曝光时间不同的第一图像和第二图像。
在一种实现方式中,摄像机采用一个图像传感器进行拍摄,以获取到第一图像和第二图像。例如:在第一时刻采用图像传感器进行拍摄,以获取第一图像,在第二时 刻采用同一图像传感器进行拍摄,以获取第二图像。
第一时刻与第二时刻之间的时间差小于预设时长,该预设时长为毫秒级别,例如,当前产品设计中,该预设时长可以是50毫秒。当然,在其他实施例中,该预设时长也可以为10毫秒、100毫秒、200毫秒、500毫秒等,本发明实施例对此不作限定。
在另一种实现方式中,摄像机采用不同图像传感器分别获取第一图像和第二图像。例如:在第一时刻采用第一图像传感器进行拍摄,以获取第一图像,在第二时刻采用第二图像传感器进行拍摄,以获取第二图像。
在摄像机采用不同图像传感器分别获取第一图像和第二图像的场景中,第一时刻与第二时刻可以相同,也可以不同。若第一时刻与第二时刻不同,则第一时刻与第二时刻之间的时间差可以小于预设时长,该预设时长为毫秒级别。例如,当前产品设计中,该预设时长可以是50毫秒。当然,在其他实施例中,该预设时长也可以为10毫秒、100毫秒、200毫秒、500毫秒等,本发明实施例对此不作限定。
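无论采用单个还是多个图像传感器,上述采集过程都可以理解为按预设规则向图像传感器交替或分别下发两组预设图像参数。以下为一个示意性的Python片段,演示单传感器按帧交替使用两组参数的最简调度方式(其中的参数名称、取值与调度规则均为说明用的假设,并非本申请限定的实现):

```python
# 示意:单个图像传感器按帧交替使用两组预设图像参数
# (取值仅为示例,对应图5/图6中曝光补偿、快门速度、增益等配置项)
PRESET_1 = {"shutter": 1 / 250, "gain": 50, "exposure_compensation": 50}  # 拍摄车辆:短曝光
PRESET_2 = {"shutter": 1 / 100, "gain": 50, "exposure_compensation": 50}  # 拍摄行人:长曝光

def preset_for_frame(frame_index):
    """偶数帧使用第一预设图像参数,奇数帧使用第二预设图像参数。"""
    return PRESET_1 if frame_index % 2 == 0 else PRESET_2
```

实际产品中,这样的调度还需保证相邻两帧(即第一时刻与第二时刻)之间的时间差小于上文所述的预设时长(如50毫秒)。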
本发明实施例提供的图像处理方法应用于道路交通监控场景,如十字路口的监控场景、小区门口的监控场景等。
为了便于理解,现在对本发明实施例中的摄像机的结构进行描述。
在一种示例中,图3示出了本发明实施例中摄像机的一种硬件结构示意图。如图3所示,该摄像机可以包括处理器30,存储器31,通用串行总线(universal serial bus,USB)接口32,充电管理模块33,电源管理模块34,电池35,传感器模块36,按键37,摄像头38、网络接口39等。其中,传感器模块36可以包括图像传感器36A、距离传感器36B、接近光传感器36C,温度传感器36D,环境光传感器36E等。可选的,摄像机可以包括1个或N个图像传感器36A,N为大于1的正整数。
可选的,摄像机还包括显示屏310、外设接口311等。
处理器30可以包括一个或多个处理单元,例如:处理器30可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是摄像机的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
示例性的,处理器30可以用于采集图像传感器36A发出的数字图像信号,并对采集的图像数据进行统计;还可以用于根据统计结果或者用户设置,调整图像传感器36A的各种参数,以达到算法或客户要求的图像效果,如调整图像传感器的曝光时间、增益等参数;还可以用于为不同环境条件下所拍摄的图像选择正确的图像处理参数,确保图像质量,为识别对象的系统提供保证;还可以用于对图像传感器36A输入的原始图像进行剪裁,以输出其他用户要求的图像分辨率。
可选的,若摄像机包括多个图像传感器36A,处理器30可以对同一图像传感器36A发出的数字图像信号进行处理,也可以对不同图像传感器36A发出的数字图像信号进行处理。
当然,在摄像机包括一个图像传感器36A的场景中,处理器30对该图像传感器36A发出的数字图像信号进行处理。
作为一种实施例,处理器30中还可以设置存储器,用于存储指令和数据。
一种可能的实现方式中,处理器30中的存储器为高速缓冲存储器。该存储器可以保存处理器30刚用过或循环使用的指令或数据。如果处理器30需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器30的等待时间,因而提高了系统的效率。
作为一种实施例,处理器30可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,以太网接口,和/或通用串行总线(universal serial bus,USB)接口等。
存储器31可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器30通过运行存储在存储器31的指令,从而执行摄像机的各种功能应用以及数据处理。例如,在本发明实施例中,处理器30可以通过执行存储在存储器31中的指令,根据第一图像和第二图像,获得车辆的信息和车外事物的信息。
存储器31可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如图像处理功能等)等。存储数据区可存储摄像机使用过程中所创建、生成的数据(比如车辆的信息、车外事物的信息)等。
此外,存储器31可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
充电管理模块33用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。
在一些有线充电的实施例中,充电管理模块33可以通过USB接口32接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块33可以通过摄像机的无线充电线圈接收无线充电输入。
充电管理模块33为电池35充电的同时,还可以通过电源管理模块34为摄像机供电。
电源管理模块34用于连接电池35,充电管理模块33以及处理器30。
电源管理模块34接收电池35和/或充电管理模块33的输入,为处理器30,存储器31,摄像头38,显示屏310等供电。电源管理模块34还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块34也可以设置于处理器30中。在另一些实施例中,电源管理模块34和充电管理模块33也可以设置于同一个器件中。
距离传感器36B,用于测量距离。摄像机可以通过红外或激光测量距离。在一些实施例中,在拍摄场景时,摄像机可以利用距离传感器36B测距以实现快速对焦。
接近光传感器36C可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。摄像机通过发光二极管向外发射红外光。摄像机使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定摄像机附近有物体。当检测到不充分的反射光时,摄像机可以确定摄像机附近没有物体。
温度传感器36D用于检测温度。
在一些实施例中,摄像机利用温度传感器36D检测的温度,执行温度处理策略。例如,当温度传感器36D上报的温度超过阈值,摄像机降低位于温度传感器36D附近的处理器的性能,以便降低功耗、实施热保护。
在另一些实施例中,当温度低于另一阈值时,摄像机对电池35加热,以避免低温导致摄像机异常关机。在其他一些实施例中,当温度低于又一阈值时,摄像机对电池35的输出电压执行升压,以避免低温导致的异常关机。
环境光传感器36E用于感知环境光亮度。摄像机可以根据感知的环境光亮度自适应调节显示屏310亮度。环境光传感器36E也可用于拍照时自动调节白平衡。
按键37包括开机键等。按键37可以是机械按键,也可以是触摸式按键。摄像机可以接收按键输入,产生与摄像机的用户设置以及功能控制有关的键信号输入。
摄像头38用于捕获静态图像或视频。
物体通过摄像头38生成光学图像投射到图像传感器36A。图像传感器36A可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。图像传感器36A把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。
在一些实施例中,摄像机可以包括1个或N个摄像头38,N为大于1的正整数。一般的,摄像头和图像传感器之间一一对应。示例性的,本发明实施例中若摄像机包括N个摄像头38,则该摄像机包括N个图像传感器36A。
摄像机通过GPU,显示屏310,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏310和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器30可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏310用于显示图像,视频等。显示屏310包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),量子点发光二极管(quantum dot light emitting diodes,QLED)等。
在一些实施例中,摄像机可以包括1个或N个显示屏310,N为大于1的正整数。例如,在本发明实施例中,显示屏310可用于显示第一图像和第二图像,或者用于显示车辆和车外事物。
摄像机可以通过ISP,摄像头38,视频编解码器,GPU,显示屏310以及应用处理器等实现拍摄功能。
ISP用于处理摄像头38反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头38中。
网络接口39主要用于识别分析结果的上传、图像和数据流的发送,同时,网络接口接收系统工作的配置参数,传递到处理器30。
外设接口311可以连接如目标物体检测器、红灯信号检测器、雷达、ETC天线等外接设备,保证了系统的扩展性。
上述网络接口39和外设接口311均可以称为通信接口。
需要指出的是,图3中示出的设备结构并不构成对摄像机的具体限定。在另一些实施例中,摄像机可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
下面结合图3所示的摄像机对本发明实施例提供的图像处理方法进行描述。其中,下述方法实施例中提及的摄像机可以具有图3所示组成部分,不再赘述。
本发明实施例中的摄像机可以采用同一图像传感器获取曝光时间不同的图像,也可以采用不同图像传感器分别获取曝光时间不同的图像。现在先对摄像机采用同一图像传感器获取曝光时间不同的图像这一情形进行说明。
图4为本发明实施例提供的一种图像处理方法的流程示意图。如图4所示,本发明实施例提供的图像处理方法包括:
S400、摄像机根据第一预设图像参数,在第一时刻采用摄像机中的图像传感器进行拍摄,以获取第一图像。
其中,第一预设图像参数包括第一曝光时间。
曝光时间能够反映在拍照或者摄像过程中,进入摄像机的光的多少。一般情况下,曝光时间越长,进入摄像机的光就越多。曝光时间长适用于光线条件较差的场景中,相反,曝光时间短适用于光线条件较好的场景中。
可选的,第一预设图像参数可以为系统默认参数,也可以为用户根据需求预先设置的,本发明实施例对此不作限定。
在实际应用中,第一预设图像参数还包括第一帧率、第一曝光补偿系数、第一增益或第一快门速度中的至少一个。当然,第一预设图像参数还可以包括背光、白平衡等相关参数,这里不再一一列举。
示例性的,若第一图像为车辆图像,图5示出了摄像机中图像参数的配置界面。如图5所示,在系统默认的情况下,获取第一图像的曝光补偿系数(即上述第一曝光补偿系数)为50,获取第一图像的快门速度(即上述第一快门速度)为1/250秒,获取第一图像的增益(即上述第一增益)为50。当然,用户可以根据实际需求点击相应按钮修改图5示出的每一参数。
S401、摄像机根据第二预设图像参数,在第二时刻采用图像传感器进行拍摄,以获取第二图像。
其中,第二预设图像参数包括第二曝光时间。该第二曝光时间与第一曝光时间不同。
与第一预设图像参数类似,第二预设图像参数可以为系统默认参数,也可以为用户根据需求预先设置的,本发明实施例对此不作限定。
在实际应用中,第二预设图像参数还包括第二帧率、第二曝光补偿系数、第二增益或第二快门速度中的至少一个。当然,第二预设图像参数还可以包括背光、白平衡等相关参数。
示例性的,若第二图像为人体图像,图6示出了摄像机中图像参数的配置界面。如图6所示,在系统默认的情况下,获取第二图像的曝光补偿系数(即上述第二曝光补偿系数)为50,获取第二图像的快门速度(即上述第二快门速度)为1/100秒,获取第二图像的增益(即上述第二增益)为50。当然,用户可以根据实际需求修改图6示出的每一参数。
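上述两组示例参数中,车辆图像采用1/250秒、人体图像采用1/100秒的快门速度,其直观原因可以用一个简化的运动模糊估算来说明(下面的车速、行人速度和成像比例均为示意性假设,并非本申请给出的数值):

```python
def motion_blur_px(speed_m_s, exposure_s, px_per_m):
    """曝光期间目标在像面上移动的像素数(简化的线性模型)。"""
    return speed_m_s * exposure_s * px_per_m

# 假设:车辆约 20 m/s(72 km/h),行人约 1.5 m/s,成像比例 50 像素/米
car_blur = motion_blur_px(20, 1 / 250, 50)         # 约 4 像素
pedestrian_blur = motion_blur_px(1.5, 1 / 100, 50)  # 不足 1 像素
```

可见对高速运动的车辆必须用较短曝光抑制模糊,而对缓慢运动的行人,适当加长曝光既不会造成明显模糊,又能提升画面亮度与细节。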
由于摄像机采用同一图像传感器获取第一图像和第二图像,获取第一图像和第二图像所采用的图像参数不同,因此,摄像机需要在不同时刻获取第一图像和第二图像。
可选的,第一时刻与第二时刻之间的时间差小于预设时长,该预设时长为毫秒级别。例如,预设时长为10毫秒,或者为50毫秒,又或者为100毫秒。这种情况下,可以认为摄像机在同一时刻完成了对同一拍摄场景中不同拍摄对象的拍摄。
示例性的,本发明实施例的摄像机可以在预设时长内获取到违章车辆的图像以及车外事物的图像,且获取到的图像的清晰度较高。这样,有利于后续获取车牌号以及车外事物的信息(如行人的人脸特征),为违章处理中心通报违章车辆提供了有利的证据。此外,摄像机获取到清晰的车辆的图像以及车外事物的图像,还能对公安机关等相关单位侦破案件提供一定帮助。
可选的,本发明实施例的摄像机可以实时采集图像,以获取第一图像和第二图像,也可以在确定车辆的车速超过预设值时采集图像(即在车辆超速的情况下采集图像),以获取第一图像和第二图像,还可以在确定车辆的运动轨迹符合预设曲线时采集图像(如车辆行驶过程中压实线的违章行为),以获取第一图像和第二图像,本发明实施例对此不作限定。
可选的,本发明实施例的摄像机可以对同一拍摄场景进行拍摄,以获取第一图像和第二图像,也可以对不同拍摄场景进行拍摄,以获取第一图像和第二图像,本发明实施例对此不作限定。
例如,摄像机在拍摄角度为A的情况下,10毫秒内获取到了车辆图像和行人图像;或者,摄像机在某一时刻的拍摄角度为A,该摄像机获取到了车辆图像,在另一时刻该摄像机的拍摄角度为B,又获取到了行人图像。
需要说明的是,本发明实施例中的摄像机可以先执行S400,后执行S401,也可以先执行S401,后执行S400,本发明实施例对此不作限定。
S402、摄像机根据第一图像和第二图像,获得车辆的信息和车外事物的信息。
其中,车辆包括在第一图像中,车外事物包括在第二图像中。
可选的,车外事物包括行人,动物,车辆之外的非机动车,或者车辆之外的非机动车的驾驶员中的至少一个。当然,车外事物还可以包括其他运行速度较慢或者处于静止状态的物体,如高楼、交通警示牌等,本发明实施例对此不作限定。
若车外事物包括人,则车外事物的信息可以包括人脸特征、性别、年龄段、衣服颜色等,本发明实施例对此不作限定。
本发明实施例中车辆的信息包括车牌号。当然,车辆的信息还可以包括车辆品牌、车身颜色、车辆型号等。
本发明实施例中的摄像机可以采用下述实现方式I和实现方式II,获得车辆的信息和车外事物的信息。
实现方式I:摄像机采用第一预设编码算法,对第一图像进行编码,得到编码后的第一图像,并对编码后的第一图像进行图像检测,之后,摄像机检测编码后的第一图像中是否存在车辆;若编码后的第一图像中存在车辆,则摄像机获得车辆的信息;此外,摄像机采用第二预设编码算法,对第二图像进行编码,得到编码后的第二图像,之后,摄像机检测编码后的第二图像中是否存在车外事物;若编码后的第二图像中存在车外事物,则摄像机获得车外事物的信息。
其中,上述第一预设编码算法和第二预设编码算法均可以为现有技术中任意一种图像的编码算法,例如,预测编码算法、变换编码算法和量化编码算法等,这里不再一一赘述。
具体的,在编码后的第一图像中存在车辆的情况下,摄像机识别车辆的特征,以获取车辆的信息。同理,在编码后的第二图像中存在车外事物的情况下,摄像机识别车外事物的特征,以获得车外事物的信息。
本发明实施例中的摄像机也需要根据相应的检测参数检测是否存在车辆,以及是否存在车外事物。进一步地,摄像机根据检测参数识别车辆的信息,以及车外事物的信息。其中,检测参数为系统默认的或者用户根据实际需求设置的,本发明实施例对此不作限定。
示例性的,若车外事物包括人,则车外事物的信息可以包括人脸位置信息(faceRect)、人脸特征点信息、人脸姿态信息。
其中,人脸姿态信息可以包括人面俯仰角度(pitch)、平面内旋转角度(roll)和人面偏航度(即左右旋转角度,yaw)。人面偏航度是指用户的面部朝向相对于“摄像机的摄像头与用户的头部的连线”的左右旋转角度。
在一个示例中,摄像机可以提供一个接口(如Face Detector接口),该接口可以接收摄像头拍摄的第二图像。然后,摄像机的处理器可以对第二图像进行编码以及人脸检测,得到上述人脸的特征。最后,摄像机可以返回检测结果(JSON Object),即上述人脸的特征。
例如,以下为本发明实施例中,摄像机返回的检测结果(JSON)示例。
(以下JSON示例系根据下文的字段说明还原,原文此处为代码截图)

{
    "id": 0,
    "height": 1795,
    "left": 761,
    "top": 1033,
    "width": 1496,
    "pitch": -2.9191732,
    "roll": 2.732926,
    "yaw": 0.44898167
}
上述代码中,“id”:0表示上述人脸特征对应的人脸ID为0。其中,一张图像(如第二图像)中可以包括一个或多个人脸。摄像机可以为该一个或多个人脸分配不同的ID,以标识人脸。
“height”:1795表示人脸(即人脸在第二图像中所在的人脸区域)的高度为1795个像素点。“left”:761表示人脸与第二图像左边界的距离为761个像素点。“top”:1033表示人脸与第二图像上边界的距离为1033个像素点。“width”:1496表示人脸的宽度为1496个像素点。“pitch”:-2.9191732表示人脸ID为0的人脸的人面俯仰角度为-2.9191732°。“roll”:2.732926表示人脸ID为0的人脸的平面内旋转角度为2.732926°。
“yaw”:0.44898167表示人脸ID为0的人脸的人面偏航度(即左右旋转角度)α=0.44898167°。若α=0.44898167°,0.44898167°>0°,则用户的面部朝向相对于摄像头与该用户头部的连线向右旋转0.44898167°。
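基于上述字段约定,检测结果可以按如下方式解析并判断人面偏航方向(以下解析逻辑仅为示意,字段名以上文示例为准):

```python
import json

# 按上文字段约定构造的检测结果(JSON字符串)
result = json.loads('{"id": 0, "yaw": 0.44898167, "pitch": -2.9191732, "roll": 2.732926}')

def yaw_direction(yaw_deg):
    """yaw>0 表示面部相对摄像头与头部的连线向右旋转,yaw<0 向左,0 为正对。"""
    if yaw_deg > 0:
        return "right"
    if yaw_deg < 0:
        return "left"
    return "front"
```

例如,对上述结果调用 yaw_direction(result["yaw"]),即可得到“向右旋转”的判断。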
在另一实施例中,摄像机还可以判断人的眼睛是否睁开。例如,摄像机可以通过以下方法判断人的眼睛是否睁开:摄像机在进行人脸检测时,判断是否采集到人的虹膜信息;如果采集到虹膜信息,则确定人的眼睛睁开;如果没有采集到虹膜信息,则确定人的眼睛没有睁开。当然,还可以采用其他已有技术进行眼睛是否睁开的检测。
本发明实施例中摄像机对编码后的第二图像进行检测,检测编码后的第二图像中是否包括人的方法可以参考常规技术中检测人脸的具体方法,本发明实例不再一一赘述。
摄像机在获取到第一图像和第二图像后,分别对第一图像和第二图像进行编码和图像检测等处理,有效地提高了摄像机的处理效率,而且摄像机分别对第一图像和第二图像进行图像检测,有效地保证了获取到车辆的信息和车外事物的信息的准确性。
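实现方式I的“分别编码、分别检测、分别取信息”流程可以用如下最小化的Python框架示意(其中encode、detect_vehicle、detect_pedestrian均为假设的占位实现,实际的编码与检测算法本申请不作限定):

```python
def encode(image, algorithm):
    """占位:按指定预设编码算法编码图像,这里仅作标记,不做真实编码。"""
    return {"data": image, "codec": algorithm}

def detect_vehicle(encoded):
    """占位:在编码后的图像中检测车辆,返回车辆的信息或None。"""
    return encoded["data"].get("vehicle")

def detect_pedestrian(encoded):
    """占位:在编码后的图像中检测车外事物,返回其信息或None。"""
    return encoded["data"].get("pedestrian")

def process(first_image, second_image):
    info = {}
    vehicle = detect_vehicle(encode(first_image, "codec_1"))
    if vehicle is not None:            # 编码后的第一图像中存在车辆
        info["vehicle"] = vehicle
    thing = detect_pedestrian(encode(second_image, "codec_2"))
    if thing is not None:              # 编码后的第二图像中存在车外事物
        info["outside"] = thing
    return info
```

该框架仅体现流程顺序:先编码、再检测、检测命中后才提取信息,两路图像互不影响。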
实现方式II:若摄像机对同一拍摄场景进行拍摄,以获取第一图像和第二图像,则摄像机采用预设的融合算法,将第一图像和第二图像融合,生成第三图像,之后,该摄像机采用第三预设编码算法,对第三图像进行编码,得到编码后的第三图像,之后,摄像机检测编码后的第三图像中是否存在车辆和车外事物;若编码后的第三图像中存在车辆和车外事物,则摄像机获得车辆的信息和车外事物的信息。
为了完整、清晰的反映该拍摄场景,摄像机可以采用预设的融合算法将第一图像和第二图像融合为符合配置要求的第三图像。
其中,预设的融合算法可以为现有技术中任意一种图像融合算法,例如,DSP融合算法、最佳缝合线算法等,这里不再一一赘述。
摄像机将第一图像和第二图像融合后,对融合后的图像(即第三图像)进行编码,以及图像检测。其中,摄像机对第三图像进行编码以及图像检测的方法可以参考上述摄像机对第一图像进行编码以及图像检测的方法的描述,这里不再进行详细赘述。
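作为预设融合算法的一个最简示意,下面给出按固定权重对两张同尺寸灰度图逐像素加权平均的片段(真实系统通常采用DSP融合、最佳缝合线等更复杂的算法,此处仅说明“两图合一”的基本思路):

```python
def fuse(img_a, img_b, weight_a=0.5):
    """逐像素加权平均;img_a/img_b 为同尺寸的二维灰度列表。"""
    weight_b = 1.0 - weight_a
    return [
        [weight_a * pa + weight_b * pb for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

# 例:两张 1x2 灰度图按 0.5 权重融合
third = fuse([[100, 200]], [[200, 100]])
```

融合得到的第三图像兼具两张输入图像的信息,后续只需对其做一次编码与检测。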
综上,本发明实施例中的一台摄像机采用一个图像传感器即可获取到不同曝光时间的图像,后续,该摄像机可以根据获取到的图像,获得车辆的信息和车外事物的信息,完成了现有技术中多台摄像机的功能。相比于现有技术,本发明实施例提供的方案有效地减少了成本,节省了部署空间。
可选的,若摄像机实时采集图像,则在获取到第一图像和第二图像后,还可以根据相关算法(如预设的用于确定违章车辆的算法)确定违章车辆的信息和违章人员的信息。
进一步地,摄像机在获得车辆的信息和车外事物的信息后,还可以向与该摄像机网络连接的平台(或服务器)发送车辆的信息和车外事物的信息,以便于平台(或服务器)管理员查看车辆的信息和车外事物的信息。结合上述图4,如图7所示,本发明实施例提供的图像处理方法在S402后,还可以包括S701。
S701、摄像机向与该摄像机网络连接的平台(或服务器)发送车辆的信息和/或车外事物的信息。
进一步地,若车辆的信息不满足管理员的需求,则该管理员可以重新调整第一预设图像参数和获得车辆的信息所参考的检测参数。后续,摄像机可以根据重新调整后的参数获取第一图像,以及车辆的信息。
同理,若车外事物的信息不满足用户的需求,则该管理员可以重新调整第二预设图像参数和获得车外事物的信息所参考的检测参数。后续,摄像机可以根据重新调整后的参数获取第二图像,以及车外事物的信息。
如图7所示,本发明实施例提供的图像处理方法还可以包括S702和S703。
S702、摄像机接收与该摄像机网络连接的平台(或服务器)发送的调整指示。
该调整指示用于调整第一预设图像参数、第二预设图像参数、第一检测参数(获得车辆的信息所参考的参数)或第二检测参数(获得车外事物的信息所参考的参数)中的至少一个。
S703、摄像机根据调整指示,调整相应参数,并根据调整后的参数获取图像、车辆的信息以及车外事物的信息。
也就是说,摄像机根据调整后的参数重新执行S400~S402。
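调整指示对参数的作用可以理解为对当前参数集的增量更新,例如(以下字段名与数据结构均为示意性假设):

```python
def apply_adjustment(params, adjustment):
    """根据平台(或服务器)下发的调整指示更新相应参数,返回新的参数集。"""
    updated = dict(params)     # 不修改原参数集
    updated.update(adjustment)  # 仅覆盖调整指示中涉及的字段
    return updated

current = {"shutter": 0.004, "gain": 50}
current = apply_adjustment(current, {"gain": 60})  # 仅增益被调整
```

更新后的参数集即作为下一轮执行S400~S402(或S800~S802)的依据。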
可以看出,本发明实施例提供的图像处理方法不仅能够有效地减少了成本,节省部署空间,还能根据管理员的需求实时调整参数,以获得满足管理员需求的图像和对象的信息。
本发明实施例中的摄像机还可以采用不同图像传感器分别获取曝光时间不同的图像。现在对这一情形进行说明。
图8为本发明实施例提供的另一种图像处理方法的流程示意图。如图8所示,本发明实施例提供的图像处理方法包括:
S800、摄像机根据第一预设图像参数,在第一时刻采用摄像机中的第一图像传感器进行拍摄,以获取第一图像。
S800可以参考上述S400的描述,这里不再进行详细赘述。
S801、摄像机根据第二预设图像参数,在第二时刻采用摄像机中的第二图像传感器进行拍摄,以获取第二图像。
S801可以参考上述S401的描述,这里不再进行详细赘述。
由于摄像机采用不同的图像传感器获取图像,因此,本发明实施例中的摄像机可以先执行S800,后执行S801,也可以先执行S801,后执行S800,还可以同时执行S800和S801,本发明实施例对此不作限定。
S802、摄像机根据第一图像和第二图像,获得车辆的信息和车外事物的信息。
S802可以参考上述S402的描述,这里不再进行详细赘述。
综上,本发明实施例中的一台摄像机可获取到不同曝光时间的图像,后续,该摄像机可以根据获取到的图像,获得车辆的信息和车外事物的信息,完成了现有技术中多台摄像机的功能。相比于现有技术,本发明实施例提供的方案有效地减少了成本,节省了部署空间。
进一步地,摄像机在获得车辆的信息和车外事物的信息后,还可以向与该摄像机网络连接的平台(或服务器)发送车辆的信息和车外事物的信息,以便于管理员查看车辆的信息和车外事物的信息。结合上述图8,如图9所示,本发明实施例提供的图像处理方法在S802后,还可以包括S901。
S901、摄像机向与该摄像机网络连接的平台(或服务器)发送车辆的信息和/或车外事物的信息。
进一步地,若车辆的信息不满足管理员的需求,则该管理员可以重新调整第一预设图像参数和获得车辆的信息所参考的检测参数。后续,摄像机可以根据重新调整后的参数获取第一图像,以及车辆的信息。
同理,若车外事物的信息不满足用户的需求,则该管理员可以重新调整第二预设图像参数和获得车外事物的信息所参考的检测参数。后续,摄像机可以根据重新调整后的参数获取第二图像,以及车外事物的信息。
如图9所示,本发明实施例提供的图像处理方法还可以包括S902和S903。
S902、摄像机接收与该摄像机网络连接的平台(或服务器)发送的调整指示。
该调整指示用于调整第一预设图像参数、第二预设图像参数、第一检测参数(获得车辆的信息所参考的参数)或第二检测参数(获得车外事物的信息所参考的参数)中的至少一个。
S903、摄像机根据调整指示,调整相应参数,并根据调整后的参数获取图像、车辆的信息以及车外事物的信息。
可以看出,本发明实施例提供的图像处理方法不仅能够有效地减少了成本,节省部署空间,还能根据管理员的需求实时调整参数,以获得满足管理员需求的图像和对象的信息。
上述主要从方法的角度对本发明实施例提供的方案进行了介绍。为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本发明能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
本发明实施例可以根据上述方法示例对上述服务节点等进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本发明实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
如图10所示,为本发明实施例提供的一种摄像机的结构示意图。图10所示的摄像机100可以应用于道路交通监控场景。该摄像机100可以用于执行上文提供的任一种图像处理方法中摄像机执行的步骤。
摄像机100可以包括:采集单元1001和处理单元1002。其中,采集单元1001,用于获取第一图像和第二图像。处理单元1002,用于获得车辆的信息和车外事物的信息。示例性的,采集单元1001可以用于执行S400、S401、S800、S801。处理单元1002可以用于执行S402、S802、S703、S903。
可选的,摄像机还包括发送单元1003和接收单元1004。发送单元1003,用于发送车辆的信息和车外事物的信息。接收单元1004,用于接收调整指示。示例性的,发送单元1003可以用于执行S701、S901。接收单元1004可以用于执行S702、S902。
作为一个示例,结合图3,摄像机100中的接收单元1004和发送单元1003可以对应图3中的网络接口39或外设接口311,处理单元1002可以对应图3中的处理器30,采集单元1001可以对应图3中的图像传感器36A。
本实施例中相关内容的解释可参考上述方法实施例,此处不再赘述。
本发明另一实施例还提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当指令在摄像机上运行时,该摄像机执行上述方法实施例所示的方法流程中摄像机执行的各个步骤。
在本发明的另一实施例中,还提供一种计算机程序产品,该计算机程序产品包括计算机执行指令,该计算机执行指令存储在计算机可读存储介质中;摄像机的至少一个处理器可以从计算机可读存储介质读取该计算机执行指令,至少一个处理器执行该计算机执行指令,使得摄像机执行上述方法实施例所示的方法流程中摄像机执行的各个步骤。
在上述实施例中,可以全部或部分的通过软件,硬件,固件或者其任意组合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式出现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本发明实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。
计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据终端。该可用介质可以是磁性介质,(例如,软盘,硬盘、磁带)、光介质(例如,DVD)或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本发明所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何在本发明揭露的技术范围内的变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。

Claims (14)

  1. 一种图像处理方法,其特征在于,应用于包括摄像机的道路交通监控场景,所述图像处理方法包括:
    根据第一预设图像参数,在第一时刻采用所述摄像机中的图像传感器进行拍摄,以获取第一图像,所述第一预设图像参数包括第一曝光时间;
    根据第二预设图像参数,在第二时刻采用所述图像传感器进行拍摄,以获取第二图像,所述第二预设图像参数包括第二曝光时间,所述第二曝光时间与所述第一曝光时间不同;
    根据所述第一图像和所述第二图像,获得车辆的信息和车外事物的信息,所述第一图像中存在所述车辆,所述第二图像中存在所述车外事物。
  2. 一种图像处理方法,其特征在于,应用于包括摄像机的道路交通监控场景,所述图像处理方法包括:
    根据第一预设图像参数,在第一时刻采用所述摄像机中的第一图像传感器进行拍摄,以获取第一图像,所述第一预设图像参数包括第一曝光时间;
    根据第二预设图像参数,在第二时刻采用所述摄像机中的第二图像传感器进行拍摄,以获取第二图像,所述第二图像传感器与所述第一图像传感器不同,所述第二预设图像参数包括第二曝光时间,所述第二曝光时间与所述第一曝光时间不同;
    根据所述第一图像和所述第二图像,获得车辆的信息和车外事物的信息,所述第一图像中存在所述车辆,所述第二图像中存在所述车外事物。
  3. 根据权利要求1或2所述的图像处理方法,其特征在于,所述根据所述第一图像和所述第二图像,获得车辆的信息和车外事物的信息,包括:
    采用第一预设编码算法,对所述第一图像进行编码,得到编码后的第一图像;
    检测所述编码后的第一图像中是否存在车辆;
    在所述编码后的第一图像中存在车辆的情况下,获得所述车辆的信息;
    采用第二预设编码算法,对所述第二图像进行编码,得到编码后的第二图像;
    检测所述编码后的第二图像中是否存在车外事物;
    在所述编码后的第二图像中存在车外事物的情况下,获得所述车外事物的信息。
  4. 根据权利要求1或2所述的图像处理方法,其特征在于,所述第一图像和所述第二图像为所述摄像机对相同拍摄场景进行拍摄得到的;所述根据所述第一图像和所述第二图像,获得车辆的信息和车外事物的信息,包括:
    采用预设的融合算法,将所述第一图像和所述第二图像融合,生成第三图像;
    采用第三预设编码算法,对所述第三图像进行编码,得到编码后的第三图像;
    检测所述编码后的第三图像是否存在车辆和车外事物;
    在所述编码后的第三图像中存在车辆和车外事物的情况下,获得所述车辆的信息和所述车外事物的信息。
  5. 根据权利要求1-4中任意一项所述的图像处理方法,其特征在于,
    所述第一预设图像参数还包括第一帧率、第一曝光补偿系数、第一增益或第一快门速度中的至少一个;
    所述第二预设图像参数还包括第二帧率、第二曝光补偿系数、第二增益或第二快门速度中的至少一个。
  6. 根据权利要求1-5中任意一项所述的图像处理方法,其特征在于,
    所述车辆的信息包括车牌号;
    所述车外事物包括行人,动物,所述车辆之外的非机动车,或者所述车辆之外的非机动车的驾驶员中的至少一个。
  7. 一种摄像机,其特征在于,应用于道路交通监控场景,所述摄像机包括:
    采集单元,用于根据第一预设图像参数,在第一时刻采用所述摄像机中的图像传感器进行拍摄,以获取第一图像,所述第一预设图像参数包括第一曝光时间,以及用于根据第二预设图像参数,在第二时刻采用所述图像传感器进行拍摄,以获取第二图像,所述第二预设图像参数包括第二曝光时间,所述第二曝光时间与所述第一曝光时间不同;
    处理单元,用于根据所述采集单元获取到的所述第一图像和所述第二图像,获得车辆的信息和车外事物的信息,所述第一图像中存在所述车辆,所述第二图像中存在所述车外事物。
  8. 一种摄像机,其特征在于,应用于道路交通监控场景,所述摄像机包括:
    采集单元,用于根据第一预设图像参数,在第一时刻采用所述摄像机中的第一图像传感器进行拍摄,以获取第一图像,以及根据第二预设图像参数,在第二时刻采用所述摄像机中的第二图像传感器进行拍摄,以获取第二图像,所述第二图像传感器与所述第一图像传感器不同,所述第一图像的曝光时间与所述第二图像的曝光时间不同;
    处理单元,用于根据所述采集单元获取到的所述第一图像和所述第二图像,获得车辆的信息和车外事物的信息,所述第一图像中存在所述车辆,所述第二图像中存在所述车外事物。
  9. 根据权利要求7或8所述的摄像机,其特征在于,所述处理单元具体用于:
    采用第一预设编码算法,对所述第一图像进行编码,得到编码后的第一图像;
    检测所述编码后的第一图像中是否存在车辆;
    在所述编码后的第一图像中存在车辆的情况下,获得所述车辆的信息;
    采用第二预设编码算法,对所述第二图像进行编码,得到编码后的第二图像;
    检测所述编码后的第二图像中是否存在车外事物;
    在所述编码后的第二图像中存在车外事物的情况下,获得所述车外事物的信息。
  10. 根据权利要求7或8所述的摄像机,其特征在于,所述第一图像和所述第二图像为所述采集单元对相同拍摄场景进行拍摄得到的;所述处理单元具体用于:
    采用预设的融合算法,将所述第一图像和所述第二图像融合,生成第三图像;
    采用第三预设编码算法,对所述第三图像进行编码,得到编码后的第三图像;
    检测所述编码后的第三图像是否存在车辆和车外事物;
    在所述编码后的第三图像中存在车辆和车外事物的情况下,获得所述车辆的信息和所述车外事物的信息。
  11. 根据权利要求7-10中任意一项所述的摄像机,其特征在于,
    所述第一预设图像参数还包括第一帧率、第一曝光补偿系数、第一增益或第一快门速度中的至少一个;
    所述第二预设图像参数还包括第二帧率、第二曝光补偿系数、第二增益或第二快门速度中的至少一个。
  12. 根据权利要求7-11中任意一项所述的摄像机,其特征在于,
    所述车辆的信息包括车牌号;
    所述车外事物包括行人,动物,所述车辆之外的非机动车,或者所述车辆之外的非机动车的驾驶员中的至少一个。
  13. 一种摄像机,其特征在于,所述摄像机包括:一个或多个处理器,以及存储器;
    所述存储器与所述一个或多个处理器耦合;所述存储器用于存储计算机程序代码,所述计算机程序代码包括指令,当所述一个或多个处理器执行所述指令时,所述摄像机执行如权利要求1-6中任意一项所述的图像处理方法。
  14. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在摄像机上运行时,使得所述摄像机执行如权利要求1-6中任意一项所述的图像处理方法。
PCT/CN2019/089115 2019-05-29 2019-05-29 一种图像处理方法及装置 WO2020237542A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/089115 WO2020237542A1 (zh) 2019-05-29 2019-05-29 一种图像处理方法及装置
CN201980070008.6A CN112889271B (zh) 2019-05-29 2019-05-29 一种图像处理方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/089115 WO2020237542A1 (zh) 2019-05-29 2019-05-29 一种图像处理方法及装置

Publications (1)

Publication Number Publication Date
WO2020237542A1 true WO2020237542A1 (zh) 2020-12-03

Family

ID=73553072

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089115 WO2020237542A1 (zh) 2019-05-29 2019-05-29 一种图像处理方法及装置

Country Status (2)

Country Link
CN (1) CN112889271B (zh)
WO (1) WO2020237542A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040091A (zh) * 2021-09-28 2022-02-11 北京瞰瞰智能科技有限公司 图像处理方法、摄像系统以及车辆
WO2024007428A1 (zh) * 2022-07-04 2024-01-11 天津鲁天教育科技有限公司 线上管理用多摄像头、夹持器一体装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856764A (zh) * 2012-11-30 2014-06-11 浙江大华技术股份有限公司 一种利用双快门进行监控的装置
CN104144325A (zh) * 2014-07-08 2014-11-12 北京汉王智通科技有限公司 一种监控方法及监控设备
US20150163390A1 (en) * 2013-12-10 2015-06-11 Samsung Techwin Co., Ltd. Method and apparatus for recognizing information in an image
CN104883511A (zh) * 2015-06-12 2015-09-02 联想(北京)有限公司 图像处理方法以及电子设备
FR3048104B1 (fr) * 2016-02-19 2018-02-16 Hymatom Procede et dispositif de capture d'images d'un vehicule

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5986461B2 (ja) * 2012-09-07 2016-09-06 キヤノン株式会社 画像処理装置及び画像処理方法、プログラム、並びに記憶媒体
US9277132B2 (en) * 2013-02-21 2016-03-01 Mobileye Vision Technologies Ltd. Image distortion correction of a camera with a rolling shutter
JP2015033107A (ja) * 2013-08-07 2015-02-16 ソニー株式会社 画像処理装置および画像処理方法、並びに、電子機器
CN105227823A (zh) * 2014-06-03 2016-01-06 维科技术有限公司 移动终端的拍摄方法及其装置
CN108961169A (zh) * 2017-05-22 2018-12-07 杭州海康威视数字技术股份有限公司 监控抓拍方法及装置
CN109309792B (zh) * 2017-07-26 2020-12-25 比亚迪股份有限公司 车载摄像头的图像处理方法、装置及车辆
CN107395997A (zh) * 2017-08-18 2017-11-24 维沃移动通信有限公司 一种拍摄方法及移动终端
CN109640032B (zh) * 2018-04-13 2021-07-13 河北德冠隆电子科技有限公司 基于人工智能多要素全景监控检测五维预警系统
CN109167931B (zh) * 2018-10-23 2021-04-13 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及移动终端
CN109688335A (zh) * 2018-12-04 2019-04-26 珠海格力电器股份有限公司 摄像头的控制方法和装置、终端的解锁方法和装置、手机
CN109547701B (zh) * 2019-01-04 2021-07-09 Oppo广东移动通信有限公司 图像拍摄方法、装置、存储介质及电子设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856764A (zh) * 2012-11-30 2014-06-11 浙江大华技术股份有限公司 一种利用双快门进行监控的装置
US20150163390A1 (en) * 2013-12-10 2015-06-11 Samsung Techwin Co., Ltd. Method and apparatus for recognizing information in an image
CN104144325A (zh) * 2014-07-08 2014-11-12 北京汉王智通科技有限公司 一种监控方法及监控设备
CN104883511A (zh) * 2015-06-12 2015-09-02 联想(北京)有限公司 图像处理方法以及电子设备
FR3048104B1 (fr) * 2016-02-19 2018-02-16 Hymatom Procede et dispositif de capture d'images d'un vehicule

Also Published As

Publication number Publication date
CN112889271A (zh) 2021-06-01
CN112889271B (zh) 2022-06-07

Similar Documents

Publication Publication Date Title
CN108419023B (zh) 一种生成高动态范围图像的方法以及相关设备
US11790504B2 (en) Monitoring method and apparatus
CN109005366A (zh) 摄像模组夜景摄像处理方法、装置、电子设备及存储介质
WO2021258321A1 (zh) 一种图像获取方法以及装置
WO2017049922A1 (zh) 一种图像信息采集装置、图像采集方法及其用途
US20230319395A1 (en) Service processing method and device
WO2021109620A1 (zh) 一种曝光参数的调节方法及装置
WO2019148978A1 (zh) 图像处理方法、装置、存储介质及电子设备
US20230360254A1 (en) Pose estimation method and related apparatus
US9258481B2 (en) Object area tracking apparatus, control method, and program of the same
WO2022141445A1 (zh) 一种图像处理方法以及装置
CN104134352A (zh) 基于长短曝光结合的视频车辆特征检测系统及其检测方法
CN116582741B (zh) 一种拍摄方法及设备
CN108093158B (zh) 图像虚化处理方法、装置、移动设备和计算机可读介质
CN112165573A (zh) 拍摄处理方法和装置、设备、存储介质
CN109618102B (zh) 对焦处理方法、装置、电子设备及存储介质
WO2020237542A1 (zh) 一种图像处理方法及装置
WO2022141333A1 (zh) 一种图像处理方法以及装置
WO2022141351A1 (zh) 一种视觉传感器芯片、操作视觉传感器芯片的方法以及设备
WO2022083325A1 (zh) 拍照预览方法、电子设备以及存储介质
CN117201930B (zh) 一种拍照方法和电子设备
WO2021185374A1 (zh) 一种拍摄图像的方法及电子设备
WO2021078145A1 (zh) 基于活体感应运动趋势检测的无线传感人脸识别装置
US20230419505A1 (en) Automatic exposure metering for regions of interest that tracks moving subjects using artificial intelligence
CN204883859U (zh) 一种智能防盗行车记录仪

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19930866

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19930866

Country of ref document: EP

Kind code of ref document: A1