CN110780982A - Image processing method, device and equipment


Info

Publication number: CN110780982A
Application number: CN201910667544.6A
Authority: CN (China)
Prior art keywords: subtask, operator, detection object, image, determining
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 周飞 (Zhou Fei)
Current assignee: Shenzhen Mai Technology Co Ltd
Original assignee: Shenzhen Mai Technology Co Ltd
Application filed by Shenzhen Mai Technology Co Ltd
Publication of CN110780982A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

The invention discloses an image processing method, apparatus, and device, wherein an operator library comprising a plurality of operators is pre-configured in the device. The method comprises the following steps: determining the subtasks corresponding to an image to be processed and the execution order among those subtasks; and sequentially calling the operator corresponding to each subtask, in that order, to process the image. With this scheme, different subtasks can be configured for an image and executed by the corresponding operators, so the image can be processed in a variety of ways; the same device can therefore process images in different scenes, which improves the versatility of the device.

Description

Image processing method, device and equipment
Technical Field
The present invention relates to the field of industrial machine vision, and in particular, to an image processing method, apparatus, and device.
Background
Currently, some electronic devices have image processing functions. For example, a snapshot machine deployed in an attendance scene can perform face recognition, a camera deployed in a traffic scene can perform license plate recognition, and an industrial camera deployed in a logistics scene can recognize information such as the color and size of an object.
However, the algorithms configured in these electronic devices are fixed, so devices deployed in different scenes cannot be used interchangeably; for example, a camera used for face recognition cannot be used for license plate recognition. Existing image processing devices therefore have poor versatility.
Disclosure of Invention
In view of the above, the present invention provides an image processing method, an image processing apparatus and an image processing device, so as to improve the versatility of the image processing device.
Based on the above object, an embodiment of the present invention provides an image processing method, including:
acquiring an image to be processed;
determining each subtask corresponding to the image to be processed and an execution sequence among the subtasks;
and sequentially calling operators corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the image to be processed.
Optionally, after the acquiring the image to be processed, the method further includes:
identifying each detection object in the image to be processed;
the determining each subtask corresponding to the image to be processed and the execution sequence among each subtask includes:
respectively determining each subtask corresponding to each detection object; determining an execution order between each subtask;
the step of sequentially calling an operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the image to be processed includes:
and sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the detection object.
Optionally, after identifying each detection object in the image to be processed, the method further includes:
for each detection object, determining a filtering mode corresponding to the detection object;
filtering the detection object by using the determined filtering mode;
the step of sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the detection object includes:
and sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the filtered detection object.
Optionally, the determining, for each detection object, a filtering manner corresponding to the detection object includes:
displaying interpretation information corresponding to each filtering mode; determining a filtering mode selected by a user for each detection object;
or, determining a filtering mode corresponding to each detection object according to a preset corresponding relation between the object type and the filtering mode;
or, displaying the filtering result of each filtering mode on the detection object; and determining the filtering mode selected by the user for each detection object.
Optionally, the sequentially calling the operator corresponding to each subtask to process the detected object includes any one or more of the following steps:
if the detected object is a graphic code, calling a graphic code scanning operator to scan the detected object;
calling a positioning operator to determine the position of the detection object;
calling a color recognition operator to recognize the color of the detection object;
calling a size identification operator to identify the size of the detection object;
and calling a model matching operator to judge whether the detection object is matched with a preset model.
Optionally, the determining each subtask corresponding to the image to be processed and the execution sequence between each subtask includes:
displaying a plurality of first-level subtasks in the interactive interface;
determining a first-level subtask selected by a user as a first-level subtask to be executed;
displaying a next-level subtask of the first-level subtask to be executed;
and determining the next-level subtask selected by the user as the next-level subtask to be executed.
In view of the above object, an embodiment of the present invention further provides an image processing apparatus, including:
the acquisition module is used for acquiring an image to be processed;
the first determining module is used for determining each subtask corresponding to the image to be processed and the execution sequence among the subtasks;
and the calling module is used for sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the image to be processed.
Optionally, the apparatus further comprises:
the identification module is used for identifying each detection object in the image to be processed;
the first determining module is specifically configured to: respectively determining each subtask corresponding to each detection object; determining an execution order between each subtask;
the calling module is specifically configured to: and sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the detection object.
Optionally, the apparatus further comprises:
the second determining module is used for determining a filtering mode corresponding to each detection object;
the filtering module is used for filtering the detection object by utilizing the determined filtering mode;
the calling module is specifically configured to:
and sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the filtered detection object.
Optionally, the second determining module is specifically configured to:
displaying interpretation information corresponding to each filtering mode; determining a filtering mode selected by a user for each detection object;
or, determining a filtering mode corresponding to each detection object according to a preset corresponding relation between the object type and the filtering mode;
or, displaying the filtering result of each filtering mode on the detection object; and determining the filtering mode selected by the user for each detection object.
Optionally, the calling module is configured to perform any one or more of the following steps:
if the detected object is a graphic code, calling a graphic code scanning operator to scan the detected object;
calling a positioning operator to determine the position of the detection object;
calling a color recognition operator to recognize the color of the detection object;
calling a size identification operator to identify the size of the detection object;
and calling a model matching operator to judge whether the detection object is matched with a preset model.
Optionally, the first determining module is specifically configured to:
displaying a plurality of first-level subtasks in the interactive interface;
determining a first-level subtask selected by a user as a first-level subtask to be executed;
displaying a next-level subtask of the first-level subtask to be executed;
and determining the next-level subtask selected by the user as the next-level subtask to be executed.
In view of the above object, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements any one of the image processing methods when executing the program.
Optionally, the processor includes: the system comprises a field programmable gate array, a central processing unit and a digital signal processing chip;
the field programmable gate array is used for determining processing logic of the image; calling the central processing unit based on the processing logic to perform feature extraction on the image to obtain feature data; and calling the digital signal processing chip to perform operation processing on the characteristic data.
By applying this embodiment of the present invention, an operator library comprising a plurality of operators is pre-configured in the device; the subtasks corresponding to the image to be processed, and the execution order among them, are determined; and the operator corresponding to each subtask is called in that order to process the image. With this scheme, different subtasks can be configured for an image and executed by the corresponding operators, so the image can be processed in a variety of ways; the same device can therefore process images in different scenes, which improves the versatility of the device.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a camera according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an image processing flow according to an embodiment of the present invention;
FIG. 4 is a schematic view of another image processing flow according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name. As such, "first" and "second" are merely for convenience of description and should not be construed as limiting the embodiments of the present invention; this is not repeated in the following embodiments.
In order to solve the foregoing technical problems, embodiments of the present invention provide an image processing method, apparatus, and device. The method and apparatus may be applied to various electronic devices, such as image acquisition devices (e.g., snapshot machines or digital industrial cameras), or to other devices communicatively connected to an image acquisition device; no specific limitation is imposed. The image processing method provided by an embodiment of the present invention is described first.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention, including:
s101: and acquiring an image to be processed.
If the electronic device executing this scheme (the execution subject, hereinafter referred to as "the device") is an image acquisition device, the device can capture the image to be processed itself; if the device is communicatively connected to an image acquisition device, it can obtain an image captured by that image acquisition device as the image to be processed.
For example, in the case of a digital industrial camera, the camera may be triggered to capture an image when material on a production line (or a logistics line; the same applies below) is detected to have reached a specified position. For instance, a photoelectric sensor can be arranged on the production line to detect whether an object blocks its beam; if so, the material has reached the specified position, and the sensor triggers the digital industrial camera to aim at the material and capture an image, which serves as the image to be processed.
S102: and determining each subtask corresponding to the image to be processed and the execution sequence among the subtasks.
For example, the device may present an interactive interface to a user, and the user selects each subtask corresponding to the image to be processed and an execution sequence between each subtask. For example, a user may select to scan a graphic code in an image, determine a position of a material in the image, and identify a color of the material in the image, where scanning the graphic code, determining the position, and identifying the color are three subtasks. The graphic code may be a bar code, a two-dimensional code, and the like, and is not limited specifically. For another example, the user may select to identify the number of the materials in the image, and determine the positions of the materials in the image if the number is greater than one. For another example, the user may select to identify the size of the material in the image first, and then determine whether the size of the material meets the tolerance condition, in which case, the identification of the size and the tolerance determination are two subtasks.
The user can determine which subtasks to execute, and their order, according to the actual situation. For example, for face recognition in an attendance scene, the subtasks may include feature extraction and model matching; for license plate recognition in a traffic scene, feature extraction and character recognition; and in a logistics scene, scanning graphic codes, determining locations, identifying colors, and so on; these are not enumerated exhaustively. Face recognition can be understood as an overall task, of which feature extraction and model matching are parts; each part is therefore referred to as a subtask. Similarly, license plate recognition is an overall task whose parts, feature extraction and character recognition, are subtasks. Other scenarios are similar and are not described one by one.
In one embodiment, after S101, each detection object in the image to be processed may be identified; thus, S102 may include: respectively determining each subtask corresponding to each detection object; the execution order between each subtask is determined.
For example, in this embodiment, ROI (Region of Interest) detection may be performed on the image, and each detected image region may be treated as one detection object. Alternatively, each detection object in the image may be designated by the user manually drawing a region. Taking a logistics scene as an example, a camera captures an image of goods; the image contains the goods, a graphic code pasted on the goods, and some background area. ROI detection is performed on the image, identifying the goods and the graphic code; the goods are recorded as detection object 1 and the graphic code as detection object 2.
The user can set the subtasks corresponding to these two detection objects. For example, the subtasks set for detection object 1 are: identifying the color and determining the size; the subtask set for detection object 2 is: code scanning. The user may also set the execution order of the three subtasks, for example: scan the graphic code, identify the color, then determine the size.
In one embodiment, S102 may include: displaying a plurality of first-level subtasks in the interactive interface; determining a first-level subtask selected by a user as a first-level subtask to be executed; displaying a next-level subtask of the first-level subtask to be executed; and determining the next-level subtask selected by the user as the next-level subtask to be executed.
In this embodiment, the subtasks may have a hierarchical relationship, for example, when performing face recognition, feature extraction is usually performed first, and then model matching is performed, so that the subtask of model matching is the next level of the subtask of feature extraction. For another example, when license plate recognition is performed, feature extraction is usually performed first, and then character recognition is performed. For another example, when detecting a material on a production line, the size of the material in the image is usually identified, and then it is determined whether the size of the material meets a tolerance condition.
Two first-level subtasks, "feature extraction" and "identify material size", can be displayed in the interactive interface. If the first-level subtask selected by the user is "feature extraction", its next-level subtasks "model matching" and "character recognition" can then be presented; assuming the user selects "model matching", the subtasks and execution order determined in S102 are: "feature extraction" followed by "model matching". If the first-level subtask selected by the user is "identify material size", its next-level subtask "tolerance determination" can be presented; the remaining situations are similar and are not repeated.
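For illustration only, the following minimal C++ sketch models the hierarchical subtask menu described above as a tree whose children are presented level by level; the task names, the tree contents, and the stand-in for the user's selection are assumptions for this sketch, not part of the patented implementation.

#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical subtask tree: "" is the root whose children are the
    // first-level subtasks shown in the interactive interface.
    std::map<std::string, std::vector<std::string>> nextLevel = {
        {"", {"feature extraction", "identify material size"}},
        {"feature extraction", {"model matching", "character recognition"}},
        {"identify material size", {"tolerance determination"}},
    };
    std::vector<std::string> plan;  // subtasks to be executed, in order
    std::string current = "";
    while (nextLevel.count(current) && !nextLevel[current].empty()) {
        std::cout << "Select one of:";
        for (const auto& t : nextLevel[current]) std::cout << " [" << t << "]";
        std::cout << "\n";
        current = nextLevel[current].front();  // stand-in for the user's choice
        plan.push_back(current);
    }
    // plan now holds, e.g., {"feature extraction", "model matching"}.
    return 0;
}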
S103: and according to the execution sequence, sequentially calling the operator corresponding to each subtask in a pre-configured operator library to process the image to be processed.
An operator library is configured in advance and may include a plurality of operators, such as a geometric contour extraction operator, a BLOB (Binary Large Object, a container for binary data) analysis operator, a gray histogram generation operator, an edge width operator, a color recognition operator, a model matching operator, a positioning operator, and a tolerance determination operator; these are not listed one by one.
There is a correspondence between operators and subtasks. For example, the subtask "determine the position of an object (material, goods, etc.)" may correspond to the positioning operator, the subtask "identify the color of an object" to the color recognition operator, and the subtask "model matching" to the model matching operator; no specific limitation is imposed. After the subtasks and their execution order are determined, the corresponding operators can be called to execute them according to this subtask-operator correspondence.
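As a minimal, non-authoritative sketch of this subtask-to-operator dispatch (the operator names, the Image placeholder, and the empty operator bodies are assumptions for illustration), the pre-configured operator library can be modeled as a lookup table consulted in execution order:

#include <functional>
#include <map>
#include <string>
#include <vector>

struct Image {};  // placeholder for the image / detection object data

using Operator = std::function<void(Image&)>;

int main() {
    // Hypothetical pre-configured operator library.
    const std::map<std::string, Operator> operatorLibrary = {
        {"positioning",       [](Image&) { /* determine the object's position */ }},
        {"color recognition", [](Image&) { /* recognize the object's color */ }},
        {"model matching",    [](Image&) { /* compare against a preset model */ }},
    };
    // Subtasks and their execution order, as determined in S102.
    const std::vector<std::string> executionOrder = {"positioning", "color recognition"};

    Image img;
    for (const auto& subtask : executionOrder)  // S103: call each operator in order
        operatorLibrary.at(subtask)(img);
    return 0;
}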
In one embodiment, calling operators to process the detection object may include any one or more of the following steps:
if the detected object is a graphic code, calling a graphic code scanning operator to scan the detected object;
calling a positioning operator to determine the position of the detection object;
calling a color recognition operator to recognize the color of the detection object;
calling a size identification operator to identify the size of the detection object;
and calling a model matching operator to judge whether the detection object is matched with a preset model.
As described above, operators come in many types. A geometric contour extraction operator may also be called to extract the geometric contour of the detection object; a BLOB analysis operator to perform binary blob analysis of the detection object; a gray histogram generation operator to generate the gray histogram corresponding to the detection object; an edge width operator to calculate the edge width of the detection object; a tolerance determination operator to determine whether the size of the detection object satisfies a tolerance condition; and so on, not listed one by one.
In the embodiment described above, each detection object in the image to be processed is identified after S101; in that embodiment, S103 may include: sequentially calling the operator corresponding to each subtask in the pre-configured operator library, according to the execution order, to process the detection object.
For example, assume two subtasks are determined in S102, with the execution order: subtask A first, then subtask B. Assume subtask A corresponds to three operators (operator 1, operator 2, and operator 3) and subtask B to two (operator 4 and operator 5). In one case, operators 1, 2, and 3 are called to execute subtask A, and only after subtask A completes are operators 4 and 5 called to execute subtask B. In another case, assume that within subtask A operator 1 is called first, then operator 2, and finally operator 3; subtask B may then begin without waiting for subtask A to complete. For example, after operator 1 is called, operator 2 may be called to continue subtask A while operator 4 is called to start subtask B. The calling sequence of the operators can be set according to the actual situation and is not specifically limited.
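The two scheduling options just described amount to two flattened operator call sequences. The sketch below (operator names hypothetical, assuming subtask A uses operators 1-3 and subtask B uses operators 4-5 as in the example) only illustrates the ordering, not real operator logic:

#include <iostream>
#include <string>
#include <vector>

int main() {
    // Option 1: complete subtask A (operators 1-3) before starting subtask B (operators 4-5).
    const std::vector<std::string> strictOrder = {"op1", "op2", "op3", "op4", "op5"};
    // Option 2: subtask B starts (op4) before subtask A has finished (op3 still pending).
    const std::vector<std::string> interleaved = {"op1", "op2", "op4", "op3", "op5"};

    std::cout << "strict:";
    for (const auto& op : strictOrder) std::cout << " " << op;
    std::cout << "\ninterleaved:";
    for (const auto& op : interleaved) std::cout << " " << op;
    std::cout << "\n";  // the actual call order is configurable, as noted above
    return 0;
}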
In one embodiment, after each detection object in the image to be processed is identified, a filtering mode corresponding to each detection object may also be determined, and the detection object is filtered using the determined filtering mode. In that case, S103 may include: sequentially calling the operator corresponding to each subtask in the pre-configured operator library, according to the execution order, to process the filtered detection object.
In this embodiment, a filtering mode suited to each detection object is selected, so a better filtering effect can be achieved.
As an embodiment, the filtering method corresponding to each detection object may be determined according to a preset correspondence relationship between the object type and the filtering method.
There are multiple filtering modes, such as median filtering, amplitude-limiting average filtering, and Gaussian filtering, and their applicable scenarios differ: median filtering is better suited to point-like noise, amplitude-limiting average filtering to particle noise, and Gaussian filtering to Gaussian noise. The filtering mode applicable to each type of detection object can be determined in advance from these characteristics; that is, a correspondence between object type and filtering mode is configured in advance. After a detection object in the image to be processed is identified, the corresponding filtering mode can then be selected according to its type to filter the detection object.
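A minimal sketch of such a pre-configured object-type-to-filtering-mode correspondence follows; the object type names and the empty filter placeholders are assumptions for illustration, and real implementations of median, amplitude-limiting average, and Gaussian filtering would replace the lambda bodies:

#include <functional>
#include <map>
#include <string>

struct Image {};  // placeholder for a detection object's image region

using Filter = std::function<void(Image&)>;

// Apply the filtering mode pre-configured for this object type, if any.
void filterByObjectType(const std::string& objectType, Image& region) {
    static const std::map<std::string, Filter> filterTable = {
        {"graphic code", [](Image&) { /* median filter: point-like noise */ }},
        {"material",     [](Image&) { /* amplitude-limiting average filter: particle noise */ }},
        {"background",   [](Image&) { /* Gaussian filter: Gaussian noise */ }},
    };
    auto it = filterTable.find(objectType);
    if (it != filterTable.end()) it->second(region);  // filter the detection object
}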
As another embodiment, interpretation information corresponding to each filtering mode may be presented, and the filtering mode the user selects for each detection object is then used. The interpretation information may include the advantages and disadvantages of each filtering mode, its applicable scenes, and the like, helping the user select a filtering mode suited to the detection object.
As another embodiment, the filtering result of each filtering mode on the detection object may be presented, and the filtering mode the user selects for each detection object is then used.
In this embodiment, the filtering results of different filtering modes on the detection object can be displayed in the interactive interface with the user, and the user selects the filtering mode according to the displayed filtering results. In one case, the user can adjust the filtering parameters through the interactive interface, and the adjustment result is displayed in the interactive interface in real time, so that the user can conveniently determine a more appropriate filtering mode through visual display.
In one embodiment, the user may also set the output format and timing of the processing results, and may direct the results to different interactive devices. For example, the device may be connected to a computer through a network port or to a PLC (Programmable Logic Controller) through a serial port; the specific communication interface and interactive device are not limited, and the user may configure where the results are sent. After the processing result of the image to be processed is obtained in S103, the result can be sent, in the configured format and at the configured timing, to the interactive device set by the user.
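The user-configurable output described above could be captured by a small configuration structure like the sketch below; the field names, the default values, and the device labels are assumptions for illustration rather than the patent's actual interface:

#include <string>

// Hypothetical result-output configuration set by the user.
struct OutputConfig {
    std::string format = "JSON";  // output format of the processing result
    int timingMs = 0;             // send immediately (0) or after a configured delay
    std::string device = "PLC";   // e.g. "PC" via network port, "PLC" via serial port
};

void sendResult(const std::string& result, const OutputConfig& cfg) {
    // 1. serialize 'result' according to cfg.format;
    // 2. wait until cfg.timingMs has elapsed;
    // 3. write to the communication interface bound to cfg.device
    //    (network port / serial port), omitted in this sketch.
    (void)result;
    (void)cfg;
}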
By applying this embodiment of the present invention, an operator library comprising a plurality of operators is pre-configured in the device; the subtasks corresponding to the image to be processed, and the execution order among them, are determined; and the operator corresponding to each subtask is called in that order to process the image. With this scheme, different subtasks can be configured for an image and executed by the corresponding operators, so the image can be processed in a variety of ways; the same device can therefore process images in different scenes, which improves the versatility of the device.
A specific embodiment is described below with reference to Figs. 2-4. In this embodiment, the execution subject may be an embedded digital industrial smart camera based on a DSP (Digital Signal Processing) chip. As shown in Fig. 2, the camera may include a hardware portion and a software portion:
the hardware part is embedded into the DSP chip, and the hardware part can comprise a three-level architecture: an FPGA (Field-programmable gate Array), a CPU (Central Processing Unit), and a DSP; the FPGA is used for logic processing, the CPU is used for data calculation, and the DSP is used for image processing. The hardware part further includes I2C (Inter-Integrated Circuit), CMOS (Complementary Metal oxide semiconductor) sensor, Ethernet (Ethernet) interface, I/O (Input/Output), RS232 interface (asynchronous transfer standard interface), SDRAM (Synchronous dynamic random Access Memory), FLASH Memory, Micro SD (FLASH Memory card), system clock, and real-time clock.
The software portion comprises the C++ algorithm particles, an operator integration module, a software framework, a communication framework, and a UI (User Interface). The C++ algorithm particles are the operators described above. The operator integration module can be understood as the module that integrates the operators so that they can process the image. The software framework can be understood as the software installed in the camera through which this scheme is implemented. The communication framework can be understood as follows: the camera connects to various interactive devices through different communication interfaces, for example to a computer through a network port or to a PLC through a serial port, as described above. The UI is the interface the camera provides for interacting with the user.
One flow of image processing based on the camera may be as shown in Fig. 3, including:
S301: when the material to be detected is in place, the photoelectric sensor detects it and emits a change signal.
S302: the camera captures an image and transmits it to the software configured in the camera.
For example, the camera may be triggered to capture an image when the material is detected to have reached the specified position. For instance, a photoelectric sensor can be arranged to monitor its photoelectric signal; a change in that signal indicates that the material has reached the specified position, at which point the sensor triggers the digital industrial camera to aim at the material and capture an image.
S303: the software loads the image for preprocessing.
The preprocessing may include ROI detection, with each detected image region treated as a detection object. The user can set the subtasks corresponding to each detection object.
S304: operators perform feature extraction and marking.
An operator library is pre-configured in the camera and may include a plurality of operators, namely the C++ algorithm particles described above: for example, the geometric contour extraction operator, BLOB analysis operator, gray histogram generation operator, edge width operator, color recognition operator, model matching operator, positioning operator, tolerance determination operator, and so on, not listed one by one.
There is a correspondence between operators and subtasks. After the subtasks and their execution order are determined, the corresponding operators can be called to execute them according to this correspondence; that is, the operators are called to process the image.
S305: sort and output the processing result.
The user can also set the output format and timing of the processing results, and can direct the results to different interactive devices. For example, the device may be connected to a computer through a network port or to a PLC (Programmable Logic Controller) through a serial port; the specific communication interface and interactive device are not limited, and the user may configure where the results are sent. After the processing result of the image is obtained, it can be sent, in the configured format and at the configured timing, to the interactive device set by the user.
Optionally, generating the photoelectric detection signal belongs to the image acquisition technique;
the image acquisition technique includes: integrating the camera body, lens, and light source under synchronous control to capture a characteristic image of the target object, the control modes including focusing the lens on the target surface, issuing the trigger signal, and hierarchical control of the light source output.
Optionally, the smart camera capturing an image and transmitting it to the software belongs to the image processing unit;
the image processing unit includes: integrating exposure control of the camera body, controlling the image format type, controlling image acquisition through an electronic shutter, and transmitting the image to the image processing software.
Optionally, the image processing software includes:
region-of-interest integration after image acquisition, detection task addition, detection algorithm setting and parameter editing, and tolerance judgment setting;
specifically: presence/absence detection, color recognition, model matching, and character reading.
Optionally, outputting the content of the detection result signal belongs to the network communication device;
the network communication device includes: an integrated network interface, serial port, and input/output communication interface, supporting soft-trigger and hard-trigger access as well as the output of good/defective product judgment signals and data.
Another flow of image processing based on the camera may be as shown in Fig. 4, including:
S401: start preparation;
S402: the system configured in the camera is initialized, ready to receive a signal;
S403: the camera's image acquisition trigger signal is received;
S404: the light source of the DSP-based embedded digital industrial smart camera is lit;
S405: the camera is adjusted and the shot is taken;
S406: the image processing unit transmits the image from the acquisition unit to the image processing software unit;
S407: the region of interest is integrated after image acquisition;
S408: filtering is performed;
S409: feature extraction is performed;
S410: tolerance judgment is performed and image processing is executed;
S411: the results are sorted;
S412: the results are displayed through an interface;
S413: the results are output to other interactive devices;
S414: the system displays the end state and waits for the next image.
Optionally, receiving the trigger signal belongs to the image acquisition technique;
the image acquisition technique includes: integrating the camera body, lens, and light source under synchronous control to capture a characteristic image of the target object, the control modes including focusing the lens on the target surface, issuing the trigger signal, and hierarchical control of the light source output.
Optionally, transmitting the image from the acquisition unit to the image processing software unit belongs to the image processing unit;
the image processing unit includes: integrating exposure control of the camera body, controlling the image format type, controlling image acquisition through an electronic shutter, and transmitting the image to the image processing software.
Optionally, the image processing unit includes:
region-of-interest integration after image acquisition, detection task addition, detection algorithm setting and parameter editing, and tolerance judgment setting;
specifically: presence/absence detection, color recognition, model matching, and character reading.
Optionally, the data output belongs to the network communication device;
the network communication device includes: an integrated network interface, serial port, and input/output communication interface, supporting soft-trigger and hard-trigger access as well as the output of good/defective product judgment signals and data.
It can be seen from the above embodiments that the DSP-based embedded digital industrial smart camera provided by the present invention uses a DSP, an FPGA, and a CPU, and completes the whole operation through four main parts: the image acquisition technique, the image processing unit, the image processing software, and the network communication device.
Moreover, in this embodiment the user can be guided through step-by-step operation and can modify the filtering parameters in real time, providing a better user experience.
Corresponding to the above method embodiment, an embodiment of the present invention further provides an image processing apparatus, as shown in fig. 5, including:
an obtaining module 501, configured to obtain an image to be processed;
a first determining module 502, configured to determine each subtask corresponding to the image to be processed and an execution order between each subtask;
and the calling module 503 is configured to sequentially call the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence, so as to process the image to be processed.
As an embodiment, the apparatus further comprises:
an identification module (not shown in the figure) for identifying each detection object in the image to be processed;
the first determining module 502 is specifically configured to: respectively determining each subtask corresponding to each detection object; determining an execution order between each subtask;
the calling module 503 is specifically configured to: and sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the detection object.
As an embodiment, the apparatus further comprises: a second determination module and a filtering module (not shown), wherein,
the second determining module is used for determining a filtering mode corresponding to each detection object;
the filtering module is used for filtering the detection object by utilizing the determined filtering mode;
the calling module 503 is specifically configured to:
and sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the filtered detection object.
As an embodiment, the second determining module is specifically configured to:
displaying interpretation information corresponding to each filtering mode; determining a filtering mode selected by a user for each detection object;
or, determining a filtering mode corresponding to each detection object according to a preset corresponding relation between the object type and the filtering mode;
or, displaying the filtering result of each filtering mode on the detection object; and determining the filtering mode selected by the user for each detection object.
As an embodiment, the calling module 503 is configured to perform any one or more of the following steps:
if the detected object is a graphic code, calling a graphic code scanning operator to scan the detected object;
calling a positioning operator to determine the position of the detection object;
calling a color recognition operator to recognize the color of the detection object;
calling a size identification operator to identify the size of the detection object;
and calling a model matching operator to judge whether the detection object is matched with a preset model.
As an embodiment, the first determining module 502 is specifically configured to:
displaying a plurality of first-level subtasks in the interactive interface;
determining a first-level subtask selected by a user as a first-level subtask to be executed;
displaying a next-level subtask of the first-level subtask to be executed;
and determining the next-level subtask selected by the user as the next-level subtask to be executed.
The apparatus of the foregoing embodiment is used to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, which includes a memory 602, a processor 601, and a computer program stored on the memory 602 and capable of running on the processor 601, and when the processor 601 executes the computer program, the image processing method is implemented.
The electronic device may be an image capturing device such as a snapshot machine and a digital industrial camera, or may also be other devices in communication connection with the image capturing device, and is not particularly limited.
In one embodiment, the processor 601 may include: the system comprises a field programmable gate array, a central processing unit and a digital signal processing chip;
the field programmable gate array is used for determining processing logic of the image; calling the central processing unit based on the processing logic to perform feature extraction on the image to obtain feature data; and calling the digital signal processing chip to perform operation processing on the characteristic data.
For example, the electronic device in this embodiment is an embedded digital industrial smart camera based on DSP, and as shown in fig. 2, the hardware part of the camera includes three levels of architectures: FPGA, CPU and DSP; the FPGA is used for logic processing, the CPU is used for data calculation, and the DSP is used for image processing. The three-level architecture can improve the processing speed and accuracy of the device.
In one embodiment, the device includes any one or more of the following communication interfaces: network port, serial port, I/O port. The device can perform data transmission with various interactive devices such as a computer, a PLC, etc. through the communication interfaces, which is not limited specifically.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples; within the idea of the invention, also features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity.
In addition, well known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures for simplicity of illustration and discussion, and so as not to obscure the invention. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the invention, and also in view of the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the present invention is to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that the invention can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
The embodiments of the invention are intended to embrace all such alternatives, modifications and variances that fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements and the like that may be made without departing from the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (14)

1. An image processing method, comprising:
acquiring an image to be processed;
determining each subtask corresponding to the image to be processed and an execution sequence among the subtasks;
and sequentially calling operators corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the image to be processed.
2. The method of claim 1, further comprising, after said acquiring the image to be processed:
identifying each detection object in the image to be processed;
the determining each subtask corresponding to the image to be processed and the execution sequence among each subtask includes:
respectively determining each subtask corresponding to each detection object; determining an execution order between each subtask;
the step of sequentially calling an operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the image to be processed includes:
and sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the detection object.
3. The method of claim 2, further comprising, after said identifying each detected object in the image to be processed:
for each detection object, determining a filtering mode corresponding to the detection object;
filtering the detection object by using the determined filtering mode;
the step of sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the detection object includes:
and sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the filtered detection object.
4. The method according to claim 3, wherein the determining, for each detected object, a filtering manner corresponding to the detected object includes:
displaying interpretation information corresponding to each filtering mode; determining a filtering mode selected by a user for each detection object;
or, determining a filtering mode corresponding to each detection object according to a preset corresponding relation between the object type and the filtering mode;
or, displaying the filtering result of each filtering mode on the detection object; and determining the filtering mode selected by the user for each detection object.
5. The method according to claim 2, wherein the step of sequentially calling the operator corresponding to each subtask to process the detected object includes any one or more of the following steps:
if the detected object is a graphic code, calling a graphic code scanning operator to scan the detected object;
calling a positioning operator to determine the position of the detection object;
calling a color recognition operator to recognize the color of the detection object;
calling a size identification operator to identify the size of the detection object;
and calling a model matching operator to judge whether the detection object is matched with a preset model.
6. The method according to claim 1, wherein the determining each subtask corresponding to the image to be processed and the execution sequence between each subtask comprises:
displaying a plurality of first-level subtasks in the interactive interface;
determining a first-level subtask selected by a user as a first-level subtask to be executed;
displaying a next-level subtask of the first-level subtask to be executed;
and determining the next-level subtask selected by the user as the next-level subtask to be executed.
7. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring an image to be processed;
the first determining module is used for determining each subtask corresponding to the image to be processed and the execution sequence among the subtasks;
and the calling module is used for sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the image to be processed.
8. The apparatus of claim 7, further comprising:
the identification module is used for identifying each detection object in the image to be processed;
the first determining module is specifically configured to: respectively determining each subtask corresponding to each detection object; determining an execution order between each subtask;
the calling module is specifically configured to: and sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the detection object.
9. The apparatus of claim 8, further comprising:
the second determining module is used for determining a filtering mode corresponding to each detection object;
the filtering module is used for filtering the detection object by utilizing the determined filtering mode;
the calling module is specifically configured to:
and sequentially calling the operator corresponding to each subtask in a pre-configured operator library according to the execution sequence to process the filtered detection object.
10. The apparatus of claim 9, wherein the second determining module is specifically configured to:
displaying interpretation information corresponding to each filtering mode; determining a filtering mode selected by a user for each detection object;
or, determining a filtering mode corresponding to each detection object according to a preset corresponding relation between the object type and the filtering mode;
or, displaying the filtering result of each filtering mode on the detection object; and determining the filtering mode selected by the user for each detection object.
11. The apparatus of claim 8, wherein the invoking module is configured to perform any one or more of the following:
if the detected object is a graphic code, calling a graphic code scanning operator to scan the detected object;
calling a positioning operator to determine the position of the detection object;
calling a color recognition operator to recognize the color of the detection object;
calling a size identification operator to identify the size of the detection object;
and calling a model matching operator to judge whether the detection object is matched with a preset model.
12. The apparatus of claim 7, wherein the first determining module is specifically configured to:
displaying a plurality of first-level subtasks in the interactive interface;
determining a first-level subtask selected by a user as a first-level subtask to be executed;
displaying a next-level subtask of the first-level subtask to be executed;
and determining the next-level subtask selected by the user as the next-level subtask to be executed.
13. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 6 when executing the program.
14. The apparatus of claim 13, wherein the processor comprises: the system comprises a field programmable gate array, a central processing unit and a digital signal processing chip;
the field programmable gate array is used for determining processing logic of the image; calling the central processing unit based on the processing logic to perform feature extraction on the image to obtain feature data; and calling the digital signal processing chip to perform operation processing on the characteristic data.
CN201910667544.6A 2018-07-27 2019-07-23 Image processing method, device and equipment Pending CN110780982A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2018108474841 2018-07-27
CN201810847484 2018-07-27

Publications (1)

Publication Number Publication Date
CN110780982A (en) 2020-02-11

Family

ID=69383899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910667544.6A Pending CN110780982A (en) 2018-07-27 2019-07-23 Image processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN110780982A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367643A (en) * 2020-03-09 2020-07-03 北京易华录信息技术股份有限公司 Algorithm scheduling system, method and device
CN111899149A (en) * 2020-07-09 2020-11-06 浙江大华技术股份有限公司 Image processing method and device based on operator fusion and storage medium
CN112783614A (en) * 2021-01-20 2021-05-11 北京百度网讯科技有限公司 Object processing method, device, equipment, storage medium and program product

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001169275A (en) * 1999-12-08 2001-06-22 Nagoya Electric Works Co Ltd Method for integrating image processing by multi-task scheduling
RU36907U1 (en) * 2003-11-14 2004-03-27 Общество с ограниченной ответственностью "Информационные исследования" ("Изучение, Оценивание, Распознавание") AUTOMATED KNOWLEDGE BASE SYSTEM FOR PROCESSING, ANALYSIS AND RECOGNITION OF IMAGES
US20100205606A1 (en) * 2009-02-12 2010-08-12 Panzer Adi System and method for executing a complex task by sub-tasks
US9547929B1 (en) * 2011-04-25 2017-01-17 Honeywell International Inc. User interface device for adaptive systems
CN102274045A (en) * 2011-05-27 2011-12-14 华南理工大学 Parallel real-time medical ultrasonic wide-scene imaging method
CN102835974A (en) * 2012-08-23 2012-12-26 华南理工大学 Method for medical ultrasound three-dimensional imaging based on parallel computer
CN103235974A (en) * 2013-04-25 2013-08-07 中国科学院地理科学与资源研究所 Method for improving processing efficiency of massive spatial data
CN105900064A (en) * 2014-11-19 2016-08-24 华为技术有限公司 Method and apparatus for scheduling data flow task
CN105677812A (en) * 2015-12-31 2016-06-15 华为技术有限公司 Method and device for querying data
WO2018085778A1 (en) * 2016-11-04 2018-05-11 Google Llc Unsupervised detection of intermediate reinforcement learning goals
CN108108819A (en) * 2017-12-15 2018-06-01 清华大学 A kind of big data analysis system and method transboundary

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
方惠蓉 (Fang Huirong): "FPGA在边缘检测中的应用" [Application of FPGA in Edge Detection], 信息通信 [Information & Communications], no. 01 *

Similar Documents

Publication Publication Date Title
JP6417702B2 (en) Image processing apparatus, image processing method, and image processing program
CN109727275B (en) Object detection method, device, system and computer readable storage medium
CN110780982A (en) Image processing method, device and equipment
CN112347887B (en) Object detection method, object detection device and electronic equipment
US8666145B2 (en) System and method for identifying a region of interest in a digital image
TW201432621A (en) Method and apparatus for image enhancement and edge verification using at least one additional image
CN107710280B (en) Object visualization method
CN109116129B (en) Terminal detection method, detection device, system and storage medium
CN110400315A (en) A kind of defect inspection method, apparatus and system
CN104486552A (en) Method and electronic device for obtaining images
CN113744348A (en) Parameter calibration method and device and radar vision fusion detection equipment
US10990778B2 (en) Apparatus and method for recognizing barcode based on image detection
CN111008954A (en) Information processing method and device, electronic equipment and storage medium
CN115619787B (en) UV glue defect detection method, system, equipment and medium
CN111630568A (en) Electronic device and control method thereof
CN107577973B (en) image display method, image identification method and equipment
CN113283439B (en) Intelligent counting method, device and system based on image recognition
US10748019B2 (en) Image processing method and electronic apparatus for foreground image extraction
CN103945124A (en) Control method for intelligent camera
CN103942523A (en) Sunshine scene recognition method and device
CN108769521B (en) Photographing method, mobile terminal and computer readable storage medium
US20170006212A1 (en) Device, system and method for multi-point focus
CN116017129A (en) Method, device, system, equipment and medium for adjusting angle of light supplementing lamp
CN107527011B (en) Non-contact skin resistance change trend detection method, device and equipment
US20210074010A1 (en) Image-Processing Method and Electronic Device

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination