CN113038266B - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN113038266B
Authority
CN
China
Prior art keywords
image
information
target
target information
rendered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110245315.2A
Other languages
Chinese (zh)
Other versions
CN113038266A (en)
Inventor
开祥钳
孙有新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Zhidong Seiko Electronic Co ltd
Original Assignee
Qingdao Zhidong Seiko Electronic Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Zhidong Seiko Electronic Co ltd filed Critical Qingdao Zhidong Seiko Electronic Co ltd
Priority to CN202110245315.2A
Publication of CN113038266A
Application granted
Publication of CN113038266B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The application provides an image processing method, an image processing device and electronic equipment, wherein the method comprises the following steps: acquiring an image to be processed; locating a target area of target information contained in the image, and extracting the target information from the target area; rendering the target information to the image, and outputting the rendered image. According to the embodiments of the application, the display intuitiveness of the information contained in the image can be improved.

Description

Image processing method and device and electronic equipment
Technical Field
The present application relates to the field of machine vision, and in particular, to an image processing method and apparatus, and an electronic device.
Background
In manufacturing and other automation-related industries, processing images by machine vision to assist industrial operations has wide and important application value. In the prior art in the field of machine vision, however, the processed image usually lacks sufficient display intuitiveness, which hinders subsequent applications of the image processing results.
Disclosure of Invention
An object of the present application is to provide an image processing method, an image processing apparatus, and an electronic device, which can improve the display intuitiveness of information included in an image.
According to an aspect of an embodiment of the present application, an image processing method is disclosed, the method including:
acquiring an image to be processed;
positioning a target area of target information contained in the image, and extracting the target information from the target area;
rendering the target information to the image, and outputting the rendered image.
According to an aspect of an embodiment of the present application, there is disclosed an image processing apparatus, the apparatus including:
an acquisition module configured to acquire an image to be processed;
the positioning extraction module is configured to position a target area of target information contained in the image and extract the target information from the target area;
and the rendering module is configured to render the target information to the image and output the rendered image.
In an exemplary embodiment of the present application, the apparatus is configured to:
determining an information extraction template corresponding to the image;
and positioning the target area based on the matching result of the image and the information extraction template.
In an exemplary embodiment of the present application, the apparatus is configured to:
carrying out contour detection on the image to obtain the contour of the region where the information contained in the image is located;
and determining an information extraction template corresponding to the image based on the outline.
In an exemplary embodiment of the present application, the apparatus is configured to:
determining a length ratio between adjacent side lengths of the contour;
and taking the information extraction template corresponding to the length proportion as the information extraction template corresponding to the image.
In an exemplary embodiment of the present application, the apparatus is configured to: if the target information is not extracted from the target area, rendering preset identification information to the image, and outputting the rendered image.
In an exemplary embodiment of the present application, the apparatus is configured to:
determining state information of a target object described by the image based on the target information;
rendering the target information and the state information to the image, and outputting the rendered image.
In an exemplary embodiment of the present application, the apparatus is configured to:
acquiring a plurality of images to be processed;
performing image splicing on the plurality of images to obtain a spliced image consisting of the plurality of images;
respectively positioning a target area of target information contained in each image in the spliced image, and extracting the target information from the target area;
rendering the target information to the stitched image, and outputting the rendered stitched image.
In an exemplary embodiment of the present application, the apparatus is configured to:
determining whether the target object described by the image passes a preset test or not based on the target information;
and if the target object does not pass the test, secondarily rendering the target information on the rendered image, and outputting a secondarily rendered image.
According to an aspect of an embodiment of the present application, an electronic device is disclosed, including: a memory storing computer readable instructions; a processor reading computer readable instructions stored by the memory to perform the method of any of the preceding claims.
According to an aspect of embodiments of the present application, a computer program medium is disclosed, having computer readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the method of any of the preceding claims.
According to an aspect of embodiments herein, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.
In the embodiments of the application, the target information extracted from the target area of the image is rendered back onto the image, so that the target information is displayed more prominently in the image, improving the display intuitiveness of the target information.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
Fig. 1 shows a flow chart of an image processing method according to an embodiment of the present application.
Fig. 2 shows a flow chart of an image processing method according to an embodiment of the application.
FIG. 3 shows a schematic diagram of a stitched image according to an embodiment of the present application.
FIG. 4 is a schematic diagram illustrating the stitched image of FIG. 3 after being rendered according to an embodiment of the present application.
FIG. 5 illustrates a rendered schematic view of the stitched image of FIG. 3 according to an embodiment of the present application.
FIG. 6 illustrates a rendered schematic view of the stitched image of FIG. 3 according to an embodiment of the present application.
FIG. 7 illustrates a rendered schematic view of the stitched image of FIG. 3 according to an embodiment of the present application.
FIG. 8 shows a flow diagram for image processing of a product under production manufacturing according to an embodiment of the present application.
Fig. 9 shows a block diagram of an image processing apparatus according to an embodiment of the present application.
FIG. 10 shows a hardware diagram of an electronic device according to an embodiment of the application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present application and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the present application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The application provides an image processing method which is mainly used for processing images in the field of machine vision so as to improve the display intuitiveness of information contained in the images.
Fig. 1 shows a flow chart of an image processing method according to an embodiment of the present application.
Referring to fig. 1, in this embodiment, after an image to be processed is acquired, a target area of target information included in the image is located, and the target information is extracted from the target area; the target information is then rendered to the image, and the rendered image is output. After rendering, the target information is displayed more intuitively.
In one embodiment, the target information is rendered by increasing the area occupied by the target information in the image.
In one embodiment, the target information is rendered by highlighting the target information.
In one embodiment, the target information is rendered by applying a deformation process to the target information (e.g., when the target information is text information, the text is tilted).
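As a concrete illustration of these rendering modes, the following is a minimal sketch assuming OpenCV; the region coordinates, colors, and the helper name render_target_info are illustrative assumptions, not details given in the patent.

```python
# Hedged sketch: rendering extracted target information back onto the
# image. Enlarging the text increases the area it occupies; the
# rectangle highlights the target area. All parameters are assumptions.
import cv2

def render_target_info(image, text, region, enlarge=2.0):
    """Draw `text` near `region` (x, y, w, h), enlarged and highlighted."""
    x, y, w, h = region
    # Highlight the target area so it stands out in the rendered image.
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # Enlarge the text relative to a base font scale to increase the
    # area occupied by the target information in the image.
    cv2.putText(image, text, (x, max(y - 10, 20)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6 * enlarge,
                (0, 0, 255), 2, cv2.LINE_AA)
    return image

img = cv2.imread("product.png")
img = render_target_info(img, "SN: 11111", (40, 60, 200, 80))
cv2.imwrite("product_rendered.png", img)
```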
In one embodiment, the image processing method proposed in this application is performed by VisionPro after secondary development, where VisionPro is a machine vision processing component.

Specifically, in this embodiment, VisionPro is secondarily developed on the basis of its open interface. Before the secondary development, VisionPro could only maintain and display the captured original image; after the secondary development, the target information in the captured original image is extracted by executing the image processing method provided by this application, the target information is rendered to the original image in real time, and the rendered image is output and displayed in real time.
It should be noted that the embodiment is only an example and should not limit the function and the application scope of the present application. Specifically, the image processing method provided by the application can also be executed by other machine vision processing components after secondary development.
In one embodiment, the target object described by the image is tested before the image processing, and a test result indicating whether the target object passes the test is obtained. During the image processing, an image of the target object is captured, the target information in the image is extracted, and the test result of the target object is then retrieved based on the target information, so as to determine whether the target object passes the test.
If the target object does not pass the test, the target information is rendered a second time on top of the once-rendered image, and the secondary rendered image is output.

The advantage of this embodiment is that the secondary rendering further improves the display intuitiveness of the target information in images of target objects that failed the test, so that such target objects can be handled appropriately in subsequent processing.

In one embodiment, the first rendering of the target information differs from the second rendering. For example: the first rendering increases the area occupied by the target information in the image, and the second rendering highlights the target information, so that the secondary rendered image shows the target information both enlarged and highlighted.
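Continuing the earlier sketch, a secondary rendering pass might be implemented as follows; the test-result flag and the thick red border are assumptions chosen only to make the second pass visually distinct.

```python
# Hedged sketch: a failed test triggers a second, visually distinct
# rendering pass on the already-rendered image.
import cv2

def secondary_render(rendered, region):
    x, y, w, h = region
    # A thick red border, distinct from the first rendering pass.
    cv2.rectangle(rendered, (x - 6, y - 6), (x + w + 6, y + h + 6),
                  (0, 0, 255), 4)
    return rendered

passed_test = False            # would come from the test-result lookup
img = cv2.imread("product_rendered.png")
if not passed_test:
    img = secondary_render(img, (40, 60, 200, 80))
    cv2.imwrite("product_rendered_twice.png", img)
```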
Fig. 2 shows a flow chart of an image processing method according to an embodiment of the application.
Referring to fig. 2, in this embodiment, products are transported on a production line. During transportation, the MES (Manufacturing Execution System) production management system tests each product (for example, performs a wireless test to determine whether the product's wireless communication function is normal) and stores the test result. While the MES system tests the products, the production line can remain in a transport state throughout, without stopping.
After the MES test, the vision system captures an image of a product, extracts target information from the image, renders the target information to the image, and outputs the rendered image. Based on the extracted target information, the vision system also determines from the MES system whether the product passes the test. For example: based on the extracted SN (Serial Number) information, the vision system determines from the MES system whether the product corresponding to that SN information passes the test.

If the product passes the test, the packaging system confirms the target information of the product from the rendered image output by the vision system, confirms that the product passes the test, and packages the product so that it can leave the factory.

If the product does not pass the test, the vision system performs secondary rendering of the target information on the rendered image and outputs the secondary rendered image. The packaging system confirms the target information of the product from the secondary rendered image output by the vision system, confirms that the product does not pass the test, and rejects the product so that it is neither packaged nor shipped.
In one embodiment, multiple images to be processed are acquired. In this case, the multiple images are stitched together to expand the field of view, resulting in a stitched image composed of the multiple images; the target area of each image to be processed is located in the stitched image, and the target information of the corresponding image is extracted from each target area; the extracted target information is then rendered to the stitched image, and the rendered stitched image is output.
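A minimal sketch of this stitching step follows, assuming OpenCV and six equally sized frames arranged in a 2×3 grid; the file names and grid layout are assumptions, and real mosaicing may need resizing or feature-based alignment.

```python
# Hedged sketch: stitch six equally sized frames into a 2x3 mosaic and
# record each frame's offset so per-frame target areas can be mapped
# into stitched-image coordinates.
import cv2

paths = [f"frame_{i}.png" for i in range(6)]     # hypothetical file names
frames = [cv2.imread(p) for p in paths]

rows = [cv2.hconcat(frames[0:3]), cv2.hconcat(frames[3:6])]
stitched = cv2.vconcat(rows)

h, w = frames[0].shape[:2]
# Offset of each frame inside the stitched image, row-major order.
offsets = [(c * w, r * h) for r in range(2) for c in range(3)]

cv2.imwrite("stitched.png", stitched)
```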
FIG. 3 shows a schematic diagram of a stitched image according to an embodiment of the present application.
Referring to fig. 3, in this embodiment, 6 of the acquired images to be processed are stitched, resulting in a stitched image whose field of view contains the 6 images. Each image mainly comprises three regions: a two-dimensional code region where two-dimensional code information is located, an SN region where SN information is located, and other regions where other information (for example, a logo) is located.
In this embodiment, after the stitched image is obtained, the target area of each image is located, and the target information of the corresponding image is extracted from each target area; the extracted target information is then rendered to the stitched image, and the rendered stitched image is output.
In one embodiment, the extracted target information is rendered to the stitched image by rendering each piece of target information to the region of the image from which it was extracted.

An advantage of this embodiment is that rendering each piece of extracted target information to the region of its corresponding image preserves the visual association between the extracted information and the original image while improving the display intuitiveness of the target information.
FIG. 4 is a schematic diagram illustrating the stitched image of FIG. 3 after being rendered according to an embodiment of the present application.
Referring to fig. 4, in this embodiment, the target area is the two-dimensional code area, and the target information is the two-dimensional code information contained in that area. The two-dimensional code information is website information corresponding to the target object described by the corresponding image (for example, the download address of the product manual for the target object, or the address of a usage tutorial for the target object).
The two-dimensional code information of the corresponding images is extracted from the 6 target areas in the stitched image, and each of the 6 extracted pieces of two-dimensional code information is rendered to the area where its corresponding image is located: the two-dimensional code information "www.address1" is rendered to the area of the image containing that information; the two-dimensional code information "www.address2" is rendered to the area of the image containing that information; and so on for the remaining images.
FIG. 5 is a schematic diagram illustrating the stitched image of FIG. 3 after being rendered according to an embodiment of the present application.
Referring to fig. 5, in this embodiment, the target region is an SN region, and the target information is SN information included in the SN region.
The SN information of the corresponding images is extracted from the 6 target areas in the stitched image, and each of the 6 extracted pieces of SN information is rendered to the area where its corresponding image is located: the SN information "11111" is rendered to the area of the image labeled "SN: 11111"; the SN information "11112" is rendered to the area of the image labeled "SN: 11112"; the rendering of the other SN information is similar and is not repeated here.
In one embodiment, the extracted target information is rendered to the stitched image by rendering the target information extracted from the stitched image to an area outside all of the images.

An advantage of this embodiment is that rendering the extracted target information to an area outside the images improves display intuitiveness while ensuring that the rendering does not interfere with the original image information, and allows the extracted target information to be displayed uniformly.
FIG. 6 is a schematic diagram illustrating the stitched image of FIG. 3 after being rendered according to an embodiment of the present application.
Referring to fig. 6, the target area is an SN area, and the target information is SN information included in the SN area.
The SN information of the corresponding images is extracted from the 6 target areas in the stitched image, and the 6 extracted pieces of SN information are rendered to an area outside any of the images: the 6 pieces of SN information "11111", "11112", etc. are rendered to the blank area at the side edge of the stitched image. Further, the 6 images are ordered from left to right and from top to bottom, and the 6 extracted pieces of SN information are rendered to the blank area in the order of their corresponding images.
In one embodiment, the target area of the target information in the image is located by an information extraction template.
In this embodiment, the information extraction template describes at least the position distribution of the target information within the template. After an image to be processed is acquired, the information extraction template corresponding to the image is determined; the image is matched against the information extraction template; and the position distribution of the target information in the image is determined from the matching result, thereby locating the target area of the target information in the image.
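One plausible reduction of this step is plain template matching, sketched below with OpenCV; the template image, threshold, and file names are assumptions rather than details given in the patent.

```python
# Hedged sketch: locate the target area by matching the image against a
# template of the region that holds the target information.
import cv2

image = cv2.imread("product.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("sn_region_template.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

if score >= 0.8:                        # assumed match threshold
    th, tw = template.shape
    target_area = (top_left[0], top_left[1], tw, th)   # (x, y, w, h)
```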
In one embodiment, the images to be processed have a single fixed specification. A corresponding fixed information extraction template is preset for that specification, so that the target area of the target information in the image is located through the same fixed information extraction template each time.
In one embodiment, the specification of the images to be processed is variable. For images of variable specification, the information extraction template corresponding to each image is determined by contour detection.
In this embodiment, corresponding information extraction templates are set in advance for images of various specifications, respectively. After the image to be processed is obtained, carrying out contour detection on the image to obtain the contour of the region where the information contained in the image is located. The contours of images with different specifications also have different characteristics. Thus, based on the processing of the contour, the information extraction template corresponding to the image is determined.
An advantage of this embodiment is that, in practical applications, the specification of an image is typically associated with the type of target object it describes. By setting multiple information extraction templates, image processing becomes compatible with multiple types of target objects. For example: in a production line that produces products of multiple models, the specification of the captured product image differs from model to model. By setting a corresponding information extraction template for the product image specification of each model, image processing of the products can be carried out quickly and without interruption, even if the production line produces several models simultaneously or switches the model being produced midway.
In one embodiment, the information extraction template is determined by the length ratio between adjacent side lengths of the contour.
In this embodiment, corresponding information extraction templates are set in advance for images of various specifications, respectively. And the length proportion between the adjacent side lengths of the outlines of the images with various specifications is predetermined, so that the corresponding relation between the length proportion between the adjacent side lengths of the outlines and the information extraction template is determined.
After an image to be processed is acquired and its contour is obtained, each side of the contour is located, and the length ratio between adjacent side lengths is determined. The information extraction template corresponding to that length ratio is then taken as the information extraction template corresponding to the image.
In one embodiment, an information extraction template A is preset for images of specification A, and an information extraction template B for images of specification B. The length ratio between adjacent side lengths of the contour of specification A is predetermined to be 4:3, and that of specification B to be 16:9.

After an image to be processed is acquired and its contour is obtained, each side of the contour is located, and the length ratio between adjacent side lengths is determined. If the length ratio between adjacent side lengths of the contour of the image is 4:3, template A is taken as the information extraction template corresponding to the image; if the ratio is 16:9, template B is taken as the information extraction template corresponding to the image.
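A sketch of this ratio-based template selection follows; the 4:3 and 16:9 values mirror the example above, and the tolerance and helper name pick_template are assumptions.

```python
# Hedged sketch: choose an information extraction template from the
# aspect ratio of the contour's rotated bounding rectangle.
import cv2

def pick_template(contour, templates, tol=0.1):
    """templates maps ratio (long side / short side) -> template name."""
    (_, (w, h), _) = cv2.minAreaRect(contour)   # rotated bounding box
    ratio = max(w, h) / max(min(w, h), 1e-6)
    for ref_ratio, name in templates.items():
        if abs(ratio - ref_ratio) <= tol * ref_ratio:
            return name
    return None                                  # no matching template

templates = {4 / 3: "template_A", 16 / 9: "template_B"}
```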
In one embodiment, the information extraction template is determined by a scaling process and an alignment process of the contour.
In this embodiment, corresponding information extraction templates are set in advance for images of various specifications, respectively.
After an image to be processed is acquired and its contour is obtained, the contour is matched in the following loop: the contour is scaled by a preset proportion; the scaled contour is aligned with the contour of each information extraction template; and the matching degree between the scaled contour and each information extraction template is determined. If the matching degree with some information extraction template is greater than or equal to a preset threshold, that information extraction template is determined to be the one corresponding to the image, and the loop stops. If the matching degree with every information extraction template is smaller than the preset threshold, the contour is scaled by another preset proportion, and the alignment and matching-degree determination are repeated. The larger the matching degree, the higher the overlap between the scaled contour and the contour of the information extraction template.
In one embodiment, the preset proportions by which the contour is scaled include 0.8, 0.9, 1.0 and 1.2.
In one embodiment, the aligning process of the scaled outline with the outline of each information extraction template includes: translation processing and rotation processing.
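Under the assumptions that the matching degree is the overlap (IoU) of rasterized contour masks and that alignment is reduced to centroid translation (rotation is omitted here for brevity), the loop might look like the sketch below; all names and the 0.9 threshold are illustrative.

```python
# Hedged sketch of the scale-align-match loop over preset proportions.
import cv2
import numpy as np

def match_degree(contour, tpl_contour, shape=(512, 512)):
    """Matching degree as IoU of the two rasterised contour masks."""
    a = np.zeros(shape, np.uint8)
    b = np.zeros(shape, np.uint8)
    cv2.drawContours(a, [contour], -1, 255, -1)
    cv2.drawContours(b, [tpl_contour], -1, 255, -1)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / max(union, 1)

def find_template(contour, tpl_contours, threshold=0.9):
    cx, cy = contour.reshape(-1, 2).mean(axis=0)
    for scale in (0.8, 0.9, 1.0, 1.2):            # preset proportions
        scaled = (contour.reshape(-1, 1, 2) - (cx, cy)) * scale
        for name, tpl in tpl_contours.items():
            tx, ty = tpl.reshape(-1, 2).mean(axis=0)
            moved = (scaled + (tx, ty)).astype(np.int32)  # align centroids
            if match_degree(moved, tpl) >= threshold:
                return name                        # match found: stop
    return None                                    # fall back to retry
```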
In an embodiment, if the target information cannot be extracted from the target area, preset identification information is rendered to the image, and the rendered image is output. The identification information indicates that the target information could not be successfully extracted from the target area of the image.
The embodiment has the advantage that the image which needs to be recaptured or additionally processed can be intuitively displayed by rendering the identification information to the image from which the target information is not extracted.
In an embodiment, if the SN information cannot be extracted from the target region of an image, the identification information "NoRead" is rendered to the image, and the rendered image bearing "NoRead" is output to show that the SN information could not be successfully extracted from the target region of that image.
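A minimal sketch of this fallback, reusing OpenCV as above; the decoded SN is passed in rather than produced here, since the patent does not specify a particular decoder.

```python
# Hedged sketch: render the SN if extracted, otherwise the preset
# "NoRead" identification information.
import cv2

def render_sn_or_noread(image, sn, anchor):
    label = sn if sn else "NoRead"      # preset identification info
    cv2.putText(image, label, anchor, cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (0, 0, 255), 2, cv2.LINE_AA)
    return image
```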
FIG. 7 is a schematic diagram illustrating the stitched image of FIG. 3 after being rendered according to an embodiment of the present application.
Referring to fig. 7, the stitched image is formed by stitching 6 images; the target area is an SN area, and the target information is SN information included in the SN area.
The 6 images are ordered from left to right and from top to bottom, and the SN information is extracted from the respective target areas of the 6 images, with the following results: the SN information of the 2nd image and of the 6th image cannot be successfully extracted. The extracted SN information of the 1st, 3rd, 4th and 5th images is rendered to the areas where the corresponding images are located, and "NoRead" is rendered to the areas where the 2nd image and the 6th image are located.
In one embodiment, after the target information is extracted from the target area, the state information of the target object described by the image is determined based on the target information; the target information and the state information are then rendered to the image, and the rendered image is output.

An advantage of this embodiment is that rendering the state information to the image allows the rendered image to intuitively show the state of the target object.
In one embodiment, the state information of the target object is used to describe a process node where the target object is located in the production process.
In one embodiment, the target object in the image is a device that is transported on a production line and undergoes further production processes during transportation. A machine vision sensor (such as a camera) captures an image of the device on the production line, and the SN information in the image is extracted; the SN information is then sent to a third-party management system (such as an MES production management system), so that the third-party management system determines, according to the SN information, the process node at which the device currently sits in the production flow; the state information returned by the third-party management system, describing that process node, is then acquired; finally, the SN information and the state information are rendered to the image, and the rendered image is output.
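The round trip to the third-party system might look like the following sketch; the HTTP endpoint, payload, and response fields are invented for illustration, since the patent does not describe the interface.

```python
# Hedged sketch: query a hypothetical third-party system for the process
# node of a device identified by its SN. Endpoint and fields are assumed.
import requests

def query_process_node(sn, base_url="http://mes.example.local/api"):
    resp = requests.get(f"{base_url}/process-node",
                        params={"sn": sn}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("node", "unknown")    # assumed response field

state = query_process_node("11111")
# The SN and returned state would then be rendered onto the image, e.g.
# render_target_info(img, f"SN: 11111 | {state}", region).
```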
In one embodiment, the status information of the target object is used to describe whether the target object has been tested.
In one embodiment, the target information and the state information are rendered together to the same image, and a rendered image bearing both is output. The rendered image thus shows the target information and the state of the target object simultaneously and intuitively.

In an embodiment, the target information and the state information are rendered to two copies of the same image, and the image rendered with the target information and the image rendered with the state information are output separately. The rendered images thus display the target information and the state of the target object independently and intuitively, avoiding interference between the two.
FIG. 8 shows a flow diagram of image processing of a product under production manufacturing according to an embodiment of the present application.
Referring to FIG. 8, in this embodiment, the vision system captures an image of a product on the production line; the product model of the product in the image is determined by processing the captured image; a matching template is loaded according to the product model, and the loaded template is matched against the captured image; the coordinates and contour of the product in the captured image are then located according to the matching result, and the Region of Interest (ROI) in the captured image is determined. To enlarge the image field of view, multiple captured product images can be stitched into a stitched image; the loaded matching template is matched against the stitched image, and the ROIs in the stitched image are determined.
After the ROI is determined, SN information of the product is read from the ROI.
If the SN information in an ROI is not read successfully, image processing (for example, enhancing or sharpening the image) and parameter adjustment (for example, adjusting the scaling of the ROI or its search angle) are applied to the ROI, and the SN information is read again. If the SN information still cannot be read, the identifier "NoRead", describing the unsuccessful read, is saved in a dictionary for subsequent rendering.

If the SN information in the ROI is read successfully, the SN information is saved in the dictionary for subsequent rendering, and it is determined whether all ROIs have been read. If not, SN information continues to be read from the remaining ROIs until all ROIs have been read.
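This read-retry loop over the ROIs might be organized as in the sketch below; read_sn stands in for whatever decoder is actually used, and the sharpening kernel is one example of the retry-time image adjustment.

```python
# Hedged sketch: read SN from every ROI, retry once after enhancement,
# and collect results (or "NoRead") in a dictionary for rendering.
import cv2
import numpy as np

def read_sn(roi_img):
    """Placeholder for the actual SN decoder (OCR / barcode library)."""
    return None

def enhance(roi_img):
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
    return cv2.filter2D(roi_img, -1, kernel)    # simple sharpening

def read_all_rois(rois):
    results = {}                                # dictionary for rendering
    for idx, roi_img in enumerate(rois):
        sn = read_sn(roi_img)
        if not sn:                              # retry after enhancement
            sn = read_sn(enhance(roi_img))
        results[idx] = sn if sn else "NoRead"
    return results
```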
After all ROIs are read, the information stored in the dictionary is formatted and uploaded to the MES. The MES determines whether the received SN information is valid.

For invalid SN information, the MES feeds fault information (e.g., "NG" information) back to the vision system.

For valid SN information, the MES further judges whether the product corresponding to the SN information has passed the stack. Here, passing the stack describes whether the product passes the test: a product that passes the stack has passed the test, and a product that does not pass the stack has failed it. If the product corresponding to the SN information has not passed the stack, the MES feeds fault information back to the vision system; if it has, the MES feeds qualification information back to the vision system.

According to the information fed back by the MES, the vision system renders either the information read from the dictionary or the fault information fed back by the MES onto the image, and outputs the image.
It should be noted that the embodiment is only an example and should not limit the function and the application scope of the present application.
Fig. 9 shows an image processing apparatus according to an embodiment of the present application, the apparatus including:
an acquisition module 110 configured to acquire an image to be processed;
a positioning extraction module 120, configured to position a target area of target information included in the image, and extract the target information from the target area;
a rendering module 130 configured to render the target information to the image and output the rendered image.
In an exemplary embodiment of the present application, the apparatus is configured to:
determining an information extraction template corresponding to the image;
and positioning the target area based on the matching result of the image and the information extraction template.
In an exemplary embodiment of the present application, the apparatus is configured to:
carrying out contour detection on the image to obtain the contour of the region where the information contained in the image is located;
and determining an information extraction template corresponding to the image based on the outline.
In an exemplary embodiment of the present application, the apparatus is configured to:
determining a length ratio between adjacent side lengths of the contour;
and taking the information extraction template corresponding to the length proportion as the information extraction template corresponding to the image.
In an exemplary embodiment of the present application, the apparatus is configured to: if the target information is not extracted from the target area, rendering preset identification information to the image, and outputting the rendered image.
In an exemplary embodiment of the present application, the apparatus is configured to:
determining state information of a target object described by the image based on the target information;
rendering the target information and the state information to the image, and outputting the rendered image.
In an exemplary embodiment of the present application, the apparatus is configured to:
acquiring a plurality of images to be processed;
performing image splicing on the plurality of images to obtain a spliced image consisting of the plurality of images;
respectively positioning a target area of target information contained in each image in the spliced image, and extracting the target information from the target area;
rendering the target information to the stitched image, and outputting the rendered stitched image.
In an exemplary embodiment of the present application, the apparatus is configured to:
determining whether the target object described by the image passes a preset test or not based on the target information;
and if the target object does not pass the test, secondarily rendering the target information in the rendered image, and outputting a secondarily rendered image.
An electronic apparatus 20 according to an embodiment of the present application is described below with reference to fig. 10. The electronic device 20 shown in fig. 10 is only an example, and should not bring any limitation to the functions and the use range of the embodiment of the present application.
As shown in fig. 10, the electronic device 20 is in the form of a general purpose computing device. The components of the electronic device 20 may include, but are not limited to: the at least one processing unit 210, the at least one memory unit 220, and a bus 230 connecting the various system components (including the memory unit 220 and the processing unit 210).
Wherein the storage unit stores program code executable by the processing unit 210 to cause the processing unit 210 to perform the steps according to various exemplary embodiments of the present invention described in the description part of the above exemplary methods of the present specification. For example, the processing unit 210 may perform various steps as shown in fig. 1.
The storage unit 220 may include readable media in the form of volatile storage units, such as a random access memory unit (RAM) 2201 and/or a cache memory unit 2202, and may further include a read only memory unit (ROM) 2203.
The storage unit 220 may also include a program/utility 2204 having a set (at least one) of program modules 2205, such program modules 2205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment.
Bus 230 may be any bus representing one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 20 may also communicate with one or more external devices 300 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the data processing electronic device 20, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 20 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 250. An input/output (I/O) interface 250 is connected to the display unit 240. Also, the electronic device 20 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 260. As shown, the network adapter 260 communicates with the other modules of the electronic device 20 over the bus 230. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 20, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, to name a few.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present application.
In an exemplary embodiment of the present application, there is also provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.
According to an embodiment of the present application, there is also provided a program product for implementing the method in the above method embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods herein are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

Claims (8)

1. An image processing method, characterized in that the method comprises:
acquiring an image to be processed, wherein the image is an image of a product captured after an MES system is tested;
positioning a target area of target information contained in the image, and extracting the target information from the target area;
rendering the target information to the image, outputting the rendered image, determining whether the product passes the test or not from the MES system according to the target information, if the product passes the test, sending the rendered image to a packaging system so that the packaging system packages the product, if the product does not pass the test, performing secondary rendering on the target information on the rendered image, outputting the secondary rendered image, and sending the secondary rendered image to the packaging system so that the packaging system rejects the product;
determining state information of a target object described by the image based on the target information;
rendering the target information and the state information to the image, and outputting the rendered image.
2. The method of claim 1, wherein locating a target region of target information contained in the image comprises:
determining an information extraction template corresponding to the image;
and positioning the target area based on the matching result of the image and the information extraction template.
3. The method of claim 2, wherein determining the information extraction template corresponding to the image comprises:
carrying out contour detection on the image to obtain a contour of an area where information contained in the image is located;
and determining an information extraction template corresponding to the image based on the outline.
4. The method of claim 3, wherein determining, based on the contour, an information extraction template to which the image corresponds comprises:
determining a length ratio between adjacent side lengths of the contour;
and taking the information extraction template corresponding to the length proportion as the information extraction template corresponding to the image.
5. The method of claim 1, further comprising:
and if the target information is not extracted from the target area, rendering preset identification information to the image, and outputting the rendered image.
6. The method of claim 1, wherein acquiring the image to be processed comprises: acquiring a plurality of images to be processed;
positioning a target area of target information contained in the image, and extracting the target information from the target area, including:
performing image splicing on the plurality of images to obtain a spliced image consisting of the plurality of images;
respectively positioning a target area of target information contained in each image in the spliced image, and extracting the target information from the target area;
rendering the target information to the image and outputting a rendered image, including: rendering the target information to the stitched image, and outputting the rendered stitched image.
7. An image processing apparatus, characterized in that the apparatus comprises:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is configured to acquire an image to be processed, and the image is an image of a product captured after an MES system is tested;
the positioning extraction module is configured to position a target area of target information contained in the image and extract the target information from the target area;
the rendering module is configured to render the target information to the image, output a rendered image, determine whether the product passes the test from the MES system according to the target information, send the rendered image to a packaging system if the product passes the test, so that the packaging system packages the product for delivery, perform secondary rendering on the target information on the rendered image if the product does not pass the test, output the secondary rendered image, and send the secondary rendered image to the packaging system, so that the packaging system rejects the product;
determining state information of a target object described by the image based on the target information;
rendering the target information and the state information to the image, and outputting the rendered image.
8. An electronic device, comprising:
a memory storing computer readable instructions;
a processor that reads computer readable instructions stored by the memory to perform the method of any of claims 1-7.
CN202110245315.2A 2021-03-05 2021-03-05 Image processing method and device and electronic equipment Active CN113038266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110245315.2A CN113038266B (en) 2021-03-05 2021-03-05 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110245315.2A CN113038266B (en) 2021-03-05 2021-03-05 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113038266A CN113038266A (en) 2021-06-25
CN113038266B (en) 2023-02-24

Family

ID=76468192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110245315.2A Active CN113038266B (en) 2021-03-05 2021-03-05 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113038266B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114833038A (en) * 2022-04-15 2022-08-02 苏州鸿优嘉智能科技有限公司 Gluing path planning method and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201839391U (en) * 2010-11-10 2011-05-18 韩旭 Embedded camera equipment for detecting product quality
CN104751093A (en) * 2013-12-31 2015-07-01 阿里巴巴集团控股有限公司 Method and device for acquiring image identification code displayed by host equipment
CN106529973A (en) * 2016-11-02 2017-03-22 深圳市幻实科技有限公司 Anti-counterfeiting method and apparatus based on augmented reality
CN108564082A (en) * 2018-04-28 2018-09-21 苏州赛腾精密电子股份有限公司 Image processing method, device, server and medium
JP2019062402A (en) * 2017-09-27 2019-04-18 三菱電機インフォメーションシステムズ株式会社 Image display device and image display program
CN110555171A (en) * 2018-03-29 2019-12-10 腾讯科技(深圳)有限公司 Information processing method, device, storage medium and system
CN112085795A (en) * 2019-12-31 2020-12-15 Oppo广东移动通信有限公司 Article positioning method, device, equipment and storage medium
CN112132599A (en) * 2019-06-24 2020-12-25 北京沃东天骏信息技术有限公司 Image processing method and device, computer readable storage medium and electronic device
CN112330583A (en) * 2019-07-16 2021-02-05 青岛智动精工电子有限公司 Product defect detection method, device, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150564A (en) * 2013-03-28 2013-06-12 冶金自动化研究设计院 Plate surface code spraying character recognition device and method thereof
CN107545566A (en) * 2017-07-27 2018-01-05 深圳市易飞扬通信技术有限公司 Visible detection method and system
JP3218742U (en) * 2017-10-27 2018-11-08 ベイジン ジュンタイイノベーション テクノロジー カンパニー,リミティッド Recognition system based on optical character recognition vision
CN110542695A (en) * 2019-09-18 2019-12-06 歌尔股份有限公司 method and system for checking marking quality and repeated codes
CN111652541B (en) * 2020-05-07 2022-11-01 美的集团股份有限公司 Industrial production monitoring method, system and computer readable storage medium
CN111891745A (en) * 2020-08-10 2020-11-06 珠海格力智能装备有限公司 Processing method and device for loading and unloading and loading and unloading system


Also Published As

Publication number Publication date
CN113038266A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN111340796B (en) Defect detection method and device, electronic equipment and storage medium
KR20150063703A (en) A Method for Block Inspection of a Vessel Using Augmented Reality Technology
CN113038266B (en) Image processing method and device and electronic equipment
CN109241998B (en) Model training method, device, equipment and storage medium
CN111382740A (en) Text picture analysis method and device, computer equipment and storage medium
CN112967272A (en) Welding defect detection method and device based on improved U-net and terminal equipment
CN113946510A (en) WEB page testing method, device and equipment and computer storage medium
CN114638294A (en) Data enhancement method and device, terminal equipment and storage medium
CN110991446A (en) Label identification method, device, equipment and computer readable storage medium
CN113469944A (en) Product quality inspection method and device and electronic equipment
CN110210314B (en) Face detection method, device, computer equipment and storage medium
CN112989256B (en) Method and device for identifying web fingerprint in response information
CN115374517A (en) Testing method and device for wiring software, electronic equipment and storage medium
CN110083807B (en) Contract modification influence automatic prediction method, device, medium and electronic equipment
CN115359302A (en) Coin identification method, system and storage medium
CN112580334A (en) File processing method, file processing device, server and storage medium
CN112308782A (en) Panoramic image splicing method and device, ultrasonic equipment and storage medium
CN113362227A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112446850A (en) Adaptation test method and device and electronic equipment
US9189702B2 (en) Imaging system for determining multi-view alignment
CN114578961B (en) Automatic data input system based on action recording
CN117112446B (en) Editor debugging method and device, electronic equipment and medium
US11776249B2 (en) Method for identifying non-inspectable objects in packaging, and apparatus and storage medium applying method
CN117370767B (en) User information evaluation method and system based on big data
CN117033239A (en) Control matching method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant