CN117274642B - Network image data acquisition and analysis method and system - Google Patents

Network image data acquisition and analysis method and system

Info

Publication number
CN117274642B
CN117274642B
Authority
CN
China
Prior art keywords
image, contour, screened, target, area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311217993.3A
Other languages
Chinese (zh)
Other versions
CN117274642A (en)
Inventor
伍乙生 (Wu Yisheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhaoqing Medical College
Original Assignee
Zhaoqing Medical College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhaoqing Medical College filed Critical Zhaoqing Medical College
Priority to CN202311217993.3A
Publication of CN117274642A
Application granted
Publication of CN117274642B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a network image data acquisition and analysis method and system. The method comprises the following steps: acquiring a target contour based on a target image, comparing the target contour with the image contour of each image to be screened, and preliminarily screening the image set to be screened to obtain an updated image set to be screened; acquiring, based on preset target image areas, the corresponding image area from each image to be screened in the updated image set, calculating a gray difference value based on the gray value of the corresponding image area and the gray value of the corresponding area of the target image, and determining a first converted pixel value corresponding to the gray difference value based on a preset first comparison table; acquiring an image to be rendered, wherein the image to be rendered comprises a first rendering area corresponding to each target image area, and rendering each first rendering area based on the first converted pixel values to obtain a rendered image; and inputting the rendered image into a pre-trained first neural network model, and determining whether the image to be screened corresponding to the rendered image is an approximate image.

Description

Network image data acquisition and analysis method and system
Technical Field
The application belongs to the technical field of image analysis, and particularly relates to a network image data acquisition and analysis method and system.
Background
In the field of medical image analysis, accurate computer-aided analysis tools can assist doctors and improve the efficiency and accuracy of analysis.
In medical image analysis, computer-aided screening of a population's medical images provides an important reference for medical staff in predicting and typing diseases. For example, an existing machine learning model compares a standard image with the medical images of a large number of screened people and screens out the people whose images are similar to the standard image; a doctor then performs further screening, which improves the doctor's work efficiency.
However, existing medical image screening usually establishes a neural network model, calculates the similarity between each image and a standard image, and sets a similarity threshold to decide whether an image is a similar image. In this prior art approach the similarity is calculated directly over the whole image, all pixel points participate in the calculation, and the key locations are not analyzed, so the amount of calculation is large and the calculation efficiency is low.
Disclosure of Invention
According to the network image data acquisition and analysis method provided by the application, the image set to be screened is first screened by the contour of the target image, and a rendered image is then constructed and processed by a neural network model. This reduces the amount of calculation on the one hand and, on the other, improves accuracy through the analysis of key locations.
In a first aspect, an embodiment of the present application provides a network image data acquisition and analysis method, where the steps of the method include:
acquiring a target contour based on a target image, wherein the target contour comprises a contour peripheral line and a contour inner peripheral line, the contour peripheral line being the line on which the outermost ring of pixel points of the target image lies, and the contour inner peripheral line being the line on which the final ring of pixel points lies when rings of pixel points are traced inward, ring by ring, from the outermost ring until they meet;
acquiring an image contour of each image to be screened in an image set to be screened, comparing the image contour with the target contour, and preliminarily screening the image set to be screened to obtain an updated image set to be screened;
acquiring, based on preset target image areas, the corresponding image area from each image to be screened in the updated image set to be screened, calculating a gray difference value based on the gray value of the corresponding image area and the gray value of the corresponding area of the target image, and determining a first converted pixel value corresponding to the gray difference value based on a preset first comparison table;
acquiring an image to be rendered, wherein the image to be rendered comprises a first rendering area corresponding to each target image area, and rendering each first rendering area based on the first converted pixel value to obtain a rendered image;
and inputting the rendered image into a pre-trained first neural network model, and determining whether an image to be screened corresponding to the rendered image is an approximate image.
By adopting this scheme, the image set to be screened is first screened by the contour of the target image, images with large contour differences are excluded from the screening range, and the amount of subsequent calculation is reduced. Further, the scheme only needs to extract the image areas at the target image areas, calculate the gray difference values between the corresponding image areas and the corresponding areas of the target image, and construct a rendered image; the neural network model finally processes only this rendered image, which contains a small number of computational elements, to judge whether it corresponds to an approximate image. This reduces the amount of calculation on the one hand and improves accuracy through the analysis of key locations on the other.
In some embodiments of the present invention, the step of acquiring an image contour of each image to be screened in the image set to be screened, comparing the image contour with the target contour, and preliminarily screening the image set to be screened to obtain an updated image set to be screened comprises:
performing a first screening of the images to be screened in the image set to be screened based on the image contour and the target contour to obtain a first image set to be screened;
obtaining a contour image from the image contour of each image to be screened in the first image set to be screened, inputting the contour image into a pre-trained first neural network model, the first neural network model outputting a first similarity, and performing a second screening of the images to be screened based on the first similarity to obtain the updated image set to be screened.
By adopting this scheme, the image set to be screened is screened twice in the process of obtaining the updated image set to be screened. In the first screening, the contours of the images are compared directly and images with large contour differences are excluded from the screening range; in the second screening, only the contour images, which contain little data, need to be analyzed, and part of the images are again excluded based on the contour differences. This reduces the amount of subsequent calculation while improving the screening precision.
In some embodiments of the present invention, in the step of performing the first screening of the images to be screened in the image set to be screened based on the image contour and the target contour, it is judged whether the image contour coincides with the area enclosed by the contour peripheral line and the contour inner peripheral line of the target contour, and if a coincidence exists, the image to be screened is selected into the first image set to be screened.
By adopting this scheme, in the first screening the contours of the images are compared directly and images with large contour differences are removed from the screening range. The screening can be completed by a simple comparison, so that images with large differences are deleted at a small computational cost and the amount of subsequent calculation is reduced.
In some embodiments of the present invention, the step of judging whether the image contour coincides with the area enclosed by the contour peripheral line and the contour inner peripheral line of the target contour comprises:
determining a contour peripheral line to be screened and a contour inner peripheral line to be screened based on the image contour; and
judging whether at least one of the contour peripheral line to be screened and the contour inner peripheral line to be screened lies in the area enclosed by the contour peripheral line and the contour inner peripheral line; if so, a coincidence exists; if not, no coincidence exists.
In some embodiments of the present invention, in the step of obtaining a contour image from the image contour of each image to be screened in the first image set to be screened, the gray values of the pixels on the image contour of the image to be screened are retained and the gray values of all other pixels are set to a preset gray value, so as to obtain the contour image.
By adopting this scheme, in the second screening the contour image is obtained by processing the image contour of the image to be screened, only contour images with little data need to be analyzed, and part of the images are again removed from the screening range based on the contour differences, which reduces the amount of subsequent calculation while improving the screening precision.
In some embodiments of the present invention, the image to be rendered further includes a second rendering area surrounding all of the first rendering areas, and the step of rendering each of the first rendering areas based on the first converted pixel values to obtain a rendered image further includes determining a second converted pixel value based on the first similarity, and rendering the second rendering area based on the second converted pixel value.
By adopting this scheme, the first similarity calculated earlier is reused to determine the second converted pixel value, so that the contour difference is fully utilized: a rendered image is constructed jointly from the contour difference and the differences of the target image areas, and whether the image to be screened corresponding to the rendered image is an approximate image is determined from that rendered image, improving the accuracy of the judgment.
In some embodiments of the present invention, in the step of calculating the gray difference value based on the gray value of the corresponding image area and the gray value of the corresponding area of the target image, the difference between the average gray value of the target image area of the image to be screened and the average gray value of the corresponding area of the target image is calculated as the gray difference value of that target image area.
By adopting this scheme, since there are a plurality of target image areas, the difference of each area is determined from the difference between the average gray values, and each first rendering area is rendered accordingly, ensuring that the difference at each key location is included in the calculation.
In some embodiments of the present invention, the first comparison table stores a correspondence between gray difference values and first converted pixel values, and in the step of determining the first converted pixel value corresponding to the gray difference value based on the preset first comparison table, the first converted pixel value is determined from that correspondence.
In some embodiments of the present invention, the first neural network model is provided with a classifier, and in the step of inputting the rendered image into the pre-trained first neural network model to determine whether the image to be screened corresponding to the rendered image is an approximate image, the classifier of the first neural network model outputs an approximation value, and the approximation value is compared with a preset approximation threshold to determine whether the image to be screened corresponding to the rendered image is an approximate image.
In a second aspect, an embodiment of the present application provides a network image data acquisition and analysis system. The system comprises a computer device, the computer device comprising a processor and a memory, wherein the memory stores computer instructions and the processor is configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the device implements the steps of the network image data acquisition and analysis method.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification; they illustrate the disclosure and, together with the description, serve to explain but not to limit the disclosure.
In the drawings:
FIG. 1 is a flow chart of one embodiment of a network image data acquisition and analysis method;
FIG. 2 is a flow chart of another embodiment of a network image data acquisition and analysis method;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
It should be noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Generally, in the manner of the prior art, the similarity of the images is calculated directly: all pixel points participate directly in the calculation and the key locations are not analyzed, so the calculation efficiency is low while the amount of calculation is large.
Therefore, in order to reduce the calculation amount and improve the calculation accuracy, the application provides a network image data acquisition and analysis method and system.
Fig. 1 is a flow chart of a network image data acquisition and analysis method according to an embodiment of the present application.
As shown in fig. 1, an embodiment of the present application provides a network image data acquisition and analysis method, where the steps of the method include:
step S100, acquiring a target contour based on a target image, wherein the target contour comprises a contour peripheral line and a contour inner peripheral line, the contour peripheral line is a circle where a circle of pixel points are located when the target image extends inwards from the outermost circle of pixel points circle by circle, and the contour inner peripheral line is a circle where a circle of pixel points are located when the target image extends inwards from the outermost circle of pixel points circle by circle and finally contacts the circle of pixel points;
in the implementation process, the outermost pixel point of the target image is the pixel point at the outer edge of the image, and the pixel points at the outer edge of the image extend inwards from circle to circle, namely a plurality of circles.
Step S200, acquiring an image contour of each image to be screened in an image set to be screened, comparing the image contour with the target contour, and preliminarily screening the image set to be screened to obtain an updated image set to be screened;
In a specific implementation, the image contour may be obtained as follows: starting from the outermost ring of pixel points and extending inward ring by ring, a pixel point is marked as a contour point if its gray value is greater than a preset gray value threshold; after all contour points have been marked in this way, the contour points at the two edges of each row and each column of the image are taken as the marked contour points, which together form the image contour.
By adopting this scheme, taking the contour points at the two edges of each row and each column as the marked contour points yields clear contour points, avoids the problem that part of the contour is missed by common acquisition methods, and ultimately guarantees the classification accuracy.
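A minimal sketch of this marking rule, assuming a NumPy grayscale image and a scalar gray value threshold (both names are illustrative, not taken from the disclosure):

```python
import numpy as np

def image_contour(gray: np.ndarray, threshold: float) -> np.ndarray:
    """Mark pixels whose gray value exceeds the threshold, then keep only
    the contour points at the two edges of each row and each column."""
    marked = gray > threshold
    contour = np.zeros_like(marked)
    for i in range(marked.shape[0]):          # two edge points per row
        cols = np.flatnonzero(marked[i])
        if cols.size:
            contour[i, cols[0]] = contour[i, cols[-1]] = True
    for j in range(marked.shape[1]):          # two edge points per column
        rows = np.flatnonzero(marked[:, j])
        if rows.size:
            contour[rows[0], j] = contour[rows[-1], j] = True
    return contour
```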
Step S300, acquiring, based on preset target image areas, the corresponding image area from each image to be screened in the updated image set to be screened, calculating a gray difference value based on the gray value of the corresponding image area and the gray value of the corresponding area of the target image, and determining a first converted pixel value corresponding to the gray difference value based on a preset first comparison table;
In a specific implementation, the target image areas are a plurality of areas at preset positions, and the image of each area is taken from the area at the same position in the image to be screened.
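The area extraction, gray difference calculation, and first comparison table lookup can be sketched together as follows. The area positions, bin edges, and converted pixel values are invented placeholders, since the disclosure only states that preset positions and a stored correspondence exist; the absolute value of the difference of the area means follows the embodiment described later.

```python
import numpy as np

# Preset target image areas as (row0, row1, col0, col1) slices; the
# positions are illustrative, the patent only requires preset positions.
TARGET_AREAS = [(10, 30, 10, 30), (40, 60, 10, 30), (10, 30, 40, 60)]

# First comparison table: gray-difference bins -> first converted pixel value.
FIRST_TABLE = [(0, 10, 32), (10, 30, 96), (30, 80, 160), (80, 256, 224)]

def first_converted_pixels(candidate: np.ndarray, target: np.ndarray):
    """For each target image area, take the absolute difference of the
    area mean gray values and map it through the first comparison table."""
    values = []
    for r0, r1, c0, c1 in TARGET_AREAS:
        diff = abs(candidate[r0:r1, c0:c1].mean() - target[r0:r1, c0:c1].mean())
        values.append(next(v for lo, hi, v in FIRST_TABLE if lo <= diff < hi))
    return values
```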
Step S400, acquiring an image to be rendered, wherein the image to be rendered comprises a first rendering area corresponding to each target image area, and rendering each first rendering area based on the first converted pixel values to obtain a rendered image;
In a specific implementation, every pixel point of the image to be rendered initially holds the same preset gray value, and rendering replaces this preset gray value with the rendered gray value.
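Under that reading, the rendering step can be sketched as follows; the canvas size, the preset gray value, and the rectangular layout of the first rendering areas are assumptions made for illustration:

```python
import numpy as np

PRESET_GRAY = 128  # assumed preset gray value of the image to be rendered
# One first rendering area per target image area; the layout is illustrative.
FIRST_RENDER_AREAS = [(0, 8, 0, 8), (0, 8, 8, 16), (8, 16, 0, 8)]

def render(first_values, canvas_shape=(16, 16)) -> np.ndarray:
    """Start from a canvas in which every pixel holds the preset gray value,
    then overwrite each first rendering area with its converted pixel value."""
    canvas = np.full(canvas_shape, PRESET_GRAY, dtype=np.uint8)
    for (r0, r1, c0, c1), v in zip(FIRST_RENDER_AREAS, first_values):
        canvas[r0:r1, c0:c1] = v
    return canvas
```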
Step S500, inputting the rendered image into the pre-trained first neural network model, and determining whether the image to be screened corresponding to the rendered image is an approximate image.
By adopting this scheme, the image set to be screened is first screened by the contour of the target image, images with large contour differences are excluded from the screening range, and the amount of subsequent calculation is reduced. Further, the scheme only needs to extract the image areas at the target image areas, calculate the gray difference values between the corresponding image areas and the corresponding areas of the target image, and construct a rendered image; the neural network model finally processes only this rendered image, which contains a small number of computational elements, to judge whether it corresponds to an approximate image. This reduces the amount of calculation on the one hand and improves accuracy through the analysis of key locations on the other.
As shown in FIG. 2, in some embodiments of the present invention, the step of acquiring an image contour of each image to be screened in the image set to be screened, comparing the image contour with the target contour, and preliminarily screening the image set to be screened to obtain an updated image set to be screened comprises:
Step S210, performing a first screening of the images to be screened in the image set to be screened based on the image contour and the target contour to obtain a first image set to be screened;
Step S220, obtaining a contour image from the image contour of each image to be screened in the first image set to be screened, inputting the contour image into a pre-trained first neural network model, the first neural network model outputting a first similarity, and performing a second screening of the images to be screened based on the first similarity to obtain the updated image set to be screened.
By adopting this scheme, the image set to be screened is screened twice in the process of obtaining the updated image set to be screened. In the first screening, the contours of the images are compared directly and images with large contour differences are excluded from the screening range; in the second screening, only the contour images, which contain little data, need to be analyzed, and part of the images are again excluded based on the contour differences. This reduces the amount of subsequent calculation while improving the screening precision.
In some embodiments of the present invention, in the step of performing the first screening of the images to be screened in the image set to be screened based on the image contour and the target contour, it is judged whether the image contour coincides with the area enclosed by the contour peripheral line and the contour inner peripheral line of the target contour, and if a coincidence exists, the image to be screened is selected into the first image set to be screened.
In a specific implementation, if no coincidence exists, the image to be screened is not selected into the first image set to be screened.
By adopting this scheme, in the first screening the contours of the images are compared directly and images with large contour differences are removed from the screening range. The screening can be completed by a simple comparison, so that images with large differences are deleted at a small computational cost and the amount of subsequent calculation is reduced.
In some embodiments of the present invention, the step of judging whether the image contour coincides with the area enclosed by the contour peripheral line and the contour inner peripheral line of the target contour comprises:
determining a contour peripheral line to be screened and a contour inner peripheral line to be screened based on the image contour; and
judging whether at least one of the contour peripheral line to be screened and the contour inner peripheral line to be screened lies in the area enclosed by the contour peripheral line and the contour inner peripheral line; if so, a coincidence exists; if not, no coincidence exists.
In a specific implementation, if the contour peripheral line to be screened or the contour inner peripheral line to be screened overlaps the contour peripheral line or the contour inner peripheral line, that line is judged not to lie in the area enclosed by the contour peripheral line and the contour inner peripheral line.
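One possible reading of this containment test, sketched with boolean line masks; scipy.ndimage.binary_fill_holes is an assumed way to fill a closed contour line, and the every-point criterion is likewise an interpretation:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def lies_in_enclosed_area(candidate: np.ndarray,
                          outer_line: np.ndarray,
                          inner_line: np.ndarray) -> bool:
    """True if every point of the candidate contour line falls in the area
    enclosed by the contour peripheral line and the contour inner
    peripheral line (the annulus between them)."""
    annulus = binary_fill_holes(outer_line) & ~binary_fill_holes(inner_line)
    # Per the embodiment above, overlapping either boundary line counts as
    # not lying in the enclosed area.
    annulus &= ~(outer_line | inner_line)
    return bool(candidate.any()) and bool((candidate <= annulus).all())

# A coincidence exists if at least one of the two candidate lines lies in
# the enclosed area, e.g.:
#   coincides = (lies_in_enclosed_area(cand_outer, outer_line, inner_line)
#                or lies_in_enclosed_area(cand_inner, outer_line, inner_line))
```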
In some embodiments of the present invention, in the step of obtaining a contour image from the image contour of each image to be screened in the first image set to be screened, the gray values of the pixels on the image contour of the image to be screened are retained and the gray values of all other pixels are set to a preset gray value, so as to obtain the contour image.
By adopting this scheme, in the second screening the contour image is obtained by processing the image contour of the image to be screened, only contour images with little data need to be analyzed, and part of the images are again removed from the screening range based on the contour differences, which reduces the amount of subsequent calculation while improving the screening precision.
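A short sketch of this masking step, assuming a NumPy grayscale image and a boolean contour mask such as the one built in the earlier sketch (the preset gray value is an arbitrary placeholder):

```python
import numpy as np

PRESET_GRAY = 128  # assumed preset gray value

def contour_image(gray: np.ndarray, contour_mask: np.ndarray) -> np.ndarray:
    """Keep the gray values of the pixels on the image contour and set
    every other pixel to the preset gray value."""
    return np.where(contour_mask, gray, PRESET_GRAY).astype(np.uint8)
```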
In some embodiments of the present invention, the image to be rendered further includes a second rendering area surrounding all of the first rendering areas, and the step of rendering each of the first rendering areas based on the first converted pixel values to obtain a rendered image further includes determining a second converted pixel value based on the first similarity, and rendering the second rendering area based on the second converted pixel value.
In a specific implementation, the second rendering area surrounds all the first rendering areas; it reflects the contour difference and, at the same time, the positional relationship between the contour and the other target image areas, so that the content of the image is represented completely in the rendered image.
By adopting this scheme, the first similarity calculated earlier is reused to determine the second converted pixel value, so that the contour difference is fully utilized: a rendered image is constructed jointly from the contour difference and the differences of the target image areas, and whether the image to be screened corresponding to the rendered image is an approximate image is determined from that rendered image, improving the accuracy of the judgment.
In some embodiments of the present invention, in the step of calculating the gray difference value based on the gray value of the corresponding image area and the gray value of the corresponding area of the target image, the difference between the average gray value of the target image area of the image to be screened and the average gray value of the corresponding area of the target image is calculated as the gray difference value of that target image area.
By adopting this scheme, since there are a plurality of target image areas, the difference of each area is determined from the difference between the average gray values, and each first rendering area is rendered accordingly, ensuring that the difference at each key location is included in the calculation.
In a specific implementation, in the step of calculating the difference between the average gray value of the target image area of the image to be screened and the average gray value of the corresponding area of the target image, the absolute value of the difference is taken after the difference between the averages is calculated.
By adopting this scheme, since the difference between the averages can be negative in the actual calculation, the absolute value is used directly to reflect the difference, avoiding the extra calculation that negative values would cause.
In some embodiments of the present invention, the first comparison table stores a correspondence between gray difference values and first converted pixel values, and in the step of determining the first converted pixel value corresponding to the gray difference value based on the preset first comparison table, the first converted pixel value is determined from that correspondence.
In a specific implementation, in the step of determining the second converted pixel value based on the first similarity, the second converted pixel value is determined from the correspondence between the first similarity and the second converted pixel value stored in a preset second comparison table.
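The second comparison table can be read the same way as the first; the similarity bins and the second converted pixel values below are invented placeholders:

```python
# Second comparison table: first-similarity bins -> second converted pixel value.
SECOND_TABLE = [(0.0, 0.5, 48), (0.5, 0.8, 144), (0.8, 1.01, 240)]

def second_converted_pixel(first_similarity: float) -> int:
    """Map the first similarity to a second converted pixel value via the
    stored correspondence."""
    return next(v for lo, hi, v in SECOND_TABLE if lo <= first_similarity < hi)
```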
In some embodiments of the present invention, the first neural network model is provided with a classifier. In the step of inputting the rendered image into the pre-trained first neural network model to determine whether the image to be screened corresponding to the rendered image is an approximate image, the classifier of the first neural network model outputs an approximation value, and the approximation value is compared with a preset approximation threshold to determine whether the image to be screened corresponding to the rendered image is an approximate image.
In a specific implementation, the first neural network model and the second neural network model are both graph neural network models.
In a specific implementation, if the approximation value is greater than the approximation threshold, the image to be screened is judged to be an approximate image; if the approximation value is not greater than the approximation threshold, the image to be screened is judged not to be an approximate image.
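The final decision is then a plain threshold comparison on the classifier output. In the sketch below the trained model is treated as an opaque callable returning the approximation value, and the threshold value is an assumption:

```python
import numpy as np

APPROX_THRESHOLD = 0.8  # preset approximation threshold (assumed value)

def is_approximate(model, rendered: np.ndarray) -> bool:
    """Judge the image to be screened approximate exactly when the
    classifier's approximation value exceeds the preset threshold."""
    return float(model(rendered)) > APPROX_THRESHOLD
```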
In a second aspect, an embodiment of the present application provides a network image data acquisition and analysis system. The system comprises a computer device, the computer device comprising a processor and a memory, wherein the memory stores computer instructions and the processor is configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the device implements the steps of the network image data acquisition and analysis method.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing computer program instructions which, when executed by a processor, implement the network image data acquisition and analysis method described above.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 3, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions; the processor executes the computer program instructions to implement the network image data acquisition and analysis method.
The electronic device may include a processor 1201 and a memory 1202 storing computer program instructions.
In particular, the processor 1201 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present application.
The memory 1202 may include mass storage for data or instructions. By way of example, and not limitation, the memory 1202 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a universal serial bus (USB) drive, or a combination of two or more of the above. The memory 1202 may include removable or non-removable (or fixed) media, where appropriate. The memory 1202 may be internal or external to the electronic device, where appropriate. In a particular embodiment, the memory 1202 is a non-volatile solid-state memory.
The memory may include Read Only Memory (ROM), random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to methods in accordance with aspects of the present disclosure.
The processor 1201 reads and executes the computer program instructions stored in the memory 1202 to implement the network image data acquisition and analysis method of any of the above embodiments.
In one example, the electronic device may also include a communication interface 1203 and a bus 1210. As shown in fig. 3, the processor 1201, the memory 1202, and the communication interface 1203 are connected to each other via a bus 1210 and perform communication with each other.
The communication interface 1203 is mainly used for implementing communication between each module, device, unit and/or apparatus in the embodiments of the present application.
Bus 1210 includes hardware, software, or both, and couples the components of the electronic device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of the above. Bus 1210 may include one or more buses, where appropriate. Although embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be different from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, which are intended to be included in the scope of the present application.

Claims (7)

1. A method for collecting and analyzing network image data, the method comprising the steps of:
acquiring a target contour based on a target image, wherein the target contour comprises a contour peripheral line and a contour inner peripheral line, the contour peripheral line being the line on which the outermost ring of pixel points of the target image lies, and the contour inner peripheral line being the line on which the final ring of pixel points lies when rings of pixel points are traced inward, ring by ring, from the outermost ring until they meet;
acquiring an image contour of each image to be screened in an image set to be screened, comparing the image contour with the target contour, and preliminarily screening the image set to be screened to obtain an updated image set to be screened;
acquiring, based on preset target image areas, the corresponding image area from each image to be screened in the updated image set to be screened, and calculating a gray difference value based on the gray value of the corresponding image area and the gray value of the corresponding area of the target image, wherein the difference between the average gray value of the target image area of the image to be screened and the average gray value of the corresponding area of the target image is calculated as the gray difference value of the target image area; and determining a first converted pixel value corresponding to the gray difference value based on a preset first comparison table, wherein the first comparison table stores a correspondence between gray difference values and first converted pixel values, and the first converted pixel value corresponding to the gray difference value is determined based on that correspondence;
acquiring an image to be rendered, wherein the image to be rendered comprises a first rendering area corresponding to each target image area and a second rendering area surrounding all of the first rendering areas; rendering each first rendering area based on the first converted pixel values; determining a second converted pixel value based on a first similarity; and rendering the second rendering area based on the second converted pixel value, so as to obtain a rendered image;
and inputting the rendered image into a pre-trained first neural network model, and determining whether an image to be screened corresponding to the rendered image is an approximate image.
2. The network image data acquisition and analysis method according to claim 1, wherein the step of acquiring an image contour of each image to be screened in the image set to be screened, comparing the image contour with the target contour, and preliminarily screening the image set to be screened to obtain an updated image set to be screened comprises:
performing a first screening of the images to be screened in the image set to be screened based on the image contour and the target contour to obtain a first image set to be screened; and
obtaining a contour image from the image contour of each image to be screened in the first image set to be screened, inputting the contour image into a pre-trained first neural network model, the first neural network model outputting a first similarity, and performing a second screening of the images to be screened based on the first similarity to obtain the updated image set to be screened.
3. The network image data acquisition and analysis method according to claim 2, wherein in the step of performing the first screening of the images to be screened in the image set to be screened based on the image contour and the target contour, it is judged whether the image contour coincides with the area enclosed by the contour peripheral line and the contour inner peripheral line of the target contour, and if a coincidence exists, the image to be screened is selected into the first image set to be screened.
4. The network image data acquisition and analysis method according to claim 3, wherein the step of judging whether the image contour coincides with the area enclosed by the contour peripheral line and the contour inner peripheral line of the target contour comprises:
determining a contour peripheral line to be screened and a contour inner peripheral line to be screened based on the image contour; and
judging whether at least one of the contour peripheral line to be screened and the contour inner peripheral line to be screened lies in the area enclosed by the contour peripheral line and the contour inner peripheral line; if so, a coincidence exists; if not, no coincidence exists.
5. The network image data acquisition and analysis method according to claim 2, wherein in the step of obtaining a contour image from the image contour of each image to be screened in the first image set to be screened, the gray values of the pixels on the image contour of the image to be screened are retained and the gray values of all other pixels are set to a preset gray value, so as to obtain the contour image.
6. The network image data acquisition and analysis method according to claim 1, wherein the first neural network model is provided with a classifier, and in the step of inputting the rendered image into the pre-trained first neural network model to determine whether the image to be screened corresponding to the rendered image is an approximate image, the classifier of the first neural network model outputs an approximation value, and the approximation value is compared with a preset approximation threshold to determine whether the image to be screened corresponding to the rendered image is an approximate image.
7. A network image data acquisition and analysis system, characterized in that the system comprises a computer device, the computer device comprising a processor and a memory, wherein the memory stores computer instructions and the processor is configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the device implements the steps of the network image data acquisition and analysis method according to any one of claims 1-6.
CN202311217993.3A 2023-09-20 2023-09-20 Network image data acquisition and analysis method and system Active CN117274642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311217993.3A CN117274642B (en) 2023-09-20 2023-09-20 Network image data acquisition and analysis method and system

Publications (2)

Publication Number Publication Date
CN117274642A CN117274642A (en) 2023-12-22
CN117274642B true CN117274642B (en) 2024-03-26

Family

ID=89217196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311217993.3A Active CN117274642B (en) 2023-09-20 2023-09-20 Network image data acquisition and analysis method and system

Country Status (1)

Country Link
CN (1) CN117274642B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104182719B * 2013-05-21 2017-06-30 Ningbo Huayi Jiye Information Technology Co., Ltd. Image recognition method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298837A * 2021-07-27 2021-08-24 Nanchang Institute of Technology Image edge extraction method and device, storage medium and equipment
CN115272565A * 2022-07-18 2022-11-01 Juhaokan Technology Co., Ltd. Head three-dimensional model reconstruction method and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Statistical analysis of gray-level distribution and design of detection features for face images; Ou Fan, Liu Chong; Journal of Dalian University of Technology; 2010-07-15 (04); pp. 65-71 *

Also Published As

Publication number Publication date
CN117274642A (en) 2023-12-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant