CN116630643A - Pixel splitting method and device based on image object boundary recognition - Google Patents

Pixel splitting method and device based on image object boundary recognition

Info

Publication number
CN116630643A
Authority
CN
China
Prior art keywords
splitting
pixel
result
identification result
image
Prior art date
Legal status
Pending
Application number
CN202310589156.7A
Other languages
Chinese (zh)
Inventor
温建伟
邓迪旻
袁潮
Current Assignee
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202310589156.7A
Publication of CN116630643A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/215 - Motion-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 - Target detection


Abstract

The application discloses a pixel splitting method and device based on image object boundary identification. The method comprises the following steps: acquiring original image data; splitting the original image data according to a preset algorithm to obtain a split image; identifying object boundary information of the split image by using a Canny operator to obtain an identification result; and generating a pixel splitting result according to the identification result and motion parameters. The application addresses a technical problem in the practical application of the prior art: the target object portion in an image often has complex boundary conditions, and pixel splitting cannot be performed accurately by whole-pixel identification or a simple decision model alone, so boundary identification techniques are needed to achieve fine-grained splitting of pixels.

Description

Pixel splitting method and device based on image object boundary recognition
Technical Field
The application relates to the field of image boundary processing, in particular to a pixel splitting method and device based on image object boundary recognition.
Background
With the continuous development of intelligent science and technology, intelligent devices are used more and more widely in people's life, work and study; the use of intelligent technical means improves people's quality of life and increases their learning and working efficiency.
At present, image pixel splitting is a common operation in the field of image processing: it divides the pixels of an image into different object parts for subsequent analysis and processing. In practical applications of the prior art, however, the target object portion in an image often has complex boundary conditions, and pixel splitting cannot be performed accurately by whole-pixel identification or a simple decision model alone; boundary identification techniques are therefore needed to achieve fine-grained splitting of pixels.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides a pixel splitting method and a pixel splitting device based on image object boundary recognition, which at least solve the technical problem in the practical application of the prior art that the target object portion in an image has complex boundary conditions, and that pixel splitting cannot be performed accurately by whole-pixel identification or a simple decision model alone, so that boundary recognition techniques are needed to achieve fine-grained splitting of pixels.
According to an aspect of an embodiment of the present application, there is provided a pixel splitting method based on image object boundary recognition, including: acquiring original image data; splitting the original image data according to a preset algorithm to obtain a split image; identifying object boundary information of the split image by using a Canny operator to obtain an identification result; and generating a pixel splitting result according to the identification result and the motion parameter.
Optionally, after the identifying object boundary information of the split image by using the Canny operator, the method further includes: performing refinement processing on the identification result.
Optionally, the generating the pixel splitting result according to the identification result and the motion parameters includes: generating the motion parameters from the identification result by using the Horn-Schunck optical flow field algorithm, wherein the motion parameters include a motion speed and a motion direction; and performing pixel splitting on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result.
Optionally, after the pixel splitting is performed on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result, the method further includes: and classifying and marking the pixel splitting result.
According to another aspect of the embodiment of the present application, there is also provided a pixel splitting apparatus based on image object boundary recognition, including: the acquisition module is used for acquiring the original image data; the splitting module is used for splitting the original image data according to a preset algorithm to obtain a split image; the recognition module is used for recognizing object boundary information of the split image by using a Canny operator to obtain a recognition result; and the generating module is used for generating a pixel splitting result according to the identification result and the motion parameter.
Optionally, the apparatus further includes: and the processing module is used for carrying out refinement processing on the identification result.
Optionally, the generating module includes: the generation unit is used for generating the motion parameters by using a Horn-Schunck optical flow field algorithm according to the identification result, wherein the motion parameters comprise: speed and direction of movement; and the splitting unit is used for splitting the pixels of the identification result according to the movement speed and the movement direction to obtain the pixel splitting result.
Optionally, the generating module further includes: and the classifying unit is used for classifying and marking the pixel splitting result.
According to another aspect of the embodiment of the present application, there is further provided a nonvolatile storage medium, where the nonvolatile storage medium includes a stored program, and when the program runs, the program controls a device in which the nonvolatile storage medium is located to execute a pixel splitting method based on image object boundary identification.
According to another aspect of the embodiment of the present application, there is also provided an electronic device including a processor and a memory; the memory stores computer readable instructions, and the processor is configured to execute the computer readable instructions, where the computer readable instructions execute a pixel splitting method based on image object boundary recognition.
In the embodiment of the application, original image data is acquired; the original image data is split according to a preset algorithm to obtain a split image; object boundary information of the split image is identified by using a Canny operator to obtain an identification result; and a pixel splitting result is generated according to the identification result and the motion parameters. This solves the technical problem in the practical application of the prior art that the target object portion in an image has complex boundary conditions, and that pixel splitting cannot be performed accurately by whole-pixel identification or a simple decision model alone, so that boundary recognition techniques are needed to achieve fine-grained splitting of pixels.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a pixel splitting method based on image object boundary identification according to an embodiment of the application;
FIG. 2 is a block diagram of a pixel splitting apparatus based on image object boundary recognition according to an embodiment of the present application;
fig. 3 is a block diagram of a terminal device for performing the method according to the application according to an embodiment of the application;
fig. 4 is a memory unit for holding or carrying program code for implementing a method according to the application, according to an embodiment of the application.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, the claims and the above drawings of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, system, article or apparatus.
According to an embodiment of the present application, there is provided a method embodiment of a pixel splitting method based on image object boundary recognition, it should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that herein.
Example 1
Fig. 1 is a flowchart of a pixel splitting method based on image object boundary recognition according to an embodiment of the present application, as shown in fig. 1, the method includes the steps of:
step S102, original image data is acquired.
Specifically, in order to solve the technical problem that in the practical application of the prior art the target object portion in an image often has complex boundary conditions, and that pixel splitting cannot be performed accurately by whole-pixel identification or a simple decision model alone, which requires boundary recognition techniques to achieve fine-grained splitting of pixels, original image data first needs to be acquired by a camera device; the original image data is then collated and transmitted for the pixel splitting processing of the image in later steps of this embodiment.
Step S104, splitting the original image data according to a preset algorithm to obtain a split image.
Specifically, on the basis of the acquired original image data, the embodiment of the application splits the original image data according to a preset algorithm, obtaining a split image comprising a plurality of pieces of sub-image data; the split image is then used for boundary recognition and parameter calculation in the subsequent method flow.
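The "preset algorithm" for splitting is not specified in the application. As one plausible reading, a minimal sketch of splitting the original image into a grid of sub-images (NumPy is assumed; the function name `split_image` and the 2x3 grid below are illustrative choices, not details from the patent):

```python
import numpy as np

def split_image(image: np.ndarray, rows: int, cols: int) -> list:
    """Split an H x W image into a rows x cols grid of sub-images.

    Tiles on the right/bottom absorb any remainder pixels, so every
    pixel is covered exactly once.
    """
    h, w = image.shape[:2]
    row_edges = [h * r // rows for r in range(rows + 1)]
    col_edges = [w * c // cols for c in range(cols + 1)]
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tiles.append(image[row_edges[r]:row_edges[r + 1],
                               col_edges[c]:col_edges[c + 1]])
    return tiles

img = np.arange(36).reshape(6, 6)
tiles = split_image(img, 2, 3)
print(len(tiles))      # 6
print(tiles[0].shape)  # (3, 2)
```

Each tile can then be passed independently to the boundary recognition stage described below.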
And S106, identifying object boundary information of the split image by using a Canny operator to obtain an identification result.
Specifically, the embodiment of the application uses the Canny operator to identify the boundary information of the plurality of pieces of sub-image data in the split image, obtaining a boundary identification result for the sub-image data, so that the sub-image data can conveniently be graded, classified and marked subsequently.
In the embodiment of the application, the identification of the split image first requires acquiring the three-direction gradient parameters of the Canny algorithm, including horizontal, vertical and diagonal parameters; the optimized image data is then marked according to a preset hysteresis threshold and these gradient parameters to obtain the boundary marking data.

The Canny algorithm includes noise reduction: no edge detection algorithm performs well on raw data, so the first step is to convolve the raw data with a Gaussian smoothing template, and the resulting image is slightly blurred compared with the original. In this way, the noise of a single pixel has almost no effect on the Gaussian-smoothed image.

The algorithm then searches for gradients. Edges in the image may point in different directions, so the Canny algorithm uses four masks to detect edges in the horizontal, vertical and two diagonal directions. The convolution of the original image with each mask is stored, and for each point the maximum response and the direction of the resulting edge are identified. In this way, a luminance gradient magnitude map and a luminance gradient direction map are generated from the original image.

Finally, edges are tracked. A higher luminance gradient makes a pixel more likely to belong to an edge, but there is no single exact value that determines whether a given gradient constitutes an edge, so Canny uses hysteresis thresholding, which requires two thresholds: a high threshold and a low threshold. Assuming that the significant edges in the image are continuous curves, the blurred portions of a given curve can be tracked while avoiding treating noise pixels, which do not form curves, as edges.
The embodiment of the application starts from the larger threshold to identify the true edges that can be trusted, and then, using the previously derived direction information, tracks entire edges through the image starting from these true edges. During tracking, the embodiment uses the smaller threshold, so that blurred portions of a curve can be followed until the tracking returns to its starting point. Once this is done, a binary image is obtained in which each point indicates whether or not it is an edge point.
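The two-threshold hysteresis tracking described above can be sketched as follows. This is a simplified illustration assuming NumPy: it operates on a precomputed gradient-magnitude map and omits the Gaussian smoothing and non-maximum suppression stages of a full Canny pipeline, and the thresholds and tiny test array are invented for demonstration:

```python
import numpy as np
from collections import deque

def hysteresis_threshold(grad: np.ndarray, low: float, high: float) -> np.ndarray:
    """Two-threshold edge tracking as used in the Canny algorithm.

    Pixels above `high` are definite edges; pixels between `low` and
    `high` are kept only if 8-connected (directly or transitively) to a
    definite edge. Returns a boolean edge map.
    """
    strong = grad >= high
    weak = grad >= low
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))  # seed the search from strong pixels
    h, w = grad.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not edges[ny, nx]:
                    edges[ny, nx] = True  # grow the edge into the weak pixel
                    q.append((ny, nx))
    return edges

grad = np.array([[0., 40., 90.],
                 [0., 40.,  0.],
                 [0., 10.,  0.]])
edges = hysteresis_threshold(grad, low=30, high=80)
print(edges.astype(int))
# [[0 1 1]
#  [0 1 0]
#  [0 0 0]]
```

Note how the two weak pixels (40) survive because they connect to the strong pixel (90), while the isolated pixel (10) below the low threshold is discarded.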
Optionally, after the identifying object boundary information of the split image by using the Canny operator, the method further includes: performing refinement processing on the identification result.
Specifically, in order that the boundary-information identification result, that is, the object boundary information of the split image, can be further analyzed and classified, the identification result needs to undergo refinement processing, which includes sharpening processing and noise reduction processing; the noise-reduced image data is then processed and analyzed, which facilitates the subsequent combination with the motion parameters to generate the final pixel splitting result.
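One minimal reading of this refinement step, noise reduction followed by sharpening, is a box-filter denoise plus an unsharp mask (NumPy assumed; the 3x3 filter size and the `amount` parameter are illustrative assumptions, not values specified by the application):

```python
import numpy as np

def box_blur(img: np.ndarray) -> np.ndarray:
    """3x3 mean filter with edge padding: a simple noise reduction."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def refine(img: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """Denoise, then sharpen the denoised image with an unsharp mask."""
    denoised = box_blur(img)                # noise reduction pass
    detail = denoised - box_blur(denoised)  # high-frequency detail
    return denoised + amount * detail       # sharpening pass

flat = np.full((4, 4), 7.0)  # a flat region should be left unchanged
spike = np.zeros((5, 5))
spike[2, 2] = 9.0            # an isolated noise spike should be attenuated
```

A flat region passes through unchanged, while isolated single-pixel noise is strongly attenuated by the blur before the sharpening step re-emphasizes genuine edges.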
And S108, generating a pixel splitting result according to the identification result and the motion parameter.
Optionally, the generating the pixel splitting result according to the identification result and the motion parameters includes: generating the motion parameters from the identification result by using the Horn-Schunck optical flow field algorithm, wherein the motion parameters include a motion speed and a motion direction; and performing pixel splitting on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result.
Specifically, in order to convert the boundary-information identification result into the pixel splitting result, the embodiment of the application generates the motion parameters from the identification result by using the Horn-Schunck optical flow field algorithm, the motion parameters including a motion speed and a motion direction, and then performs pixel splitting on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result. For the embodiment of the application, optical flow field technology lets all pixel points in consecutive frames form a two-dimensional (2D) instantaneous velocity field at a given moment, in which each two-dimensional velocity vector is the projection onto the imaging plane of the three-dimensional velocity vector of a visible point in the scene; the optical flow thus carries the motion information of the object as well as information about the three-dimensional structure of the scene, so motion parameters can be detected once scene interference information has been eliminated. Further, the embodiment of the application uses the optical flow method to detect moving objects and obtain the pixel splitting result: in the implementation, each pixel point in the image is assigned a velocity vector (an optical flow), forming an optical flow field. If there is no moving object in the image, the optical flow field is continuous and uniform; if a moving object is present, its optical flow differs from that of the background, the optical flow field is no longer continuous and uniform, and the moving object and its position can therefore be detected.
The current computation of the optical flow field is mainly divided into gradient-based, matching-based, energy-based and phase-based methods, among others; the embodiment of the application does not prescribe which specific computation method is adopted, which is instead determined according to the actual application scenario of the embodiment.
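A compact sketch of the Horn-Schunck iteration named above (NumPy assumed; the derivative scheme, the neighbor-averaging stencil, and the `alpha` and iteration-count defaults are simplified textbook choices, not details from the application):

```python
import numpy as np

def horn_schunck(im1: np.ndarray, im2: np.ndarray,
                 alpha: float = 1.0, n_iter: int = 50):
    """Minimal Horn-Schunck optical flow (global smoothness prior).

    Returns per-pixel flow components (u, v); the motion speed is
    hypot(u, v) and the motion direction arctan2(v, u).
    """
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    Ix = np.gradient(im1, axis=1)  # spatial derivatives
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1                 # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)

    def avg(f):  # 4-neighbor average (periodic boundary, for brevity)
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(n_iter):
        u_avg, v_avg = avg(u), avg(v)
        t = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v

# A vertical bar shifted one pixel to the right: flow should point right.
bar = np.zeros((16, 16))
bar[:, 6:10] = 1.0
u, v = horn_schunck(bar, np.roll(bar, 1, axis=1))
```

For a purely horizontal shift of a vertically uniform pattern, the recovered `u` field is positive near the moving edges while `v` stays zero, matching the speed-and-direction parameters the method feeds into the splitting step.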
Optionally, after the pixel splitting is performed on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result, the method further includes: and classifying and marking the pixel splitting result.
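The classification and marking step is not detailed in the application. One hedged illustration is connected-component labeling of a foreground mask, which marks each split region with a distinct integer label (NumPy assumed; 4-connectivity and the function name `label_regions` are arbitrary illustrative choices):

```python
import numpy as np
from collections import deque

def label_regions(mask: np.ndarray) -> np.ndarray:
    """Mark each 4-connected foreground region of a split mask with a
    distinct integer label (0 = background)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not labels[y, x]:
                current += 1            # start a new region
                labels[y, x] = current
                q = deque([(y, x)])
                while q:                # flood-fill the region
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels

mask = np.zeros((5, 5), dtype=bool)
mask[0:2, 0:2] = True  # first split region
mask[3:5, 3:5] = True  # second, disconnected region
labels = label_regions(mask)
```

Each labeled region can then carry its own class marker in downstream processing.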
Through this embodiment, the technical problem is solved that in the practical application of the prior art the target object portion in an image has complex boundary conditions, and that pixel splitting cannot be performed accurately by whole-pixel identification or a simple decision model alone, so that boundary recognition techniques are needed to achieve fine-grained splitting of pixels.
Example two
Fig. 2 is a block diagram of a pixel splitting apparatus based on image object boundary recognition according to an embodiment of the present application, and as shown in fig. 2, the apparatus includes:
the acquisition module 20 is used for acquiring the original image data.
Specifically, in order to solve the technical problem that in the practical application of the prior art the target object portion in an image often has complex boundary conditions, and that pixel splitting cannot be performed accurately by whole-pixel identification or a simple decision model alone, which requires boundary recognition techniques to achieve fine-grained splitting of pixels, original image data first needs to be acquired by a camera device; the original image data is then collated and transmitted for the pixel splitting processing of the image in later steps of this embodiment.
The splitting module 22 is configured to split the original image data according to a preset algorithm, so as to obtain a split image.
Specifically, on the basis of the acquired original image data, the embodiment of the application splits the original image data according to a preset algorithm, obtaining a split image comprising a plurality of pieces of sub-image data; the split image is then used for boundary recognition and parameter calculation in the subsequent method flow.
And the identifying module 24 is used for identifying object boundary information of the split image by using a Canny operator to obtain an identification result.
Specifically, the embodiment of the application uses the Canny operator to identify the boundary information of the plurality of pieces of sub-image data in the split image, obtaining a boundary identification result for the sub-image data, so that the sub-image data can conveniently be graded, classified and marked subsequently.
In the embodiment of the application, the identification of the split image first requires acquiring the three-direction gradient parameters of the Canny algorithm, including horizontal, vertical and diagonal parameters; the optimized image data is then marked according to a preset hysteresis threshold and these gradient parameters to obtain the boundary marking data.

The Canny algorithm includes noise reduction: no edge detection algorithm performs well on raw data, so the first step is to convolve the raw data with a Gaussian smoothing template, and the resulting image is slightly blurred compared with the original. In this way, the noise of a single pixel has almost no effect on the Gaussian-smoothed image.

The algorithm then searches for gradients. Edges in the image may point in different directions, so the Canny algorithm uses four masks to detect edges in the horizontal, vertical and two diagonal directions. The convolution of the original image with each mask is stored, and for each point the maximum response and the direction of the resulting edge are identified. In this way, a luminance gradient magnitude map and a luminance gradient direction map are generated from the original image.

Finally, edges are tracked. A higher luminance gradient makes a pixel more likely to belong to an edge, but there is no single exact value that determines whether a given gradient constitutes an edge, so Canny uses hysteresis thresholding, which requires two thresholds: a high threshold and a low threshold. Assuming that the significant edges in the image are continuous curves, the blurred portions of a given curve can be tracked while avoiding treating noise pixels, which do not form curves, as edges.
The embodiment of the application starts from the larger threshold to identify the true edges that can be trusted, and then, using the previously derived direction information, tracks entire edges through the image starting from these true edges. During tracking, the embodiment uses the smaller threshold, so that blurred portions of a curve can be followed until the tracking returns to its starting point. Once this is done, a binary image is obtained in which each point indicates whether or not it is an edge point.
Optionally, the apparatus further includes: and the processing module is used for carrying out refinement processing on the identification result.
Specifically, in order that the boundary-information identification result, that is, the object boundary information of the split image, can be further analyzed and classified, the identification result needs to undergo refinement processing, which includes sharpening processing and noise reduction processing; the noise-reduced image data is then processed and analyzed, which facilitates the subsequent combination with the motion parameters to generate the final pixel splitting result.
And the generating module 26 is configured to generate a pixel splitting result according to the identification result and the motion parameter.
Optionally, the generating module includes: a generation unit configured to generate the motion parameters from the identification result by using the Horn-Schunck optical flow field algorithm, wherein the motion parameters include a motion speed and a motion direction; and a splitting unit configured to perform pixel splitting on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result.
Specifically, in order to convert the boundary-information identification result into the pixel splitting result, the embodiment of the application generates the motion parameters from the identification result by using the Horn-Schunck optical flow field algorithm, the motion parameters including a motion speed and a motion direction, and then performs pixel splitting on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result. For the embodiment of the application, optical flow field technology lets all pixel points in consecutive frames form a two-dimensional (2D) instantaneous velocity field at a given moment, in which each two-dimensional velocity vector is the projection onto the imaging plane of the three-dimensional velocity vector of a visible point in the scene; the optical flow thus carries the motion information of the object as well as information about the three-dimensional structure of the scene, so motion parameters can be detected once scene interference information has been eliminated. Further, the embodiment of the application uses the optical flow method to detect moving objects and obtain the pixel splitting result: in the implementation, each pixel point in the image is assigned a velocity vector (an optical flow), forming an optical flow field. If there is no moving object in the image, the optical flow field is continuous and uniform; if a moving object is present, its optical flow differs from that of the background, the optical flow field is no longer continuous and uniform, and the moving object and its position can therefore be detected.
The current computation of the optical flow field is mainly divided into gradient-based, matching-based, energy-based and phase-based methods, among others; the embodiment of the application does not prescribe which specific computation method is adopted, which is instead determined according to the actual application scenario of the embodiment.
Optionally, the generating module further includes: and the classifying unit is used for classifying and marking the pixel splitting result.
Through this embodiment, the technical problem is solved that in the practical application of the prior art the target object portion in an image has complex boundary conditions, and that pixel splitting cannot be performed accurately by whole-pixel identification or a simple decision model alone, so that boundary recognition techniques are needed to achieve fine-grained splitting of pixels.
According to another aspect of the embodiment of the present application, there is further provided a nonvolatile storage medium, where the nonvolatile storage medium includes a stored program, and when the program runs, the program controls a device in which the nonvolatile storage medium is located to execute a pixel splitting method based on image object boundary identification.
Specifically, the method comprises the following steps: acquiring original image data; splitting the original image data according to a preset algorithm to obtain a split image; identifying object boundary information of the split image by using a Canny operator to obtain an identification result; and generating a pixel splitting result according to the identification result and the motion parameters. Optionally, after the identifying object boundary information of the split image by using the Canny operator, the method further includes: performing refinement processing on the identification result. Optionally, the generating the pixel splitting result according to the identification result and the motion parameters includes: generating the motion parameters from the identification result by using the Horn-Schunck optical flow field algorithm, wherein the motion parameters include a motion speed and a motion direction; and performing pixel splitting on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result. Optionally, after the pixel splitting is performed on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result, the method further includes: classifying and marking the pixel splitting result.
According to another aspect of the embodiments of the present application, there is also provided an electronic device including a processor and a memory. The memory stores computer-readable instructions, and the processor is configured to execute the computer-readable instructions, which, when executed, perform a pixel splitting method based on image object boundary recognition.
Specifically, the method includes the following steps: acquiring original image data; splitting the original image data according to a preset algorithm to obtain a split image; identifying object boundary information of the split image by using a Canny operator to obtain an identification result; and generating a pixel splitting result according to the identification result and a motion parameter. Optionally, after the object boundary information of the split image is identified by using the Canny operator, the method further includes: performing refinement processing on the identification result. Optionally, generating the pixel splitting result according to the identification result and the motion parameter includes: generating the motion parameters by using a Horn-Schunck optical flow field algorithm according to the identification result, wherein the motion parameters include a motion speed and a motion direction; and performing pixel splitting on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result. Optionally, after the pixel splitting is performed on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result, the method further includes: classifying and marking the pixel splitting result.
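The Horn-Schunck step and the subsequent splitting by motion speed and direction can be sketched as follows. This NumPy-only example is illustrative and makes several assumptions not found in the application: the smoothness weight alpha, the iteration count, the speed threshold, and the interpretation of "pixel splitting" as labelling each moving pixel by a quantized motion direction are all choices made for the sketch.

```python
import numpy as np

def neighborhood_avg(f):
    """Horn-Schunck neighborhood average (weight 1/6 for 4-neighbors, 1/12 for diagonals)."""
    p = np.pad(f, 1, mode='edge')
    return ((p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 6.0
            + (p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]) / 12.0)

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Estimate per-pixel optical flow (u, v) between two grayscale frames."""
    im1, im2 = im1.astype(np.float64), im2.astype(np.float64)
    Ix = np.gradient(im1, axis=1)   # horizontal spatial derivative
    Iy = np.gradient(im1, axis=0)   # vertical spatial derivative
    It = im2 - im1                  # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg, v_avg = neighborhood_avg(u), neighborhood_avg(v)
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v

def split_by_motion(u, v, speed_thresh=0.05, n_bins=8):
    """'Pixel splitting' interpreted here as labelling each moving pixel
    by a quantized motion direction; static pixels get label -1."""
    speed = np.hypot(u, v)                # motion speed
    direction = np.arctan2(v, u)          # motion direction in (-pi, pi]
    bins = ((direction + np.pi) / (2 * np.pi / n_bins)).astype(int) % n_bins
    return np.where(speed > speed_thresh, bins, -1)

# Two synthetic frames: a bright square shifted one pixel to the right.
f1 = np.zeros((40, 40)); f1[10:20, 10:20] = 1.0
f2 = np.zeros((40, 40)); f2[10:20, 11:21] = 1.0

u, v = horn_schunck(f1, f2)
labels = split_by_motion(u, v)
print(u[12:18, 9:22].mean() > 0)  # rightward motion dominates near the square
```

On the synthetic pair above, the estimated flow points predominantly rightward near the moving square, so the horizontal flow component averaged over that region is positive, while pixels with negligible speed receive the static label -1.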
The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments.
In the foregoing embodiments of the present application, each embodiment has its own emphasis; for a portion that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, Fig. 3 is a schematic diagram of the hardware structure of a terminal device according to an embodiment of the present application. As shown in Fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to implement communication connections between these elements. The memory 33 may include high-speed RAM, and may further include non-volatile memory (NVM), such as at least one magnetic disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of this embodiment.
Optionally, the processor 31 may be implemented as, for example, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, a microprocessor, or another electronic component, and the processor 31 is coupled to the input device 30 and the output device 32 through wired or wireless connections.
Optionally, the input device 30 may include a variety of input devices, for example, at least one of a user-oriented user interface, a device-oriented device interface, a programmable software interface, a camera, and a sensor. Optionally, the device-oriented device interface may be a wired interface for data transmission between devices, or a hardware plug-in interface (such as a USB interface or a serial port) for data transmission between devices. Optionally, the user-oriented user interface may be, for example, control keys, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen or a touch pad with touch sensing functionality) for receiving touch input from the user. Optionally, the programmable software interface may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip. Optionally, a transceiver may be a radio-frequency transceiver chip, a baseband processing chip, or a transceiver antenna with a communication function. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, an audio device, or the like.
In this embodiment, the processor of the terminal device may include functions for executing each module of the data processing apparatus in each of the above devices; for the specific functions and technical effects, reference may be made to the above embodiments, which are not repeated here.
Fig. 4 is a schematic hardware structure of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of the implementation of fig. 3. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the methods of the above-described embodiments.
The memory 42 is configured to store various types of data to support operation on the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, and video. The memory 42 may include random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory.
Optionally, a processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power supply component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The components and the like specifically included in the terminal device are set according to actual requirements, which are not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. The processing component 40 may include one or more processors 41 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 40 may include one or more modules that facilitate interactions between the processing component 40 and other components. For example, processing component 40 may include a multimedia module to facilitate interaction between multimedia component 45 and processing component 40.
The power supply assembly 44 provides power to the various components of the terminal device. Power supply components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for terminal devices.
The multimedia component 45 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a speech recognition mode. The received audio signals may be further stored in the memory 42 or transmitted via the communication component 43. In some embodiments, the audio component 46 further includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing assembly 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: volume button, start button and lock button.
The sensor component 48 includes one or more sensors for providing status assessments of various aspects of the terminal device. For example, the sensor component 48 may detect the open/closed state of the terminal device, the relative positioning of components, and the presence or absence of user contact with the terminal device. The sensor component 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor component 48 may also include a camera or the like.
The communication component 43 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log into a GPRS network and establish communication with a server through the Internet.
From the above, it will be appreciated that the communication component 43, the audio component 46, the input/output interface 47, and the sensor component 48 described in the embodiment of Fig. 4 may be implemented as the input device in the embodiment of Fig. 3.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also be regarded as falling within the protection scope of the present application.

Claims (10)

1. A pixel splitting method based on image object boundary recognition, comprising:
acquiring original image data;
splitting the original image data according to a preset algorithm to obtain a split image;
identifying object boundary information of the split image by using a Canny operator to obtain an identification result;
and generating a pixel splitting result according to the identification result and a motion parameter.
2. The method of claim 1, wherein after identifying object boundary information of the split image using a Canny operator, the method further comprises:
and performing refinement processing on the identification result.
3. The method of claim 1, wherein generating the pixel splitting result according to the identification result and the motion parameter comprises:
generating the motion parameters by using a Horn-Schunck optical flow field algorithm according to the identification result, wherein the motion parameters comprise a motion speed and a motion direction;
and performing pixel splitting on the identification result according to the motion speed and the motion direction to obtain the pixel splitting result.
4. The method according to claim 3, wherein after the pixel splitting is performed on the identification result according to the motion speed and the motion direction, the method further comprises:
and classifying and marking the pixel splitting result.
5. A pixel splitting apparatus based on image object boundary recognition, comprising:
the acquisition module is used for acquiring the original image data;
the splitting module is used for splitting the original image data according to a preset algorithm to obtain a split image;
the recognition module is used for recognizing object boundary information of the split image by using a Canny operator to obtain a recognition result;
and the generating module is used for generating a pixel splitting result according to the identification result and the motion parameter.
6. The apparatus of claim 5, wherein the apparatus further comprises:
and the processing module is used for carrying out refinement processing on the identification result.
7. The apparatus of claim 5, wherein the generating module comprises:
the generation unit is used for generating the motion parameters by using a Horn-Schunck optical flow field algorithm according to the identification result, wherein the motion parameters comprise a motion speed and a motion direction;
and the splitting unit is used for splitting the pixels of the identification result according to the movement speed and the movement direction to obtain the pixel splitting result.
8. The apparatus of claim 7, wherein the generating module further comprises:
and the classifying unit is used for classifying and marking the pixel splitting result.
9. A non-volatile storage medium, characterized in that the non-volatile storage medium comprises a stored program, wherein the program, when run, controls a device in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory stores computer readable instructions, and the processor is configured to execute the computer readable instructions, wherein the computer readable instructions, when executed, perform the method of any one of claims 1 to 4.
CN202310589156.7A 2023-05-23 2023-05-23 Pixel splitting method and device based on image object boundary recognition Pending CN116630643A (en)

Publications (1)

Publication Number Publication Date
CN116630643A 2023-08-22


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103885057A (en) * 2014-03-20 2014-06-25 西安电子科技大学 Self-adaptation variable-sliding-window multi-target tracking method
CN113808037A (en) * 2021-09-02 2021-12-17 深圳东辉盛扬科技有限公司 Image optimization method and device
CN114581855A (en) * 2022-04-29 2022-06-03 深圳格隆汇信息科技有限公司 Information collection method and system based on big data
CN115249241A (en) * 2022-07-26 2022-10-28 武汉逸飞激光股份有限公司 Gluing defect detection method and device
CN115426525A (en) * 2022-09-05 2022-12-02 北京拙河科技有限公司 High-speed moving frame based linkage image splitting method and device
CN115527045A (en) * 2022-09-21 2022-12-27 北京拙河科技有限公司 Image identification method and device for snow field danger identification
CN115797300A (en) * 2022-12-06 2023-03-14 珠海市睿晶聚源科技有限公司 Edge detection method and device based on adaptive gradient threshold canny operator
CN115797770A (en) * 2022-12-06 2023-03-14 中国人民解放军海军工程大学 Continuous image target detection method, system and terminal considering relative movement of target

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HE Wei et al.: "Moving Object Detection Method Combining Motion Boundary and Sparse Optical Flow", Journal of Chinese Computer Systems, vol. 38, no. 03, 31 March 2017 (2017-03-31), pages 635-639 *
BO Yihang: "Interactive Art Design for Virtual Space", vol. 1, 31 December 2020, China Theatre Press, pages 101-107 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination