| 
Image Processing


Taxonomy Path

Top > Science > Technology > Information Technology > Software > Image Processing


https://en.wikipedia.org/wiki/Image_processing

This article is about mathematical processing of digital images. For artistic processing of images, see Image editing. For compression algorithms, see Image compression.

Not to be confused with Analog image processing.

Digital image processing is the use of a digital computer to process digital images through an algorithm.[1][2] As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems. The generation and development of digital image processing have been shaped mainly by three factors: first, the development of computers; second, the development of mathematics (especially the creation and improvement of discrete mathematics theory); and third, the growing demand for a wide range of applications in environment, agriculture, military, industry and medical science.

History

Further information: Digital image § History, and Digital imaging § History

Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s, at Bell Laboratories, the Jet Propulsion Laboratory, Massachusetts Institute of Technology, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement.[3] The purpose of early image processing was to improve the quality of an image for human viewers. In image processing, the input is a low-quality image, and the output is an image with improved quality. Common image processing tasks include image enhancement, restoration, encoding, and compression. The first successful application was at the American Jet Propulsion Laboratory (JPL), which used image processing techniques such as geometric correction, gradation transformation and noise removal on the thousands of lunar photos sent back by the space probe Ranger 7 in 1964, taking into account the position of the Sun and the environment of the Moon. The successful computer mapping of the Moon's surface was a landmark result. Later, more complex image processing was performed on the nearly 100,000 photos sent back by the spacecraft, yielding a topographic map, color map and panoramic mosaic of the Moon, which achieved extraordinary results and laid a solid foundation for human landing on the Moon.[4]

The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. This led to images being processed in real-time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computer-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method, but also the cheapest.

Image sensors

Main article: Image sensor

The basis for modern image sensors is metal–oxide–semiconductor (MOS) technology,[5] which originates from the invention of the MOSFET (MOS field-effect transistor) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959.[6] This led to the development of digital semiconductor image sensors, including the charge-coupled device (CCD) and later the CMOS sensor.[5]

The charge-coupled device was invented by Willard S. Boyle and George E. Smith at Bell Labs in 1969.[7] While researching MOS technology, they realized that an electric charge was the analog of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next.[5] The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.[8]

The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels.[9][10] The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985.[11] The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993.[12] By 2007, sales of CMOS sensors had surpassed CCD sensors.[13]

MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5 µm NMOS integrated circuit sensor chip.[14][15] Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors.[16][17]

Image compression

Main article: Image compression

An important development in digital image compression technology was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed in 1972.[18] DCT compression became the basis for JPEG, which was introduced by the Joint Photographic Experts Group in 1992.[19] JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format on the Internet.[20] Its highly efficient DCT compression algorithm was largely responsible for the wide proliferation of digital images and digital photos,[21] with several billion JPEG images produced every day as of 2015.[22]

Medical imaging techniques produce very large amounts of data, especially from CT, MRI and PET modalities. As a result, storage and communications of electronic image data are prohibitive without the use of compression.[23][24] JPEG 2000 image compression is used by the DICOM standard for storage and transmission of medical images. The cost and feasibility of accessing large image data sets over low or various bandwidths are further addressed by use of another DICOM standard, called JPIP, to enable efficient streaming of the JPEG 2000 compressed image data.[25]

Digital signal processor (DSP)

Main article: Digital signal processor

Electronic signal processing was revolutionized by the wide adoption of MOS technology in the 1970s.[26] MOS integrated circuit technology was the basis for the first single-chip microprocessors and microcontrollers in the early 1970s,[27] and then the first single-chip digital signal processor (DSP) chips in the late 1970s.[28][29] DSP chips have since been widely used in digital image processing.[28]

The discrete cosine transform (DCT) image compression algorithm has been widely implemented in DSP chips, with many companies developing DSP chips based on DCT technology. DCTs are widely used for encoding, decoding, video coding, audio coding, multiplexing, control signals, signaling, analog-to-digital conversion, formatting luminance and color differences, and color formats such as YUV444 and YUV411. DCTs are also used for encoding operations such as motion estimation, motion compensation, inter-frame prediction, quantization, perceptual weighting, entropy encoding, variable encoding, and motion vectors, and decoding operations such as the inverse operation between different color formats (YIQ, YUV and RGB) for display purposes. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder chips.[30]

Medical imaging

Further information: Medical imaging

In 1972, Godfrey Hounsfield, an engineer at the British company EMI, invented the X-ray computed tomography (CT) device for head diagnosis. CT is based on projections through a cross-section of the human head, which are processed by computer to reconstruct the cross-sectional image; this is called image reconstruction. In 1975, EMI successfully developed a whole-body CT device, which obtained clear tomographic images of various parts of the human body. In 1979, this diagnostic technique won the Nobel Prize.[4] Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994.[31]

As of 2010, 5 billion medical imaging studies had been conducted worldwide.[32][33] Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States.[34] Medical imaging equipment is manufactured using technology from the semiconductor industry, including CMOS integrated circuit chips, power semiconductor devices, sensors such as image sensors (particularly CMOS sensors) and biosensors, and processors such as microcontrollers, microprocessors, digital signal processors, media processors and system-on-chip devices. As of 2015, annual shipments of medical imaging chips amount to 46 million units and $1.1 billion.[35][36]

Tasks

Digital image processing allows the use of much more complex algorithms, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means.

In particular, digital image processing is a concrete application of, and a practical technology based on:

  • Classification
  • Feature extraction
  • Multi-scale signal analysis
  • Pattern recognition
  • Projection

Some techniques which are used in digital image processing include:

  • Anisotropic diffusion
  • Hidden Markov models
  • Image editing
  • Image restoration
  • Independent component analysis
  • Linear filtering
  • Neural networks
  • Partial differential equations
  • Pixelation
  • Point feature matching
  • Principal components analysis
  • Self-organizing maps
  • Wavelets

Digital image transformations

Filtering

Digital filters are used to blur and sharpen digital images. Filtering can be performed by:

  • convolution with specifically designed kernels (filter array) in the spatial domain[37]
  • masking specific frequency regions in the frequency (Fourier) domain

The following examples show both methods:[38]

Filter type, kernel or mask, and example:

Original image: \begin{bmatrix}0&0&0\\0&1&0\\0&0&0\end{bmatrix} (identity kernel)

Spatial lowpass: \frac{1}{9} \times \begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}

Spatial highpass: \begin{bmatrix}0&-1&0\\-1&4&-1\\0&-1&0\end{bmatrix}

Fourier representation, pseudo-code:

image = checkerboard

F = Fourier Transform of image

Show Image: log(1 + Absolute Value(F))

Fourier lowpass: mask the high-frequency region of F, then transform back.

Fourier highpass: mask the low-frequency region of F, then transform back.

Image padding in Fourier domain filtering

Images are typically padded before being transformed to the Fourier space; the highpass filtered images below illustrate the consequences of different padding techniques:

Zero padded vs. repeated edge padded (example images).

Notice that the highpass filter shows extra edges when zero padded compared to the repeated edge padding.

Filtering code examples

MATLAB example for spatial domain highpass filtering.

img = checkerboard(20);                 % generate checkerboard test image
% ************************** SPATIAL DOMAIN ***************************
klaplace = [0 -1 0; -1 5 -1; 0 -1 0];   % Laplacian filter kernel
X = conv2(img, klaplace);               % convolve test img with 3x3 Laplacian kernel
figure()
imshow(X, [])                           % show Laplacian filtered image
title('Laplacian Edge Detection')
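For comparison, the frequency-domain approach from the table above can be sketched in MATLAB as well. This is a minimal, illustrative sketch (it assumes the Image Processing Toolbox for checkerboard, and the ideal-mask cutoff of 30 is an arbitrary choice):

img = checkerboard(20);                 % same 160x160 test image
% ************************** FOURIER DOMAIN ***************************
[r, c] = size(img);
P = 2*r; Q = 2*c;                       % padded size reduces wrap-around artifacts
F = fftshift(fft2(img, P, Q));          % zero-padded, centered 2-D Fourier transform
[u, v] = meshgrid(-Q/2 : Q/2-1, -P/2 : P/2-1);
D = sqrt(u.^2 + v.^2);                  % distance from the zero-frequency center
H = double(D > 30);                     % ideal highpass mask (cutoff is arbitrary)
Y = real(ifft2(ifftshift(F .* H)));     % mask low frequencies, transform back
Y = Y(1:r, 1:c);                        % crop the padding away
figure()
imshow(Y, [])
title('Fourier Highpass Filtering')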

Affine transformations

Affine transformations enable basic image transformations including scaling, rotation, translation, mirroring and shearing, as shown in the following examples:[38]

Transformation name and affine matrix:

Identity: \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}

Reflection: \begin{bmatrix}-1&0&0\\0&1&0\\0&0&1\end{bmatrix}

Scale: \begin{bmatrix}c_x&0&0\\0&c_y&0\\0&0&1\end{bmatrix} with c_x = 2, c_y = 1

Rotate: \begin{bmatrix}\cos\theta&\sin\theta&0\\-\sin\theta&\cos\theta&0\\0&0&1\end{bmatrix} where \theta = \pi/6 = 30°

Shear: \begin{bmatrix}1&c_x&0\\c_y&1&0\\0&0&1\end{bmatrix} with c_x = 0.5, c_y = 0

To apply the affine matrix to an image, the image is converted to a matrix in which each entry corresponds to the pixel intensity at that location. Then each pixel's location can be represented as a vector indicating the coordinates of that pixel in the image, [x, y], where x and y are the row and column of a pixel in the image matrix. This allows the coordinate to be multiplied by an affine-transformation matrix, which gives the position that the pixel value will be copied to in the output image.

However, to allow transformations that include translations, three-dimensional homogeneous coordinates are needed. The third dimension is set to a non-zero constant, usually 1, so that the new coordinate is [x, y, 1]. This allows the coordinate vector to be multiplied by a 3 × 3 matrix, enabling translation shifts.
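For example, a translation by (t_x, t_y), which is impossible with a 2 × 2 matrix, becomes a single matrix multiplication in homogeneous coordinates:

\begin{bmatrix}1&0&t_x\\0&1&t_y\\0&0&1\end{bmatrix} \begin{bmatrix}x\\y\\1\end{bmatrix} = \begin{bmatrix}x+t_x\\y+t_y\\1\end{bmatrix}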

Because matrix multiplication is associative, multiple affine transformations can be combined into a single affine transformation by multiplying the matrix of each individual transformation in the order that the transformations are done. This results in a single matrix that, when applied to a point vector, gives the same result as all the individual transformations performed on the vector [x, y, 1] in sequence. Thus a sequence of affine transformation matrices can be reduced to a single affine transformation matrix.

For example, 2-dimensional coordinates only allow rotation about the origin (0, 0). But 3-dimensional homogeneous coordinates can be used to first translate any point to (0, 0), then perform the rotation, and lastly translate back from (0, 0) to the original point (the opposite of the first translation). These three affine transformations can be combined into a single matrix, thus allowing rotation around any point in the image.[39]
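A minimal MATLAB sketch of this composition (the rotation center (cx, cy), the angle, and the sample point are illustrative values; the matrices follow the column-vector convention used above):

theta = pi/6;  cx = 50;  cy = 40;       % rotate 30 degrees about (50, 40)
T1 = [1 0 -cx; 0 1 -cy; 0 0 1];         % translate the center to the origin
R  = [cos(theta) sin(theta) 0; -sin(theta) cos(theta) 0; 0 0 1];   % rotation
T2 = [1 0 cx; 0 1 cy; 0 0 1];           % translate back
M  = T2 * R * T1;                       % single combined affine matrix
p  = [60; 45; 1];                       % a pixel coordinate in homogeneous form
q  = M * p                              % where that pixel maps to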

Image denoising with morphology

Mathematical morphology is suitable for denoising images. Structuring elements are important in mathematical morphology.

The following examples are about structuring elements. The denoising functions, an example image I, and a structuring element B are shown below and in the table.

e.g. I' = \begin{bmatrix}45&50&65\\40&60&55\\25&15&5\end{bmatrix}, \quad B = \begin{bmatrix}1&2&1\\2&1&1\\1&0&3\end{bmatrix}

Define Dilation(I, B)(i, j) = \max\{I(i+m, j+n) + B(m, n)\}. Let Dilation(I, B) = D(I, B).

D(I', B)(1, 1) = \max(45+1, 50+2, 65+1, 40+2, 60+1, 55+1, 25+1, 15+0, 5+3) = 66

Define Erosion(I, B)(i, j) = \min\{I(i+m, j+n) - B(m, n)\}. Let Erosion(I, B) = E(I, B).

E(I', B)(1, 1) = \min(45-1, 50-2, 65-1, 40-2, 60-1, 55-1, 25-1, 15-0, 5-3) = 2

After dilation: I' = \begin{bmatrix}45&50&65\\40&66&55\\25&15&5\end{bmatrix} \quad After erosion: I' = \begin{bmatrix}45&50&65\\40&2&55\\25&15&5\end{bmatrix}
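These two center values can be verified with a few lines of MATLAB (a minimal check using the example matrices above):

I = [45 50 65; 40 60 55; 25 15 5];
B = [1 2 1; 2 1 1; 1 0 3];
dilated_center = max(I(:) + B(:))       % returns 66, matching D(I', B)(1, 1)
eroded_center  = min(I(:) - B(:))       % returns 2, matching E(I', B)(1, 1)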

Opening is simply erosion followed by dilation, while closing is the reverse. In practice, D(I, B) and E(I, B) can be implemented by convolution-style sliding-window passes over the image.

Structuring element / Mask / Code example

Original image (no mask). Use MATLAB to read the original image:

original = imread('scene.jpg');
image = rgb2gray(original);
[r, c, channel] = size(image);
se = logical([1 1 1; 1 1 1; 1 1 1]);
[p, q] = size(se);
halfH = floor(p/2);
halfW = floor(q/2);
time = 3;   % denoising 3 times with each method

Original lotus
Dilation with mask \begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}. Use MATLAB to dilate:

imwrite(image, 'scene_dil.jpg');
extractmax = zeros(size(image), class(image));
for i = 1 : time
    dil_image = imread('scene_dil.jpg');
    for col = (halfW + 1) : (c - halfW)
        for row = (halfH + 1) : (r - halfH)
            dpointD = row - halfH;
            dpointU = row + halfH;
            dpointL = col - halfW;
            dpointR = col + halfW;
            dneighbor = dil_image(dpointD:dpointU, dpointL:dpointR);
            filter = dneighbor(se);
            extractmax(row, col) = max(filter);
        end
    end
    imwrite(extractmax, 'scene_dil.jpg');
end

Denoising picture with dilation method
Erosion with mask \begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}. Use MATLAB to erode:

imwrite(image, 'scene_ero.jpg');
extractmin = zeros(size(image), class(image));
for i = 1 : time
    ero_image = imread('scene_ero.jpg');
    for col = (halfW + 1) : (c - halfW)
        for row = (halfH + 1) : (r - halfH)
            pointDown = row - halfH;
            pointUp = row + halfH;
            pointLeft = col - halfW;
            pointRight = col + halfW;
            neighbor = ero_image(pointDown:pointUp, pointLeft:pointRight);
            filter = neighbor(se);
            extractmin(row, col) = min(filter);
        end
    end
    imwrite(extractmin, 'scene_ero.jpg');
end

Opening with mask \begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}. Use MATLAB for opening (dilation applied to the eroded image):

imwrite(extractmin, 'scene_opening.jpg');
extractopen = zeros(size(image), class(image));
for i = 1 : time
    dil_image = imread('scene_opening.jpg');
    for col = (halfW + 1) : (c - halfW)
        for row = (halfH + 1) : (r - halfH)
            dpointD = row - halfH;
            dpointU = row + halfH;
            dpointL = col - halfW;
            dpointR = col + halfW;
            dneighbor = dil_image(dpointD:dpointU, dpointL:dpointR);
            filter = dneighbor(se);
            extractopen(row, col) = max(filter);
        end
    end
    imwrite(extractopen, 'scene_opening.jpg');
end

Closing with mask \begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}. Use MATLAB for closing (erosion applied to the dilated image):

imwrite(extractmax, 'scene_closing.jpg');
extractclose = zeros(size(image), class(image));
for i = 1 : time
    ero_image = imread('scene_closing.jpg');
    for col = (halfW + 1) : (c - halfW)
        for row = (halfH + 1) : (r - halfH)
            dpointD = row - halfH;
            dpointU = row + halfH;
            dpointL = col - halfW;
            dpointR = col + halfW;
            dneighbor = ero_image(dpointD:dpointU, dpointL:dpointR);
            filter = dneighbor(se);
            extractclose(row, col) = min(filter);
        end
    end
    imwrite(extractclose, 'scene_closing.jpg');
end

Denoising picture with closing method

Applications

Further information: Digital imaging and Applications of computer vision

Digital camera images

Digital cameras generally include specialized digital image processing hardware – either dedicated chips or added circuitry on other chips – to convert the raw data from their image sensor into a color-corrected image in a standard image file format. Additional post-processing techniques increase edge sharpness or color saturation to create more natural-looking images.

Film

Westworld (1973) was the first feature film to use digital image processing, to pixellate photography to simulate an android's point of view.[40] Image processing is also vastly used to produce the chroma key effect that replaces the background of actors with natural or artistic scenery.

Face detection

Face detection process

Face detection can be implemented with mathematical morphology, the discrete cosine transform (DCT), and horizontal projection.

Feature-based method

The feature-based method of face detection uses skin tone, edge detection, face shape, and features of a face (like eyes, mouth, etc.) to achieve face detection. Skin tone, face shape, and all the unique elements that only human faces have can be described as features.

Process explanation

  1. Given a batch of face images, first extract the skin tone range by sampling face images. The skin tone range is simply a skin filter.
    1. The structural similarity index measure (SSIM) can be applied to compare images in terms of extracting the skin tone.
    2. Normally, HSV or RGB color spaces are suitable for the skin filter. E.g. in HSV mode, the skin tone range is [0, 48, 50] ~ [20, 255, 255]; a sketch of this filtering step follows this list.
  2. After filtering images with skin tone, to get the face edge, morphology and DCT are used to remove noise and fill in missing skin areas.
    1. The opening or closing method can be used to fill in missing skin.
    2. DCT is used to reject objects with skin-like tone, since human faces always have higher texture.
    3. The Sobel operator or other operators can be applied to detect the face edge.
  3. To position human features like the eyes, projecting the image and finding the peaks of the projection histogram helps to get detailed features like the mouth, hair, and lips.
    1. Projection simply projects the image to reveal high-variation regions, which usually correspond to feature positions.
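A minimal MATLAB sketch of the skin-tone filtering step, assuming the Image Processing Toolbox. The file name is a hypothetical placeholder, and the thresholds rescale the [0, 48, 50] ~ [20, 255, 255] range above (which uses 0–179 hue and 0–255 saturation/value conventions) to MATLAB's 0–1 HSV ranges:

rgb = imread('face.jpg');               % hypothetical input image
hsv = rgb2hsv(rgb);                     % convert to the HSV color space
skin = hsv(:,:,1) <= 20/179 & ...       % hue within the skin tone range
       hsv(:,:,2) >= 48/255 & ...       % saturation above the lower bound
       hsv(:,:,3) >= 50/255;            % value above the lower bound
se = strel('square', 3);
skin = imclose(imopen(skin, se), se);   % morphology cleans up the mask
imshow(skin)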

Improvement of image quality method

Image quality can be influenced by camera vibration, over-exposure, an overly centralized gray level distribution, noise, etc. For example, noise can be removed by the smoothing method, while a gray level distribution problem can be improved by histogram equalization.

Smoothing method

In drawing, if some color is unsatisfactory, one takes some of the colors around it and averages them. This is an easy way to think of the smoothing method.

The smoothing method can be implemented with a mask and convolution. Take the small image and mask below as an instance.

image = \begin{bmatrix}2&5&6&5\\3&1&4&6\\1&28&30&2\\7&3&2&2\end{bmatrix}

mask = \begin{bmatrix}1/9&1/9&1/9\\1/9&1/9&1/9\\1/9&1/9&1/9\end{bmatrix}

After convolution and smoothing, image = \begin{bmatrix}2&5&6&5\\3&9&10&6\\1&9&9&2\\7&3&2&2\end{bmatrix}

Observe image[1, 1], image[1, 2], image[2, 1], and image[2, 2].

The original pixel values are 1, 4, 28, and 30. After the smoothing mask, they become 9, 10, 9, and 9 respectively.

new image[1, 1] = \tfrac{1}{9} \cdot (image[0,0] + image[0,1] + image[0,2] + image[1,0] + image[1,1] + image[1,2] + image[2,0] + image[2,1] + image[2,2])

new image[1, 1] = round(\tfrac{1}{9} \cdot (2+5+6+3+1+4+1+28+30)) = 9

new image[1, 2] = round(\tfrac{1}{9} \cdot (5+6+5+1+4+6+28+30+2)) = 10

new image[2, 1] = round(\tfrac{1}{9} \cdot (3+1+4+1+28+30+7+3+2)) = 9

new image[2, 2] = round(\tfrac{1}{9} \cdot (1+4+6+28+30+2+3+2+2)) = 9
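The same numbers can be reproduced in MATLAB with conv2 (a small sketch; note that rounding, rather than truncation, matches the values above, and only the four interior entries are kept in the worked example):

img  = [2 5 6 5; 3 1 4 6; 1 28 30 2; 7 3 2 2];
mask = ones(3) / 9;                     % 3x3 averaging mask
sm   = round(conv2(img, mask, 'same'))  % interior entries are 9, 10, 9, 9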

Gray Level Histogram method

Generally, given the gray level histogram of an image as below, changing the histogram to a uniform distribution is what is usually called histogram equalization.

Figure 1: gray level histogram H. Figure 2: uniform distribution G.

In discrete time, the area of the gray level histogram is \sum_{i=0}^{k} H(p_i) (see figure 1), while the area of the uniform distribution is \sum_{i=0}^{k} G(q_i) (see figure 2). It is clear that the area will not change, so \sum_{i=0}^{k} H(p_i) = \sum_{i=0}^{k} G(q_i).

From the uniform distribution, the probability of q_i is \tfrac{N^2}{q_k - q_0} for 0 < i < k.

In continuous time, the equation is \int_{q_0}^{q} \tfrac{N^2}{q_k - q_0}\,ds = \int_{p_0}^{p} H(s)\,ds.
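Evaluating the left-hand side and solving for q gives the equalization mapping:

\frac{N^2}{q_k - q_0}(q - q_0) = \int_{p_0}^{p} H(s)\,ds \quad\Longrightarrow\quad q = q_0 + \frac{q_k - q_0}{N^2} \int_{p_0}^{p} H(s)\,ds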

Moreover, based on the definition of a function, the gray level histogram method amounts to finding a function f that satisfies f(p) = q.
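In MATLAB, such an f is provided by histeq (a minimal sketch assuming the Image Processing Toolbox; the file name is a hypothetical placeholder):

img = rgb2gray(imread('scene.jpg'));    % hypothetical input image
eq  = histeq(img);                      % remap gray levels toward a uniform histogram
figure()
subplot(2, 2, 1), imshow(img), title('Original')
subplot(2, 2, 2), imhist(img), title('Original histogram')
subplot(2, 2, 3), imshow(eq),  title('Equalized')
subplot(2, 2, 4), imhist(eq),  title('Equalized histogram')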

Improvement method: smoothing. Issue: noise.

Before improvement: with MATLAB, salt & pepper noise with parameter 0.01 is added to the original image in order to create a noisy image.

Process:

  1. Read the image and convert it to grayscale.
  2. Convolve the grayscale image with the mask \begin{bmatrix}1/9&1/9&1/9\\1/9&1/9&1/9\\1/9&1/9&1/9\end{bmatrix}.
  3. The denoised image is the result of step 2.

Improvement method: histogram equalization. Issue: gray level distribution too centralized. Process: refer to histogram equalization above.

See also

References

  1. A Brief, Early History of Computer Graphics in Film Archived 17 July 2012 at the Wayback Machine, Larry Yaeger, 16 August 2002 (last update), retrieved 24 March 2010

Further reading

External links


Image Analysis

Background Removal Tools


Links

https://en.wikipedia.org/wiki/Category:Image_processing

Subcategories

0–9

► 3D computer graphics (16 C, 134 P, 2 F)

A B

C

► Computer graphic artifacts (1 C, 14 P)

D

► Digital photography (4 C, 123 P)

E F

► Feature detection (computer vision) (47 P)

► Graphics file formats (5 C, 236 P)

G H I

► Image compression (1 C, 29 P)

► Image noise reduction techniques (13 P)

► Image processors (1 C, 5 P)

► Image segmentation (25 P)

► Interpolation (2 C, 53 P)

J K L M

► Mathematical morphology (20 P)

► Medical imaging (10 C, 168 P, 1 F)

N O P Q R

S

► Sony image processing (1 C, 15 P)

► Stereophotogrammetry (11 P)

T

U V

W

► Wavelets (3 C, 55 P)

X Y Z

Index

Image processing

0–9

3D reconstruction from multiple images

A

Abel transform

ActionShot

Acutance

Adaptive histogram equalization

Alpha to coverage

Analog image processing

Anisotropic diffusion

Spatial anti-aliasing

B

Background Removal

Background subtraction

Bicubic interpolation

Binary image

Black balance

Blend modes

Boundary vector field

Box blur

C

Camera interface

The Cancer Imaging Archive (TCIA)

Charge-coupled device

Chirplet transform

Circular convolution

Circular thresholding

Clone tool

Closest point method

Co-occurrence matrix

Color

Color balance

Color image

Color image pipeline

Color layout descriptor

Color mapping

Color moments

Color normalization

Color quantization

Color Space

Color structure code

Color vision

Comparison gallery of image scaling algorithms

Computational photography (artistic)

Computer Vision

Foveated rendering

Human visual system model

Foveated imaging

Contextual image classification

Contrast-to-noise ratio

Convolution

Curve (tonality)

Curvelet

D

Data cube

Deblurring

Deconvolution

Decorrelation

Deep image compositing

Deeplearning4j

Digital image

Digital image processing

Direct Graphics Access

Directional Cubic Convolution Interpolation

Distance transform

Document layout analysis

Document mosaicing

Drizzle (image processing)

Dynamic imaging

E

Edge detection

Edge enhancement

Edge-preserving smoothing

Elongatedness

Epitome (data processing)

Epsilon photography

Erosion (morphology)

Error diffusion

Exposure Fusion

Extended Depth of Field

F

False radiosity

Fermi filter

Fiducial marker

Flat-field correction

Floyd–Steinberg dithering

Focus recovery based on the linear canonical transform

Focus stacking

Framebuffer

Free boundary condition

G

Gabor filter

Gaussian blur

Generalised Hough transform

Geo warping

Gigamacro

Gradient-domain image processing

Grassfire transform

Gray level size zone matrix

H

Halide (programming language)

HDCI

Heat kernel signature

Histogram equalization

Histogram matching

Homomorphic filtering

Hqx

I

Illumination (image)

Image analogy

Image derivatives

Image differencing

Image editing

Image formation

Image geometry correction

Image gradient

Image histogram

Image rectification

Image resolution

Image restoration

Image scaling

Image stitching

Image warping

Imaging phantom

Imaging technology

Implicit Shape Model

Inpainting

Intrinsic dimension

Iterative reconstruction

J

Jaggies

K

Kernel (image processing)

Kuwahara filter

L

Landweber iteration

Layers (digital image editing)

LCD crosstalk

Lenna

Level set (data structures)

Level set method

Line pair

List of Fourier-related transforms

List of transforms

Lucy–Hook coaddition method

M

Masking (in art)

Mathematical morphology

Medical imaging

Medical intelligence and language engineering lab

Microscope image processing

Minimum resolvable contrast

Multi-scale approaches

Multiple buffering

Multiple Satellite Imaging

Multisample anti-aliasing

N

N-jet

Negacyclic convolution

Neighborhood operation

Network Abstraction Layer

Non-local means

Non-separable wavelet

Normalization (image processing)

O

Object (image processing)

Object removal

Objective vision

Opponent process

Optical granulometry

Ordered dithering

Oversampled binary image sensor

P

Pandemonium architecture

Phase congruency

Phase stretch transform

Photoanalysis

Picture function

Pixels

Pixel art scaling algorithms

Pixel aspect ratio

Poisson image editing

Polynomial texture mapping

Principal geodesic analysis

Progressive Graphics File

Projection-slice theorem

Pulse-coupled networks

Pyramid (image processing)

R

Randomized Hough transform

RapidMiner

Reconstruction from Projections

Resel

Resolution enhancement technology

Richardson–Lucy deconvolution

Rutt/Etra Video Synthesizer

S

Saliency map

Scale space

Scale space implementation

Scale-space axioms

Scan line

Scene statistics

Scientific Working Group – Imaging Technology

Scribe Software

Seam carving

Separable filter

Shadow and highlight enhancement

Shape analysis (digital geometry)

Shape factor (image analysis and microscopy)

Shearlet

Shepp–Logan phantom

Signal-to-noise ratio (imaging)

Signal transfer function

Single particle analysis

Smoothing

Irwin Sobel

Softwarp

Spectral shape analysis

Spherical basis

Stairstep interpolation

Standard test image

Steerable filter

Structural Similarity

Sub-pixel resolution

Super-resolution imaging

Super-resolution optical fluctuation imaging

Supersampling

T

Teleradiology

Template matching

Tensor operator

Time delay and integration

Topological skeleton

Total variation denoising

Triggertrap

U

Uncropping

Unimodal thresholding

Unsharp masking

V

Video synopsis

VisionMap A3 Digital Mapping System

Visual computing

VisualRank

Y

YaDICs


Categories:

Computer graphics

Computer graphics algorithms

Computer vision

Digital signal processing

Signal processing

Multidimensional signal processing

Applied statistics
