Question bank for digital image processing
Brightness is a subjective, psychological measure of perceived intensity; it is practically impossible to measure objectively because it is relative. For example, a burning candle in a darkened room appears bright to the viewer, but the same candle does not appear bright in full sunlight.
Both sounds and images can be considered as signals, in one and two dimensions respectively. Sound can be described as a fluctuation of the acoustic pressure in time, while images are spatial distributions of values of luminance or color, the latter being described by its RGB or HSB components. Any signal, in order to be processed by numerical computing devices, has to be reduced to a sequence of discrete samples, and each sample must be represented using a finite number of bits. The first operation is called sampling, and the second is called quantization, i.e., the discretization of the real values taken by the samples.

2.1 1-D: Sounds

Sampling is, for one-dimensional signals, the operation that transforms a continuous-time signal (such as, for instance, the air pressure fluctuation at the entrance of the ear canal) into a discrete-time signal, that is, a sequence of numbers. The discrete-time signal gives the values of the continuous-time signal read at intervals of T seconds. The reciprocal of the sampling interval is called the sampling rate, F_s = 1/T.
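As a minimal sketch of this operation (Python with NumPy; the 440 Hz sinusoid and the 8 kHz rate are illustrative choices, not taken from the text), sampling simply evaluates the continuous-time signal at the instants nT:

```python
import numpy as np

# A minimal sketch of sampling, assuming a 440 Hz sinusoid stands in for the
# continuous-time pressure signal and Fs = 8 kHz is the chosen sampling rate
# (both are illustrative, not taken from the text).
def continuous_signal(t):
    return np.sin(2 * np.pi * 440.0 * t)

Fs = 8000.0                        # sampling rate (samples per second)
T = 1.0 / Fs                       # sampling interval in seconds
n = np.arange(100)                 # sample indices
s = continuous_signal(n * T)       # discrete-time signal s(n) = s(nT)
```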
In this module we do not explain the theory of sampling, but rather describe its manifestations. For a more extensive yet accessible treatment, we point to the Introduction to Sound Processing. For our purposes, the process of sampling a 1-D signal can be reduced to three facts and a theorem.

Fact 1: The Fourier transform of a discrete-time signal is a function (called the spectrum) of the continuous variable ω, and it is periodic with period 2π. Given a value of ω, the Fourier transform gives back a complex number that can be interpreted as the magnitude and phase (translation in time) of the sinusoidal component at that frequency.
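A small sketch of Fact 1, assuming an arbitrary four-sample sequence: evaluating the discrete-time Fourier transform directly from its definition shows both the 2π periodicity and the magnitude/phase reading of the spectrum.

```python
import numpy as np

# Sketch of Fact 1, assuming an arbitrary four-sample sequence: the DTFT is
# evaluated directly from its definition, X(w) = sum_n x(n) e^{-j w n}.
x = np.array([1.0, 0.5, -0.25, 0.1])

def dtft(x, omega):
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * omega * n))

omega = 1.3
X1 = dtft(x, omega)
X2 = dtft(x, omega + 2 * np.pi)     # the spectrum repeats with period 2*pi
print(np.allclose(X1, X2))          # True
print(abs(X1), np.angle(X1))        # magnitude and phase at omega
```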
Let us assume we have a continuous distribution, on a plane, of values of luminance or, more simply stated, an image. In order to process it using a computer we have to reduce it to a sequence of numbers by means of sampling. There are several ways to sample an image, i.e., to read its values of luminance at discrete points. The simplest is to use a regular grid with spatial steps X and Y. Similarly to what we did for sounds, we define the spatial sampling rates

F_X = 1/X,  F_Y = 1/Y.

As in the one-dimensional case, the sampling of two-dimensional signals, or images, can be described by three facts and a theorem.

Fact 1: The Fourier transform of a discrete-space signal is a function (called the spectrum) of two continuous variables ω_X and ω_Y, and it is periodic in two dimensions with period 2π. Given a couple of values ω_X and ω_Y, the Fourier transform gives back a complex number that can be interpreted as the magnitude and phase (translation in space) of the sinusoidal component at those spatial frequencies.

Fact 2: Sampling the continuous-space signal s(x, y) with the regular grid of steps X, Y gives a discrete-space signal s(m, n) = s(mX, nY), which is a function of the discrete variables m and n.

Fact 3: Sampling a continuous-space signal with spatial frequencies F_X and F_Y gives a discrete-space signal whose spectrum is the periodic replication, along the grid of steps F_X and F_Y, of the original signal spectrum. The Fourier variables ω_X and ω_Y correspond to the frequencies (in cycles per meter) represented by the variables

f_X = ω_X / (2πX)  and  f_Y = ω_Y / (2πY).
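The grid sampling of Fact 2 can be sketched as follows; the luminance pattern (a 2-D sinusoid) and the 1 cm steps are our own assumptions, used only to make the example concrete.

```python
import numpy as np

# Sketch of Fact 2 in two dimensions, assuming an illustrative luminance
# pattern (a 2-D sinusoid at 5 and 3 cycles per meter) and 1 cm grid steps.
def luminance(x, y):
    return 0.5 + 0.5 * np.cos(2 * np.pi * (5.0 * x + 3.0 * y))

X, Y = 0.01, 0.01                  # spatial steps in meters
FX, FY = 1.0 / X, 1.0 / Y          # spatial sampling rates F_X, F_Y

m, n = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
s = luminance(m * X, n * Y)        # discrete-space image s(m, n) = s(mX, nY)
```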
Figure 2 shows an example of the spectrum of a two-dimensional sampled signal. There, the continuous-space signal has all and only the frequency components included in the central hexagon. The hexagonal shape of the spectral support (the region of non-null spectral energy) is merely illustrative. The replicas of the original spectrum are often called spectral images.
Figure 2: Spectrum of a sampled image

Given the above facts, we can have an intuitive understanding of the Sampling Theorem.
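One way to build that intuition (the sampling rate and tone frequencies below are our own choices, not from the text) is to observe aliasing, the phenomenon the theorem is designed to rule out: a sinusoid above half the sampling rate produces exactly the same samples as a lower-frequency one.

```python
import numpy as np

# Illustration of aliasing: with Fs = 8 kHz, a 7 kHz sinusoid (above Fs/2)
# yields exactly the same samples as a 1 kHz sinusoid, so the two cannot be
# distinguished after sampling. Frequencies are illustrative choices.
Fs = 8000.0
n = np.arange(32)
high = np.sin(2 * np.pi * 7000.0 * n / Fs)   # 7 kHz, above Fs/2 = 4 kHz
low = np.sin(2 * np.pi * 1000.0 * n / Fs)    # 1 kHz alias
print(np.allclose(high, -low))               # True: 7 kHz folds onto 1 kHz
```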
With the adjective "digital" we indicate those systems that work on signals represented by numbers, with the (finite) precision that computing systems allow. Up to now we have considered discrete-time and discrete-space signals as if they were collections of infinite-precision numbers, i.e., real numbers. Unfortunately, computers can only represent finite subsets of the rational numbers. This means that our signals are subject to quantization. For our purposes, the most interesting quantization is the linear one, which usually occurs during the conversion of an analog signal into the digital domain. If the memory word dedicated to storing a number is made of b bits, then the range of such a number is discretized into 2^b quantization levels. Any value that falls between two quantization levels is approximated by truncation or by rounding to the closest level. Figure 3 shows an example of quantization with a representation on 3 bits in two's complement.

Figure 3: Sampling and quantization of an analog signal
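A minimal sketch of linear quantization, assuming the input samples are already normalized to the range [-1, 1); with b = 3, the 2^3 = 8 levels match the 3-bit two's-complement codes -4 through +3:

```python
import numpy as np

# Minimal sketch of linear quantization, assuming the input samples are
# already normalized to the range [-1, 1). With b = 3 the 2**3 = 8 levels
# correspond to the two's-complement codes -4 .. +3 mentioned above.
def quantize(x, b=3):
    levels = 2 ** b
    step = 2.0 / levels                                    # width of one level
    codes = np.clip(np.round(x / step), -levels // 2, levels // 2 - 1)
    return codes.astype(int), codes * step                 # code and quantized value

codes, values = quantize(np.array([-0.95, -0.3, 0.0, 0.31, 0.8]))
print(codes)    # [-4 -1  0  1  3]
print(values)   # [-1.   -0.25  0.    0.25  0.75]
```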
The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors:

1 bpp: 2^1 = 2 colors (monochrome)
2 bpp: 2^2 = 4 colors
3 bpp: 2^3 = 8 colors
8 bpp: 2^8 = 256 colors
16 bpp: 2^16 = 65,536 colors ("Highcolor")
24 bpp: 2^24 ≈ 16.8 million colors ("Truecolor")

For color depths of 15 or more bits per pixel, the depth is normally the sum of the bits allocated to each of the red, green, and blue components. Highcolor, usually meaning 16 bpp, normally has five bits for red and blue and six bits for green, as the human eye is more sensitive to errors in green than in the other two primary colors (this 5-6-5 packing is sketched in code after the resolution table below). For applications involving transparency, the 16 bits may instead be divided into five bits each of red, green, and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component. On some systems, 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image).

Selected standard display resolutions include:

Name    Megapixels   Width × Height
CGA     0.064        320 × 200
EGA     0.224        640 × 350
VGA     0.3          640 × 480
SVGA    0.5          800 × 600
XGA     0.8          1024 × 768
SXGA    1.3          1280 × 1024
UXGA    1.9          1600 × 1200
WUXGA   2.3          1920 × 1200
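The bits-per-pixel arithmetic and the 5-6-5 Highcolor packing described above can be sketched as follows (the helper names are ours, not from any particular library):

```python
# Sketch of the bits-per-pixel arithmetic above plus a 5-6-5 "Highcolor"
# packing; the helper names are ours, not from the text.
def num_colors(bpp):
    return 2 ** bpp

for bpp in (1, 2, 3, 8, 16, 24):
    print(bpp, "bpp ->", num_colors(bpp), "colors")

def pack_rgb565(r, g, b):
    # r, g, b are 8-bit components; keep the top 5, 6, and 5 bits respectively
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(hex(pack_rgb565(255, 255, 255)))   # 0xffff (white in 16 bpp Highcolor)
```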
Transform theory plays a fundamental role in image processing, as working with the transform of an image instead of the image itself may give us more insight into its properties. Two-dimensional transforms are applied to image enhancement, restoration, encoding, and description.
5.1.1 One-dimensional signals

For a one-dimensional sequence $\{f(x),\ 0 \le x \le N-1\}$, represented as a vector $\mathbf{f} = [\,f(0)\ \ f(1)\ \ \cdots\ \ f(N-1)\,]^{T}$ of size $N$, a transformation may be written as

$$\mathbf{g} = \mathbf{T}\mathbf{f} \;\;\Leftrightarrow\;\; g(u) = \sum_{x=0}^{N-1} T(u,x)\, f(x), \qquad 0 \le u \le N-1,$$

where $g(u)$ is the transform (or transformation) of $f(x)$, and $T(u,x)$ is the so-called forward transformation kernel. Similarly, the inverse transform is the relation

$$f(x) = \sum_{u=0}^{N-1} I(x,u)\, g(u), \qquad 0 \le x \le N-1,$$

or, written in matrix form,

$$\mathbf{f} = \mathbf{I}\mathbf{g} = \mathbf{T}^{-1}\mathbf{g},$$

where $I(x,u)$ is the so-called inverse transformation kernel. If $\mathbf{I} = \mathbf{T}^{-1} = \mathbf{T}^{T}$
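As one concrete instance of this matrix formulation (the choice of the DFT kernel is our own, made so the result can be checked against np.fft.fft), for N = 4:

```python
import numpy as np

# Sketch of the matrix form g = T f for N = 4. The DFT kernel
# T(u, x) = exp(-2j*pi*u*x/N) is our own illustrative choice of forward
# kernel; its inverse kernel is I(x, u) = exp(+2j*pi*u*x/N) / N.
N = 4
u = np.arange(N).reshape(-1, 1)
x = np.arange(N).reshape(1, -1)
T = np.exp(-2j * np.pi * u * x / N)       # forward transformation matrix
I = np.exp(+2j * np.pi * u * x / N) / N   # inverse transformation matrix, I = T^(-1)

f = np.array([1.0, 2.0, 3.0, 4.0])
g = T @ f                                 # g(u) = sum_x T(u, x) f(x)
print(np.allclose(g, np.fft.fft(f)))      # True: matches NumPy's unnormalized DFT
print(np.allclose(I @ g, f))              # True: f = I g = T^(-1) g
```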