
Introduction to Digital Signals-Signals and Systems-Lecture 04 Slides-Electrical and Computer Engineering, Slides of Signals and Systems Theory

Introduction to Digital Signals, Lossless Compression, Lossy Compression, Signal Amplitude as a Random Variable, Triangular Distribution, Gaussian Distribution, Uniform Distribution, Mean Value of a Random Signal, DC Value, Variance and Standard Deviation, Covariance and Correlation, White Gaussian Noise, Signals and Systems, Joseph Picone, Electrical and Computer Engineering, Mississippi State University, United States of America.

Typology: Slides

2011/2012

Uploaded on 02/17/2012

ECE 3163 – Signals and Systems

LECTURE 04: BASIC STATISTICAL MODELING OF DIGITAL SIGNALS

• Objectives: Definition of a Digital Signal; Random Variables; Probability Density Functions; Means and Moments; White Noise

• Resources: Wiki: Probability; Wiki: Random Variable; S.G.: Random Processes Tutorial; Blogspot: Probability Resources

ECE 3163: Lecture 04, Slide 1

• A discrete-time signal, x[n], is discrete in time but continuous in amplitude (e.g., the amplitude is represented as a real number).

• We often approximate real numbers in digital electronics using a floating-point numeric representation (e.g., 4-byte IEEE floating-point numbers).

  – For example, a 4-byte floating-point number uses 32 bits in its representation, which allows it to achieve 2^32 different values.

  – The simplest way to assign these values would be to use a linear representation: divide the range [Rmin, Rmax] into 2^32 equal steps. This is often referred to as linear quantization. Does this make sense?

  – Floating-point numbers actually use a logarithmic representation consisting of an exponent and a mantissa (e.g., the IEEE floating-point format).

• For many applications, such as encoding of audio and images, 32 bits per sample is overkill because humans can't distinguish that many unique levels.

• Lossless compression techniques (e.g., GIF images) reduce the number of bits without introducing any distortion.

• Lossy compression techniques (e.g., JPEG or MP3) reduce the number of bits significantly but introduce some amount of distortion.

• Signals whose amplitude and time scale are both discrete are referred to as digital signals and form the basis for the field of digital signal processing.
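Linear quantization as described above can be sketched in a few lines of NumPy. This is an illustrative example, not from the slides: the function name `linear_quantize` and the 3-bit/16-bit comparison are chosen here to show how quantization error shrinks as the number of levels grows.

```python
import numpy as np

def linear_quantize(x, r_min, r_max, bits):
    """Map each sample to the nearest of 2**bits equally spaced levels
    spanning [r_min, r_max] -- the linear quantization described above."""
    levels = 2 ** bits
    step = (r_max - r_min) / (levels - 1)   # spacing between adjacent levels
    idx = np.round((x - r_min) / step)      # index of the nearest level
    idx = np.clip(idx, 0, levels - 1)       # saturate out-of-range samples
    return r_min + idx * step

# A sine wave quantized to 3 bits (8 levels) shows large error;
# at 16 bits the maximum error is tiny (step/2 of a much finer grid).
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t)
x3 = linear_quantize(x, -1.0, 1.0, 3)
x16 = linear_quantize(x, -1.0, 1.0, 16)
print(np.max(np.abs(x - x3)), np.max(np.abs(x - x16)))
```

The maximum error of a linear quantizer is half a step, which is why adding one bit roughly halves the distortion; for audio, this is the sense in which 16 bits per sample can already be "enough."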
Introduction to Digital Signals

ECE 3163: Lecture 04, Slide 4

Mean Value of a Random Signal (DC Value)

• The mean value of a random variable can be computed by integrating its probability density function:

  $E\{x\} = \mu_x = \int_{-\infty}^{\infty} x\,p(x)\,dx$

• Example: for a continuous random variable uniformly distributed on [0, 1], $p(x) = 1$:

  $E\{x\} = \int_0^1 x\,p(x)\,dx = \int_0^1 x\,(1)\,dx = \left.\frac{x^2}{2}\right|_0^1 = \frac{1}{2} - 0 = \frac{1}{2}$

• The mean value of a discrete random variable can be computed in an analogous manner:

  $E[x] = \mu_x = \sum_i x_i\,p(x_i)$

• But this can also be computed using the sample mean (N = no. of samples):

  $E[x] \approx \frac{1}{N}\sum_i x_i$

  In fact, in more advanced coursework, you will learn that this is one of many ways to estimate the mean of a random variable (e.g., maximum likelihood).

• For a discrete uniformly distributed random variable taking the values $x_i = i$, $i = 0, 1, \ldots, M-1$, each with probability $1/M$:

  $E\{x\} = \sum_{i=0}^{M-1} i\left(\frac{1}{M}\right) = \left(\frac{1}{M}\right)\frac{(M-1)M}{2} = \frac{M-1}{2}$

• Soon we will learn that the average value of a signal is its DC value, or its frequency response at 0 Hz.

ECE 3163: Lecture 04, Slide 5

Variance and Standard Deviation

• The variance of a continuous random variable can be computed as:

  $\sigma_x^2 = E\{(x-\mu_x)^2\} = \int_{-\infty}^{\infty}(x-\mu_x)^2\,p(x)\,dx$

• For a random variable uniformly distributed on [0, 1] (so $\mu_x = \frac{1}{2}$):

  $\sigma_x^2 = \int_0^1 \left(x-\tfrac{1}{2}\right)^2 (1)\,dx = \left.\frac{(x-\frac{1}{2})^3}{3}\right|_0^1 = \frac{(\frac{1}{2})^3}{3} - \frac{(-\frac{1}{2})^3}{3} = \frac{1}{24} + \frac{1}{24} = \frac{1}{12}$

• For a discrete random variable:

  $\sigma_x^2 = E\{(x-\mu_x)^2\} = \sum_i (x_i-\mu_x)^2\,p(x_i)$

• Following the analogy of the sample mean, this can be estimated from the data:

  $\sigma_x^2 \approx \frac{1}{N}\sum_n (x[n]-\mu_x)^2$

• The standard deviation is just the square root of the variance.

• Soon we will see that the variance is related to the correlation structure of the signal, the power of the signal, and the power spectral density.

ECE 3163: Lecture 04, Slide 6

Covariance and Correlation

• We can define the covariance between two random variables as:

  $\mathrm{cov}(x,y) = E\{(x-\mu_x)(y-\mu_y)\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x-\mu_x)(y-\mu_y)\,p(x,y)\,dx\,dy$

• For time series, when y is simply a delayed version of x, this can be reduced to a simpler form known as the correlation of a signal:

  $R_{xx}(\tau) = E\{x(t)\,x(t+\tau)\}$

• For a discrete random variable representing the samples of a time series, we can estimate this directly from the signal as:

  $R_x[k] \approx \frac{1}{N}\sum_n x[n]\,x[n+k]$

• The correlation is often referred to as a second-order moment, with the expectation being the first-order moment. It is possible to define higher-order moments as well:

  $R_{x,x,x}[l,m,n] = \frac{1}{N}\sum x[l]\,x[m]\,x[n]$

• Two random variables are said to be uncorrelated if $\mathrm{cov}(x,y) = 0$.

• A Gaussian random variable is completely characterized by its mean and variance; all of its higher-order cumulants are zero, so the higher-order moments carry no additional information.
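The three estimators on these slides (sample mean, sample variance, and the correlation estimate $R_x[k] \approx \frac{1}{N}\sum_n x[n]\,x[n+k]$) can be checked numerically on white Gaussian noise. This is a sketch, not part of the slides: the helper name `corr`, the DC offset, and the lag values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
mu, sigma = 0.5, 2.0
x = rng.normal(mu, sigma, size=N)   # white Gaussian noise plus a DC value

# Sample mean: (1/N) * sum_n x[n]  -> estimates the DC value mu
mean_est = x.sum() / N

# Sample variance: (1/N) * sum_n (x[n] - mu_x)^2  -> estimates sigma^2
var_est = np.sum((x - mean_est) ** 2) / N

# Correlation estimate: R_x[k] ~ (1/N) * sum_n x[n] x[n+k]
def corr(x, k):
    n = len(x) - k
    return np.dot(x[:n], x[k:k + n]) / len(x)

r0 = corr(x, 0)   # mean-square value: sigma^2 + mu^2 (the power of the signal)
r7 = corr(x, 7)   # ~ mu^2: white-noise samples at different times are uncorrelated
print(mean_est, var_est, r0, r7)
```

Note how the estimates illustrate the slides' closing remarks: $R_x[0]$ equals the signal power (variance plus squared DC value), while for white noise every nonzero lag collapses to the DC contribution alone because the samples are uncorrelated.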