## Friday, June 14, 2013

### Tutorial: Conditional Random Field (CRF)

Conditional Random Field (CRF) is a probabilistic model for labeling a sequence of words. CRFs have found applications in address parsing, named entity recognition (NER), NP chunking, and similar sequence-labeling tasks.
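To make the idea concrete, here is a minimal sketch of how a linear-chain CRF scores a label sequence: weighted feature functions are summed over positions, exponentiated, and normalized over all possible label sequences. The feature functions, weights, and labels below are invented for illustration, not taken from the presentation.

```python
import math
from itertools import product

LABELS = ["B-LOC", "O"]  # toy label set for a tiny NER task

def features(x, y_prev, y_curr, i):
    """Binary feature functions at position i (hypothetical examples)."""
    word = x[i]
    return {
        f"word={word}|label={y_curr}": 1.0,
        f"trans={y_prev}->{y_curr}": 1.0,
        f"capitalized|label={y_curr}": 1.0 if word[0].isupper() else 0.0,
    }

def score(x, y, weights):
    """Unnormalized log-score of label sequence y for sentence x."""
    s, y_prev = 0.0, "<START>"
    for i, y_curr in enumerate(y):
        for name, value in features(x, y_prev, y_curr, i).items():
            s += weights.get(name, 0.0) * value
        y_prev = y_curr
    return s

def probability(x, y, weights):
    """P(y|x): exp(score) normalized over every possible label sequence."""
    z = sum(math.exp(score(x, list(yy), weights))
            for yy in product(LABELS, repeat=len(x)))
    return math.exp(score(x, y, weights)) / z

weights = {"word=Paris|label=B-LOC": 2.0, "trans=<START>->O": 0.5}
x = ["Paris", "is", "nice"]
print(probability(x, ["B-LOC", "O", "O"], weights))
```

Brute-force normalization like this is exponential in sentence length; real CRF implementations compute the normalizer with the forward algorithm.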

In order to understand how CRF works, it is important to understand a few basic concepts:

- what probabilistic models are
- what graphical models are
- the basics of probability
- basic probabilistic models such as Naive Bayes, maximum entropy, and hidden Markov models (HMMs)
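As a warm-up for the simplest model on that list, here is a minimal Naive Bayes text classifier: P(class | words) ∝ P(class) · Π P(word | class), with Laplace smoothing. This is an illustrative sketch, not code from the presentation.

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (list_of_words, label).
    Returns log-priors and Laplace-smoothed log-likelihoods per class."""
    class_counts = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in examples:
        word_counts[label].update(words)
        vocab.update(words)
    n = len(examples)
    model = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        model[label] = (
            math.log(class_counts[label] / n),
            {w: math.log((word_counts[label][w] + 1) / (total + len(vocab)))
             for w in vocab},
        )
    return model

def predict(model, words):
    """Pick the class with the highest posterior log-probability."""
    def log_post(label):
        prior, likelihood = model[label]
        return prior + sum(likelihood.get(w, 0.0) for w in words)
    return max(model, key=log_post)

model = train([
    (["buy", "cheap", "pills"], "spam"),
    (["cheap", "offer", "now"], "spam"),
    (["meeting", "at", "noon"], "ham"),
])
print(predict(model, ["cheap", "pills"]))  # → "spam"
```

The "naive" part is the assumption that words are independent given the class; CRFs drop that kind of independence assumption for sequence labeling.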

This is version 1.0 of the presentation I'm developing. I'm planning to add more details such as the Baum-Welch algorithm, perceptron/SGD training, and regularization.

Stay tuned, and feel free to suggest additional CRF topics to be included.

Posted by Sandeep Aparajit at 6:00 AM

Labels: Conditional Random Field , CRF , HMM , How to...? , Machine Learning , Maximum Entropy , Naive Bayes

## Friday, March 8, 2013

### Basics of Machine Learning (Video)

This is an introductory video course on machine learning (ML) that covers the basic theory, algorithms, and applications.

- Lecture 1: **The Learning Problem**
- Lecture 2: **Is Learning Feasible?**
- Lecture 3: **The Linear Model I**
- Lecture 4: **Error and Noise**
- Lecture 5: **Training versus Testing**
- Lecture 6: **Theory of Generalization**
- Lecture 7: **The VC Dimension**
- Lecture 8: **Bias-Variance Tradeoff**
- Lecture 9: **The Linear Model II**
- Lecture 10: **Neural Networks**
- Lecture 11: **Overfitting**
- Lecture 12: **Regularization**
- Lecture 13: **Validation**
- Lecture 14: **Support Vector Machines**
- Lecture 15: **Kernel Methods**
- Lecture 16: **Radial Basis Functions**
- Lecture 17: **Three Learning Principles**
- Lecture 18: **Epilogue**

Each lecture is one of three types: **theory** (mathematical), **technique** (practical), or **analysis** (conceptual).

Posted by Sandeep Aparajit at 7:18 AM

Labels: Machine Learning

## Monday, January 23, 2012

### Differential & Integral Calculus (lim, ∂, ∫) [Video Lectures]

Calculus is an important branch of mathematics. In this post we'll go through video lectures on differential and integral calculus, topics many students find among the hardest in college.

The "derivative" is a measure of how a function changes as its input changes. Loosely speaking, a derivative tells us how much one quantity changes in response to changes in another quantity; for example, the derivative of a moving object's position with respect to time is the object's instantaneous velocity, and the derivative of a room's temperature with respect to time is how fast the room is heating or cooling. The derivative of a function at a chosen input value describes the best linear approximation of the function near that input value. For a real-valued function of a single real variable, the derivative at a point equals the slope of the tangent line to the graph of the function at that point. In higher dimensions, the derivative of a function at a point is a linear transformation called the linearization.
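The "rate of change" idea can be checked numerically. This short sketch (my own illustration, not from the lectures) approximates the derivative with a central difference and recovers the velocity of an object whose position is s(t) = 5t²:

```python
def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# position s(t) = 5t^2, so velocity s'(t) = 10t
s = lambda t: 5 * t**2
print(derivative(s, 3.0))  # ≈ 30.0, the instantaneous velocity at t = 3
```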

Understanding "limits" is important before starting with derivatives or integration. Informally, a function f assigns an output f(x) to every input x. The function has a limit L at an input p if f(x) is "close" to L whenever x is "close" to p. In other words, f(x) becomes closer and closer to L as x moves closer and closer to p. More specifically, when f is applied to each input sufficiently close to p, the result is an output value that is arbitrarily close to L. If inputs close to p are taken to outputs that are very different, the limit is said not to exist.

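The classic example of this definition is f(x) = sin(x)/x, which is undefined at x = 0 yet has the limit 1 there. A quick numerical sketch (my own illustration):

```python
import math

def f(x):
    """sin(x)/x: undefined at 0, but f(x) -> 1 as x -> 0."""
    return math.sin(x) / x

for x in [0.1, 0.01, 0.001]:
    print(x, f(x))  # the outputs get closer and closer to 1
```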
If the above seems confusing, let's take a deeper look at these concepts using the video lectures:

A Big Thanks to UCLA for posting these lectures.

Posted by Sandeep Aparajit at 2:20 PM

Labels: Machine Learning , Maths
