What is a Cost Function?


Cost Function

We can measure the accuracy of our hypothesis function by using a cost function. This takes an average difference (actually a fancier version of an average) of all the results of the hypothesis with inputs from the x's and the actual outputs y's:

J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x_{i}) - y_{i} \right)^2

To break it apart, it is \frac{1}{2} \bar{x} where \bar{x} is the mean of the squares of h_\theta(x_{i}) - y_{i}, or the difference between the predicted value and the actual value.

This function is otherwise called the "squared error function" or "mean squared error". The mean is halved \left(\frac{1}{2}\right) as a convenience for the computation of gradient descent, because the derivative of the square term will cancel out the \frac{1}{2}.
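To make this concrete, here is a minimal sketch in Python of the squared error cost function described above. The function and variable names (compute_cost, theta0, theta1) and the tiny dataset are illustrative choices, not from the original lesson.

```python
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """Squared error cost: J = 1/(2m) * sum((h_theta(x_i) - y_i)^2).

    A sketch of the cost function described above; names and data
    are assumptions for illustration.
    """
    m = len(x)                           # number of training examples
    predictions = theta0 + theta1 * x    # h_theta(x) for every example
    errors = predictions - y             # predicted value minus actual value
    return (1 / (2 * m)) * np.sum(errors ** 2)

# Made-up toy dataset, purely for demonstration
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.5, 3.5])
print(compute_cost(0.0, 1.0, x, y))      # cost for theta0 = 0, theta1 = 1
```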



Formula

In this video, we'll define something called the cost function. This will let us figure out how to fit the best possible straight line to our data.

Linear Regression:

In linear regression, we have a training set like the one shown here. Recall that the notation m is the number of training examples, so maybe m equals 47. And the form of our hypothesis, which we use to make predictions, is this linear function: h_\theta(x) = \theta_0 + \theta_1 x.

To introduce a bit more terminology, \theta_0 and \theta_1 are what I call the model's parameters. What we are going to do in this video is talk about how to go about choosing these two parameter values, \theta_0 and \theta_1. With different choices of the parameters \theta_0 and \theta_1, we get different hypothesis functions.
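As a quick illustration, the sketch below shows how different choices of \theta_0 and \theta_1 give different hypothesis functions. The specific parameter values are made up for demonstration.

```python
# Minimal sketch of the linear hypothesis h_theta(x) = theta0 + theta1 * x.
# The parameter values below are illustrative assumptions only.
def hypothesis(theta0, theta1, x):
    return theta0 + theta1 * x

print(hypothesis(1.5, 0.0, 10))  # flat line: always predicts 1.5
print(hypothesis(0.0, 0.5, 10))  # line through the origin with slope 0.5
print(hypothesis(1.0, 0.5, 10))  # intercept 1.0, slope 0.5
```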
So in linear regression, what we are going to do is solve a minimization problem: minimize over \theta_0, \theta_1 the cost function J(\theta_0, \theta_1). This cost function is also called the squared error function, or sometimes the squared error cost function; taking the squares of the errors turns out to work well for most regression problems.
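The text earlier mentions gradient descent as the method whose computation the \frac{1}{2} factor simplifies. The sketch below shows one way such a minimization over \theta_0 and \theta_1 could look; the learning rate, iteration count, and toy data are assumptions for illustration, not values from the lesson.

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iters=1000):
    """Minimize J(theta0, theta1) by gradient descent (illustrative sketch)."""
    m = len(x)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iters):
        errors = (theta0 + theta1 * x) - y   # h_theta(x_i) - y_i
        # Partial derivatives of J; the 1/2 in J cancels the 2 from the square
        grad0 = (1 / m) * np.sum(errors)
        grad1 = (1 / m) * np.sum(errors * x)
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Same made-up toy dataset as above
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.5, 3.5])
print(gradient_descent(x, y))  # parameters of the best-fit line
```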

Conclusion:

Other cost functions will work fairly well, but the squared error cost function is probably the most commonly used one for regression problems.

Later in this class, we'll talk about alternative cost functions as well, but the choice we just described is a reasonable one to try for most linear regression problems.


